[music-dsp] idealized flat impact like sound

2016-07-27 Thread gm


Hi

I want to create a signal that's similar to a reverberant knocking or 
impact sound:
basically decaying white noise, but with a more compact onset, similar to 
a minimum phase signal,

and spectrally completely flat.

I am aware that's a contradiction.

Both the minimum phase impulse and fading random phase white noise are 
unsatisfactory.

The minimum phase impulse does not sound reverberant.

The random phase noise isn't strictly flat anymore when you window it 
with an exponentially decaying envelope,

and it also lacks a knocking impression.

I am also aware that a knocking impression comes from formants and 
pronounced modes
related to shape and material, and is thus not flat, which is another 
contradiction.


I am not sure what signal or phase alignment I am looking for.

Also, it's not a chirp, because a chirp sounds like a chirp.

What happens in a knock/impact besides pronounced modes or formants?
Somehow the phases seem to be aligned, similar to minimum phase, but 
then it's

also random and reverberant.


Any ideas?



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-27 Thread gm

(Hi Matt, we've met before at NI btw, briefly)

Thanks, I'll look these up; I think I browsed through some of this a few 
years ago, Rocchesso I think.


I already had a hammer model for the same thing, which is piano synthesis,
and a soundboard model.

For the moment I am more interested in spectral flatness.
I would like to synthesize decaying white noise that's completely flat,
basically the ideal reverb response in a way.

I want to figure out what details make a piano sound sound like a piano,
and how to exaggerate or idealize these;
that's one reason why I replaced the hammer model and soundboard model 
with white noise for now.


(It turns out that the fluctuations in the spectrum matter, but can also 
give an interesting touch

when the noise varies with time...)

Now I want to replace it with something really flat to figure out what
role some of the modes of different real soundboards have, if any, or if 
the impact sound is more important (if it is),
and what makes that impact, perceptually, in the case of the piano 
(where immediate collision sounds don't matter).


I also experimented with random phase noise of the soundboard spectrum 
vs. the recorded impact.
It makes a difference, but I am not sure what it is at the moment, 
perceptually and phase-wise.
A minimum phase version of the same impact seems to sound worse to me, 
for instance.

The reverberation seems quite important.

Another thing that's of importance seems to be the reflections between 
the hammer

and where the string is fixed, but you don't need an impact model for this;
they can be modeled with a truncated comb filter response... that's 
already separated out.


So my question for now is: how can we synthesize completely flat 
decaying noise?

(is it even possible?)


Am 27.07.2016 um 21:33 schrieb Matt Jackson:

There might also be something by Max Mathews or Curtis Roads.
I think I recall a chapter in the computer music tutorial.

Sent from a phone.


On 27.07.2016, at 20:47, Andy Farnell  wrote:

For impact/contact exciters you will find plenty
of empirical studies and theoretical models in the
literature by:

Davide Rocchesso
Bruno Giodano
Perry Cook

These are good initial paper authors to search

all best
Andy Farnell







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-28 Thread gm
My problem was that a short segment of random noise isn't spectrally 
straight-line flat.


If you feed this into a resonator (waveguide) you can hear a difference 
between one random grain and another one with a different random sequence.


This is usually a desired effect that makes the sound come alive,
but in my case I wanted to eliminate this for the moment.

And it doesn't help to synthesize a flat random sequence with FFT, since 
it's not flat after enveloping/windowing.

I managed to get a flatter response the following way:

create a segment of exp decay padded with silence
repeat:
 FFT (of double size) and set all magnitudes to 1
 iFFT
 set second half of time signal to zero
/repeat

It seems to converge to a flat, approximately exponentially decaying signal
(I have only done 20 or so iterations manually, so I am not sure how it 
behaves).
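
(For reference, a minimal numpy sketch of this iteration; it assumes the 
starting segment is exponentially decaying noise, as clarified in the 
follow-up, and an arbitrary iteration count:)

    import numpy as np

    def flatten_decay(n, t60_samples, iterations=50):
        # start: exponentially decaying noise of length n
        env = np.exp(-np.log(1000.0) * np.arange(n) / t60_samples)
        x = np.random.randn(n) * env
        for _ in range(iterations):
            X = np.fft.rfft(x, 2 * n)        # FFT of double size (zero-padded)
            X = np.exp(1j * np.angle(X))     # set all magnitudes to 1
            x = np.fft.irfft(X, 2 * n)[:n]   # keep first half, zero the rest
        return x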


Don't ask me why; maybe it's just a simple genetic algorithm kind of thing.

The result is a little bit strange however, slightly metallic,
and when you convolve it with itself several times you get some kind of 
multi-chirp, sound-wise,
where sparse narrow dips and phase delays appear at random places across 
the spectrum.


Similar to a spring or allpass chain, but with multiple random "resonances".


Am 28.07.2016 um 19:17 schrieb Tito Latini:

sorry, that's a decay, so out is "(1 - y) * rand()":

T = 1 / samplerate
p = exp(log(0.001) * T / t60)    # per-sample factor: -60 dB after t60 seconds

y = 1 + p*(y1 - 1)               # one-pole ramp from 0 towards 1
y1 = y
out = (1 - y) * rand(-1.0, 1.0) * gain   # (1 - y) decays exponentially



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-28 Thread gm

To clarify: it's a segment of exp decaying *noise* to start with.



I managed to get a flatter response the following way:

create a segment of exp decay padded with silence
repeat:
 FFT (of double size) and set all magnitudes to 1
 iFFT
 set second half of time signal to zero
/repeat


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-28 Thread gm
I used both the minimum phase version of a band-limited impulse and also 
the "empty" exp curve.


The exp decay curve has more energy towards DC depending on its width, 
but you don't hear that.


like the "empty" exponential decay segment and dirac impulse the minimum 
phase pulse lacks reverberation and sounds very dry and liveless when 
you dont have soundboard model, even though it can extend quite long


But it has a quick rise time, like 1 ms, and similar to the exp decay you 
can have a longer or shorter tail with more or less energy, and the 
onset shape is similar to a very quick real impact.
There is not much difference in practice to the decay curve, though; it's 
just one of the things I tried.

It also makes a tiny chirp, but you hardly hear that.
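
(A side note on the spectrum of the decay segment, which also bears on the 
question below about its dependence on T: the one-sided exponential 
x(t) = e^(-t/T) for t >= 0 transforms to X(w) = T/(1 + jwT), so 
|X(w)| = T/sqrt(1 + (wT)^2), a one-pole lowpass with corner at w = 1/T. 
That is the extra energy towards DC, and as T -> 0 the corner moves up 
and the spectrum flattens towards the Dirac case.)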



Am 28.07.2016 um 22:49 schrieb Andy Farnell:

Following the comments regarding the exponentially
modulated noise segment:

My experience is that all such actual segments will be
spectrally coloured, because of course they contain
a truncated set of random values.

The only theoretically "flat" exciter is the Dirac impulse.

But because it contains so little energy it's not that
practical for stimulating waveguides.

Better to construct a band-limited pulse from a finite
set of sinusoids right up to the Nyquist.
A problem is this will have a finite rise time.

A practical compromise I found is to use the exponential
decay segment, as it is, without a payload, and make it
jolly short. I guess as T -> 0 the behaviour tends towards
the Dirac pulse, but where T is just a few tens of samples
it works as a very clean, reliable exciter for waveguides.
(Indeed this is what you have in a lot of analogue percussion
synthesis)

Perhaps someone can show you what the spectrum is as
a function of T; it's not "flat", but it's a good trade-off
between a theoretically perfect impulse and a practical
signal.

cheers,
Andy



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] idealized flat impact like sound

2016-07-29 Thread gm


I think what I am looking for would be the perfect reverb.

So that's the question reformulated: how could you construct a perfectly 
flat short reverb?


It's the same problem.


Am 29.07.2016 um 12:18 schrieb Tito Latini:

An idea is to create a kind of "ideal" residual: i.e. the transient is
a band-limited impulse and an enveloped (maybe expdec) noise is added
after two, three or a few samples. The parameters are:

 - noise env
 - delay of the noise in samples
 - transient-to-noise ratio (% of transient, % of noise)

The transient (band-limited impulse) is spectrally flat and the noise
adds the reverberation (you could start with a very low level).
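
(A minimal sketch of this recipe; all parameter values here are invented 
for illustration:)

    import numpy as np

    def ideal_residual(n, noise_delay=3, t60_samples=2000, noise_mix=0.2):
        t = np.arange(n)
        # transient: band-limited impulse (tapered sinc below Nyquist)
        center = 16
        pulse = np.sinc(0.9 * (t - center)) * \
                np.exp(-0.5 * ((t - center) / 8.0) ** 2)
        # noise under an exponential-decay envelope, entering a few
        # samples after the transient
        noise = np.random.randn(n) * np.exp(-np.log(1000.0) * t / t60_samples)
        noise[:center + noise_delay] = 0.0
        # transient-to-noise ratio
        return (1.0 - noise_mix) * pulse + noise_mix * noise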



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-30 Thread gm



Am 30.07.2016 um 17:23 schrieb Tito Latini:

The other FIRs are not generally
allpass with all the possible input signals.

What a rip-off! That box is not a "Perfectly Flat Short Reverb".


Yes, I know... though I wasn't really aware until recently, tbh...

I actually tried synthesizing flat FIRs and wondered why they weren't 
allpass...


So, alternatively: a reverb with very dense modes that's perceptually flat,
with no fluctuations on a larger scale.

Just a short sequence of random numbers really exhibits large 
formant-like fluctuations.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-07-30 Thread gm


I tried this and FDNs but didn't get anything that's really noise-like 
from the start.


The reverb onset is never very ideal.

That's why some people suggest noise-like FIRs for early reflections.

I didn't try this very much though, because reverb design is very time 
consuming; I used reverbs I already had.

Maybe I have to look into this again.

Am 30.07.2016 um 19:20 schrieb Ethan Duni:

So like a cascade of allpass filters then?

Ethan D

On Fri, Jul 29, 2016 at 11:10 AM, gm wrote:



I think what I am looking for would be the perfect reverb.

So that's the question reformulated: how could you construct a
perfectly flat short reverb?

It's the same problem.





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] idealized flat impact like sound

2016-07-30 Thread gm

I think it's interesting for instance for early echoes in a reverb.
For longer sequences it seems to become very self-similar?

I only looked into the thing under "Additive recurrence", the rest is 
totally above my head, and played around with this a little bit, since 
it's basically a random number generator with "bad parameters" there.
I tried similar things before with the golden ratio, basically even the 
same thing, I just realized.

Like panning in a seemingly random fashion with golden ratio mod 1.

With little success in reverbs, though; for instance most of the time 
it's not a great

idea to just tune delay lengths to golden ratios...

But maybe it's useful for setting delay lengths in a different way?
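
(For reference, the additive recurrence mentioned above is just 
s_k = (s_0 + k*alpha) mod 1 with an irrational alpha; a small sketch, 
where using it for delay lengths is only the speculation from this post:)

    import numpy as np

    PHI = (1 + np.sqrt(5)) / 2          # golden ratio

    def additive_recurrence(n, alpha=PHI, seed=0.5):
        # low-discrepancy sequence: s_k = (seed + k*alpha) mod 1
        return (seed + np.arange(n) * alpha) % 1.0

    # e.g. spreading hypothetical delay lengths between 500 and 2000 samples:
    delays = (500 + 1500 * additive_recurrence(8)).astype(int)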

Just seeing the similarity between a classical reverb algorithm and 
random number generators,
with the feedback loop acting as the mod operator... I didn't see it like 
that before.


Did anybody build a reverb based on a random generator algorithm?
Or are reverbs just that, and it just never occurred to me?

What I also wonder is the difference between a sequence like that (or 
any random sequence)

and a sequence that's synthesized with FFT to be flat but with random phases.
I wonder what's better, in terms of what and why, when it comes to reverb 
and/or convolution.




Am 30.07.2016 um 19:57 schrieb Patric Schmitz:

Hi,

On 07/28/2016 08:43 PM, gm wrote:

My problem was that a short segment of random noise isn't spectrally
straight-line flat.

On 07/30/2016 07:22 PM, gm wrote:

Just a short sequence of random numbers really exhibits large
formant-like fluctuations.

I tried following this discussion even though, admittedly, most
of it is way over my head. Still, I wonder if the problem of
short random sample sets being too non-uniformly distributed
could be alleviated somehow, by not using white noise for the
samples, but what they call a low-discrepancy quasi- or subrandom
sequence of numbers.

https://en.wikipedia.org/wiki/Low-discrepancy_sequence

I heard about them in a different context, and it seems their
main property is that they converge to the uniform limit
distribution much more quickly than true random samples taken
from that distribution. Maybe they could be useful here to get a
spectrally flatter distribution from a smaller number of samples?

As said, I'm by far no expert in the field and most of what has
been said is above my level of understanding, so please feel free
to discard this as utter nonsense!

Best regards,
Patric Schmitz



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-08-01 Thread gm



Am 01.08.2016 um 22:55 schrieb Evan Balster:
The most essentially flat signal is a delta function or impulse, which 
is also phase-aligned.  Apply any all-pass filter or series thereof to 
the impulse, and the Fourier transform over infinite time will remain 
flat.  I recommend investigating Schroeder filters.


I already played with them, as well as FDNs.
Though Schroeder allpass filters in series (or reverbs in general) are 
not strictly flat, it's better than random.
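
(For reference, the classic Schroeder allpass section under discussion, 
as a minimal sketch; delay and g are free design parameters:)

    import numpy as np

    def schroeder_allpass(x, delay, g=0.5):
        # y[n] = -g*x[n] + x[n-D] + g*y[n-D]
        y = np.zeros(len(x))
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = -g * x[n] + xd + g * yd
        return y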


And it's a trade-off to have an impact-like onset.
You get that Gaussian-like smear, unless you set your diffusion 
coefficient high,
which also makes the responses longer. And the onset is a little bit 
unnatural.

(I know you can "nest" them and change that a little bit)

Either way, it comes down to reverb design... which is quite a trap to 
waste time in...

it's never finished, in a way, at least for me.

And related to reverbs, the question
- how do I create a spectrally flat, short, decaying, noise-like and 
impact-like sequence?

becomes interesting again, I think.

But maybe there is nothing that's better than Schroeder allpasses?
I started to use random sequences for early reflections but found that 
these color the sound too much, so I basically came to the same 
question. Though for reverb a more sparse noise would be better...


And for allpass delays the question remains
- how to design optimal length ratios?
That's why I made the slightly nonsensical remark about RNGs and reverbs 
the other day.


So far I just use my ears, assumptions and numerology.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] idealized flat impact like sound

2016-08-02 Thread gm



Am 02.08.2016 um 10:55 schrieb Uli Brueggemann:
Maybe I miss the real question of the topic but I have played around 
with creating a FIR filter:

1. generate white noise of a desired length
2. window it with an exponentially decaying envelope
3. apply some gain, e.g. 0.5
4. add a Dirac pulse at the first sample
The result is spectrally not flat, but:
5. compute the excess phase of the sum = allpass = spectrally flat, and 
use it


I can't get it to work; some questions:

When I convolve the original with the excess phase signal, shouldn't I 
get a minimum phase signal again?

(I don't.)

What is the expected wave shape for the excess phase signal then?
(I get an arbitrary mixed phase signal, not a one-sided decaying signal,
but that is also what I would have expected, though?)

Or do I need the unwrapped phase to calculate the excess phase?
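
(For reference, one common way to split a signal into minimum phase and 
excess phase (allpass) parts is the real-cepstrum method; a sketch, where 
details like the FFT size and magnitude floor are arbitrary choices:)

    import numpy as np

    def min_and_excess_phase(x, nfft=None):
        nfft = nfft or 2 * len(x)
        X = np.fft.fft(x, nfft)
        # real cepstrum of the log magnitude
        cep = np.fft.ifft(np.log(np.maximum(np.abs(X), 1e-12))).real
        # fold to a causal cepstrum -> minimum phase spectrum
        fold = np.zeros(nfft)
        fold[0] = cep[0]
        fold[1:nfft // 2] = 2 * cep[1:nfft // 2]
        fold[nfft // 2] = cep[nfft // 2]
        Xmin = np.exp(np.fft.fft(fold))
        Xap = X / Xmin            # excess phase (allpass) part, |Xap| = 1
        return np.fft.ifft(Xmin).real, np.fft.ifft(Xap).real

Note that convolving the minimum phase part with the excess phase part 
gives back the original (circularly), not a minimum phase signal; to 
undo the allpass part you would convolve with its time reverse instead.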



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] minBLEP parameters: grain design and duration?

2016-08-05 Thread gm



>> On 05 Aug 2016, at 5:40 , robert bristow-johnson 
 wrote:

>>
>> []
>>
>> 5. how is this question different from the FIR brickwall LPF design 
question for polyphase interpolation?

>
> For BLIT, these sub-sample delayed grains are usually integrated to 
get a saw/square/pwm signal.


i thought that you integrated the pulse train in real time.  but i 
dunno.  that's how i imagined BLIT was done.





When you integrate your BLIT you suppress the absolute level of the 
aliasing by -6 dB/octave along with the rest of the signal.
Though Ross was talking about BLEP, where you integrate offline; then you 
have the additional roll-off before sampling.


So it's less aliasing either way?
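
(For illustration, real-time integration of a BLIT into a sawtooth might 
look like this; a sketch, with the leak coefficient and DC handling as 
arbitrary choices:)

    import numpy as np

    def blit_to_saw(blit, leak=0.999):
        # leaky integrator: turns a bandlimited impulse train into a saw,
        # rolling everything (including aliasing) off by ~6 dB/octave
        out = np.empty_like(blit)
        acc = 0.0
        dc = blit.mean()        # remove DC so the integrator doesn't drift
        for i, v in enumerate(blit):
            acc = leak * acc + (v - dc)
            out[i] = acc
        return out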


Anyway, @ Ross, regarding the question whether 150-sample grains are long 
or short:

they are short if oversampled and long if not oversampled.
For comparison: we had to use only 4 samples for oscillator-sync 
transitions in a project...


If you want alias-free you should also (and maybe foremost?) look into 
"sinc M" in the paper you linked (Section 3.7). Basically a slightly 
squeezed truncated sinc that fits in your period seamlessly. This makes 
your design questions obsolete but has its drawbacks with sync etc.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Intellectual Property management in popular Digital Signal Processing

2016-08-07 Thread gm


Am 07.08.2016 um 15:33 schrieb Theo Verelst:
Some people seem to occupy themselves a bit more with obfuscating 
certain principles in (theoretical) DSP, and evil minds could 
(mis-?)construe that as attempts to steal intellectual property of others


Could you rephrase this or give an example?

You mean obfuscating with weasel words? In product descriptions or in 
patents?

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] is our favorite mailing list still happenin'?

2016-08-24 Thread gm


the archive is here https://lists.columbia.edu/pipermail/music-dsp/

I think I signed up here 
https://lists.columbia.edu/mailman/listinfo/music-dsp


(Btw, when I hit reply to your posts, your address appears in the To: field.
Do you want these extra personal copies, or is that just by chance?
It doesn't happen with all list members.)



Am 24.08.2016 um 09:12 schrieb robert bristow-johnson:


Doug (who is no longer at Columbia) sorta warned us this might happen.

mailing list archive is not available.  mailing list signup page 
(lacerating gossip lids) is not to be found either.


geez, i hope Douglas can find us another home server.


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread gm

Did you consider a reverb or an FFT time stretch algorithm?



Am 16.09.2016 um 17:48 schrieb Spencer Jackson:

Hi all:

First post on the list. Quite some time ago I set out to create a lv2
plugin re-creation of the electroharmonix freeze guitar effect. The
idea is that when you click the button it takes a short sample and
loops it for a drone like effect, sort of a granular synthesis
sustainer thing. (e.g. https://youtu.be/bPeeJrv9wb0?t=58)

I use an autocorrelation-like function to identify the wave period but
on looping I always have artifacts rather than a smooth sound like the
original I'm trying to emulate. I've tried some compression to get a
constant rms through the sample, tried various forms of crossfading,
tried layering several periods, and many combinations of these. I
ended up releasing it using 2 layers, compression, and a 64 sample
linear crossfade, but I've never been satisfied with the results and
have been trying more combinations. It works well on simple signals
but on something not perfectly periodic like a guitar chord it always
has the rhythmic noise of a poor loop.

I'm hoping either someone can help me find a bug in the code that's
spoiling the effect or a better approach. I've considered applying
subsample loop lengths, but I don't think that will help. The next
thing I could think of is taking the loop into the frequency domain
and removing all phase data so that it becomes a pure even function
which should loop nicely and still contain the same frequencies. I
thought I'd ask here for suggestions though, before spending too much
more time on it.

The GPL C code is here for review if anyone is curious:
https://github.com/ssj71/infamousPlugins/blob/master/src/stuck/stuck.c
I'm happy to offer more explanations of the code.

Thanks for your time.
_Spencer



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread gm



Am 16.09.2016 um 19:30 schrieb Spencer Jackson:

On Fri, Sep 16, 2016 at 11:24 AM, gm  wrote:

Did you consider a reverb or an FFT time stretch algorithm?


I haven't looked into an FFT algorithm. I'll have to read up on that,
but what do you mean with reverb? Would you feed the loop into a
reverb or apply some reverberant filter before looping?


The FFT approach is basically a reverb in the spectral domain.
In the time domain you would use an "endless" reverb without any damping 
filters
and a reverberation time that's as close to infinity as you can get away 
with without numerical rounding issues.
Then you just set the reverb time and dry/wet to none (or short) while 
the effect is off,
and to 1 when the effect is on; in-betweens are possible. Also you can 
modulate the

reverb time with the input for a kind of ducking effect
(you can do similar things with FFT, resetting phase and amplitude when 
the input is above a threshold).


The FFT approach has the sonic advantage that you start with the 
original sound;
otherwise it's quite similar. What you get is dispersing phases with the 
original amplitude spectrum.
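
(A minimal sketch of the FFT variant; it assumes one grabbed frame, Hann 
windows, and overlap-add done by the caller:)

    import numpy as np

    def make_freezer(frame):
        # grab one frame: keep its magnitude spectrum, then resynthesize
        # each hop with fresh random phases (the phases "disperse" while
        # the amplitude spectrum stays)
        mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        def next_hop():
            ph = np.random.uniform(-np.pi, np.pi, mags.shape)
            return np.fft.irfft(mags * np.exp(1j * ph), len(frame))
        return next_hop

    # overlap-add successive next_hop() outputs (e.g. 75% overlap,
    # Hann windowed) for a continuous sustained texture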


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread gm



Am 16.09.2016 um 19:54 schrieb Evan Balster:
 If you're taking the granular approach, I suggest randomizing as much 
as possible. If you want to avoid interference between the grains, try 
to synchronize them based on a cross-correlation 
.


The reverb approach does exactly that, without synchronisation.
Your "window" is the fade-in of the effect or reverberation time, and 
the fade-out of the input signal.

Then your allpass diffusors randomize that "grain" in and over time;
when you do the same with explicit grains you're just mimicking a 
reverb, with more overhead.


There is no need for sophisticated correlation algorithms to phase-align 
the grains,

which btw is the opposite of what you want in this case.



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread gm

I never tried the Freeverb algorithm.
Just from inspecting the flow chart I suspect it's rather colored, with 
all the comb filters.


A classical reverb algorithm would be allpass diffusors inside a comb 
filter loop, like this:

Input -> (+) -> AP -> AP -> AP -> Delay --+--> Out
          ^                               |
          +---------- feedback <----------+

Search for Dattorro, "Effect Design", for an example of this.

Designing reverbs is a huge topic of its own. What you can also try is 
an FDN
(Feedback Delay Network) structure, with the delays tuned to very low 
pitches spaced in semitones (see the sketch below),

which will give an effect similar to that of open strings in a piano.
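
(For illustration, delay lengths for such semitone-spaced low "pitches" 
could be derived like this; base pitch and count are arbitrary choices:)

    import numpy as np

    def fdn_delay_lengths(fs=48000, base_hz=27.5, count=8):
        # a delay of L samples in a feedback loop resonates at fs/L Hz;
        # space the loop "pitches" in semitones upward from base_hz
        freqs = base_hz * 2.0 ** (np.arange(count) / 12.0)
        return np.round(fs / freqs).astype(int)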


Am 16.09.2016 um 22:29 schrieb Spencer Jackson:

Wow, thanks all for the replies!

If I used a reverb would it end up giving the sustained voice too much
character? I was thinking of taking freeverb and removing the comb
filters and just using the allpass filters with 100% feedback thinking
it would make it a "cleaner" sustain. I'm leaning this way because I'd
like to keep it lightweight and I think this will be less overhead
than a phase vocoder. Is that a naive approach?

Thanks again,
_Spencer



On Fri, Sep 16, 2016 at 1:59 PM, Emanuel Landeholm
 wrote:

Simple OLA will produce warbles. I recommend a phase vocoder.





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-22 Thread gm


Am 22.09.2016 um 12:18 schrieb André Michelle:


How do I detect discontinuities? It is easy to see when printed 
visually, but I do not see how I can approach this with code. Do I need 
the 'complete' function at once to check, or can I do it at runtime 
for each sample? I think so, since you suggest that I can jump around 
within the function without aliasing? Because that would sound like a 
solution I wanted to have from the very beginning.


You "detect" them they way you construct them.
For instance you have a phase ramp, say from -.5 to .5, you know that 
the discontinuity happens
when your phase + frequency_step is > 0.5, and it happens in that 
fraction of a sample
when the phase would be 0.5, so it happens at (phase + frequency_step - 
0.5)/frequency_step fraction
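
(In code, that wrap detection might look like this; a sketch with 
made-up names:)

    def advance(phase, frequency_step):
        # phase ramp in [-0.5, 0.5): find the subsample position of the wrap
        new_phase = phase + frequency_step
        frac = None
        if new_phase >= 0.5:
            # fraction of a sample elapsed since the discontinuity
            frac = (new_phase - 0.5) / frequency_step
            new_phase -= 1.0
            # a band-limited correction would be placed offset by frac here
        return new_phase, frac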




I do not quite get this: C(1). Does it mean I have C(n) values of the 
function where C(1) is the second value?


It's about differentiability and smoothness:
"The function f is said to be of differentiability class C^k if the 
derivatives f', f'', ..., f^(k) exist and are continuous."

See https://en.wikipedia.org/wiki/Smoothness
But you can ignore this for now...



What frequency does the integrated sinc function have?


It has the same bandlimit as your waveform should have


What is a 'fraction of a sample'?

The jump in your sawtooth waveform happens within a fraction of a sample 
period, as explained above.
When you read your wavetables you also read them at fractions of a 
sample (and interpolate to get the value at this fraction of a sample); 
all your signal also exists between samples.



I am missing to many aspects of your suggestion. Any hints where to 
learn about this would be appreciated.


I also have a question: what is the benefit of having a synthesizer in a 
web browser?


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Bandlimited morphable waveform generation

2016-09-24 Thread gm



Am 24.09.2016 um 07:29 schrieb Ross Bencina:


I'm guessing it depends on whether you have an analytic method for 
generating the minBLEP.


It's because the minBLEP is asymmetrical and has a lag, I'd say.
That lag and asymmetry shift the transition and introduce a DC offset.
Also, with the minBLEP you still need some look-ahead to shift it in 
time, so it's not that much of an advantage.

(IIRC you still get that offset when you shift the BLEP.)

Some people seem to use strange asymmetrical BLEPs where the transition 
starts immediately, but I don't know what they are doing.

Maybe someone on this board (cough) can shed some light on this... ;)
Some of Native Instruments Reaktor's built-in oscillators also use a 
strange BLEP which doesn't seem to do much.





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer for pitch detection?

2017-02-07 Thread gm

Can you use this for pitch detection?

Convert to phase and use its derivative for instantaneous frequency?

This and a lowpass on the magnitude, as has been discussed, should make
a combined pitch and amplitude tracker, no?
Or do you run into the same problem as had been discussed with the 
magnitude?
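
(For reference, a minimal sketch of that idea via the analytic signal; 
scipy.signal.hilbert returns the analytic signal:)

    import numpy as np
    from scipy.signal import hilbert

    def track(x, fs):
        z = hilbert(x)                              # analytic signal
        phase = np.unwrap(np.angle(z))
        freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous freq, Hz
        amp = np.abs(z)                             # instantaneous amplitude
        return freq, amp            # both still want lowpassing, as discussed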

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer for pitch detection?

2017-02-08 Thread gm
Now I remember there was a paper by Miller Puckette (I believe) about 
this; I don't know what it is called.
It works quite well when you lowpass the input adaptively with the 
detected pitch

and also lowpass the detected pitch.
I used ~30 Hz and SVFs for lowpassing, and it's ok-ish.
This makes me wonder if you can use a simpler pseudo Hilbert transform 
that only shifts phases around the

frequency range you're looking at, since the rest of the spectrum is 
discarded.
But I am not sure how to set up the allpass filters; maybe it's 
sufficient to use a single allpass then.





Am 07.02.2017 um 18:31 schrieb STEFFAN DIEDRICHSEN:
You can use the phase directly as a sawtooth oscillator. It sounds 
like a weird tracking oscillator from back in the days, but it’s 
surprising, how musical the artefacts are.


Steffan

On 07.02.2017|KW6, at 17:34, robert bristow-johnson wrote:


using this Hilbert, analytic signal thing for frequency detection 
works only for pure sinusoids (that are amplitude-modulated and/or 
frequency-modulated).  it's really unpredictable (i may be wrong, 
someone might have math that predicts) what the instantaneous phase 
coming out of this is for a complex signal going in.







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] ± 45° Hilbert transformer for pitch detection?

2017-02-08 Thread gm
Here's a demo soundfile of the pitch tracker with just one adaptive 
allpass, and an adaptive lowpass on the pitch and on the input.


https://soundcloud.com/magnetic_winter/adapitve-ap-pitchtrack-1-01/s-JHfNQ

Amplitude is tracked with the lowpassed amplitude of a Hilbert transform.

(File will be deleted again soon, sorry for the archive.)

Do you think it's useful?

The adaptive allpass doesn't work very well, I assume due to its long 
impulse response,


but I think it could be useful for punk stuff aka Eurorack.




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] ± 45° Hilbert transformer using pair of IIR APFs

2017-02-09 Thread gm



Am 09.02.2017 um 14:15 schrieb Theo Verelst:
The idea of estimating a single sine wave frequency, amplitude and 
phase with a short and easy as possible filter appeals to me though.
Did you listen to the example I posted? Do you think it's useful? Or too 
many artefacts?


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer for pitch detection?

2017-02-09 Thread gm

Here is another test with more difficult input.
It also works on drums, kind of:

https://soundcloud.com/magnetic_winter/adaptive-ap-pitchtrack-2/s-FCoKI
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] ± 45° Hilbert transformer for pitch detection?

2017-02-10 Thread gm

This Kalman filtering is over my head, unfortunately.

But there are also artefacts from modulating the filters; I am not sure 
whether it would be worth the effort

to improve the estimate with Kalman filtering in this case.

The algorithm also finds a matching pitch on chords in some cases, and it 
works surprisingly well with music sometimes,
( 
https://soundcloud.com/magnetic_winter/adaptive-ap-pitchtrack-w-poly-input/s-dlOyV 
)

so I am not sure if the jitter has the usual reasons for false estimates.
I can't think of anything time domain based that does that?
On the other hand, it's not working well (not much better) with easy and 
tame monophonic input.


But given its simplicity and tolerance to various inputs it might be 
useful for modular hardware.

I am not sure, I am not a modular guy myself.
And I don't know if you can do it all analog easily, since you need the 
atan2 and phase unwrapping.


I wonder if a cepstrum based pitch estimator would find the same pitches 
on polyphonic input.



Am 09.02.2017 um 20:03 schrieb Evan Balster:

That jitter, eh? https://en.wikipedia.org/wiki/Kalman_filter

Your algorithm won't work for general pitch sources, because many in 
the wild will lack a prominent fundamental frequency. That said, it's 
pretty fun and some good creative mischief might be had with it.  For 
example, try multiplying a sine wave at 1/3 or 1/4 the detected pitch 
by the input signal, and then mixing some of the dry signal back in...


– Evan Balster
creator of imitone 



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] tracking drum partials

2017-08-07 Thread gm


There is also "Science of Percussion Instruments" by Rossing.


Am 07.08.2017 um 09:24 schrieb Jacob Møller Hjerrild:

Hi Thomas,
See if you can look up the book "The physics of musical instruments", 
by Fletcher and Rossing.
I can see that there is a chapter on drums in it. It might be of use 
to you!


Best regards
Jacob



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Reverb, magic numbers and random generators

2017-09-27 Thread gm


I have this idée fixe that a reverb bears some resemblance to some 
types of random number generators, especially the lagged Fibonacci 
generator.


Consider the simplified model reverb block:

     +-> [AP Diffusor AP1] -> [AP Diffusor AP2] -> [Delay D] --+-->
     |                                                         |
     +------------------------<--------------------------------+

and the (lagged) Fibonacci generator:

x[n] = x[n-j] + x[n-k] (mod m)

The delay and feedback are similar to a modulus operation (wrapping) in 
that
the signal is "folded", and this creates similar kinds of patterns if 
you regard the

delay length as a period.
(Convolution is called "Faltung", i.e. "folding", in German, btw.)

For instance, if the delay length of the allpass diffusor is set to 0.6 
times the loop delay length,
you will get an impulse pattern in the period that is related to the 
pattern of the operation

x[n] = x[n-1] + 0.6 (mod 1) if you graph that on a tile.

And the quest in reverb design is to find relationships for the AP delays
that result in smooth, even and quasirandom impulse responses.
A good test is the autocorrelation function, which should ideally be an 
impulse on a uniform noise floor.
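
(For reference, that test in a few lines of numpy, as a sketch:)

    import numpy as np

    def ir_autocorr(ir):
        # normalized autocorrelation of an impulse response; for a well
        # diffused reverb this should be a single spike at lag 0 over a
        # flat, low noise floor
        n = len(ir)
        ac = np.correlate(ir, ir, mode='full')[n - 1:]
        return ac / ac[0]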


So my idea was to relate the delay time D to m and set the AP delays to 
D*(Number/m),

where Number is one of the suggested numbers j and k for the Fibonacci 
generator.

The results however were mixed, and I can't say they were better than 
setting the

times to the arbitrary values I have been using before.
(Which were based on some crude assumptions about distributing the 
initial impulse as fast as possible, fine tuning by ear, and rational 
coprime approximations for voodoo.)
The results were not too bad either, so they are different from random, 
because the numbers Number/m
have certain values, and their values are actually somewhat similar to 
the values I was using.


Any ideas on that?
Does any of this make sense?
Suggestions?
Improvements?
How do you determine your diffusion delay times?
What would be ideal AP delay time ratios for the simplified model reverb 
above?

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-28 Thread gm


And here's how I've been doing it before the RNG approach. I present you:


The Go strategy of impulse spacing

If the delay loop period is 1, in a first step this places the impulses 
so that
consecutive impulses fall exactly in between already delayed impulses 
within the first periods,

by setting the ratio "a" according to

N*a mod 1 = a/2  and  N*a mod 1 = 1 - a/2  for N = 2, 3, 4...

which gives the series a = 2/(2N-1) and a = 4/(2N+1):

2/3, 2/5, 2/7, 2/9... and 4/5, 4/7, 4/9, 4/11...
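
(The two series in code, for reference:)

    # the two ratio series of the "Go" strategy
    a_inner = [2 / (2 * n - 1) for n in range(2, 6)]   # 2/3, 2/5, 2/7, 2/9
    a_outer = [4 / (2 * n + 1) for n in range(2, 6)]   # 4/5, 4/7, 4/9, 4/11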

Note that reciprocals work in a similar way.
The first delay in this strategy can also be set to a = 1/2, which gives 
ratios of
0.5, 2/3 and 0.8, or pitch differences of -12, -7.02 and -3.86 
semitones.

We see the octave is neatly divided by this strategy.

With rational ratios like this, the pattern would repeat quickly and 
impulses would fall

exactly on delayed impulses after a few iterations.
Therefore we now carefully detune the ratios so that consecutive 
repetition cycles

do not coincide.

There are also strategies for detuning and for avoiding beating and 
flanging,
as well as certain magic numbers which fulfill these and additional 
criteria.

Once a satisfying couple or triplet has been found, the ratios can be 
reused
on additional early diffusion stages, scaled by a matching strategy
like Schroeder's 1/3^n scaling.

Comments?








___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-28 Thread gm



Am 28.09.2017 um 17:18 schrieb Martin Lind:

To get a realistic (or a musical one, for that matter) sounding reverb will 
take thousands of listening tests with various test signals - I haven't 
seen any 'automated' or any particular strategy for tuning reverbs in the 
wild other than extensive listening tests. The AP delay lines get longer 
for each segment when connected in series, but I don't believe I have seen 
an overall strategy for the ratio, and it's not particularly important to 
use primes either. It's obvious that the output taps need a ping-pong 
behavior.

The reduction to 2 APs in the first post was mainly
to match the RNG structure, and for a simplified example.

I use, for instance, 2-3 APs in two channels with modulation and a mixing 
matrix etc.,
plus early diffusion stages and/or sparse FIRs outside the loop, and 
all these things.


But this ratio scheme actually /is/ the result of thousands of listening 
tests,

some years of reverb building attempts, and lots of sneaking into
the reverbs of others...

I found that the exact same ratios, +- some cents, are used in a nice 
reverb from a well-known company
that was built for efficiency, whose designer I know and who tweaks them 
by ear only, AFAIK.


Coincidence? I think not. ;)

You still have to invest time to detune the ratios optimally
and lots of time to design your reverbs, these are just starting points.

But as I said there are strategies for that as well:

For instance you can detune 0.8 by ~ 19 cents to -1/(1-SQRT(5))
which is related to the Golden Ratio and should never repeat,
it's off enough to avoid beating or flanging
but still close enough to 4/5 to increase the echo density immediately...
And this rationale works in all sizes.

Similar numbers exist for diffusion ratios; for instance 0.618... will 
give you the flattest response possible, and 0.707... an exponential 
decay of the impulses...



After lots of tweaking I have a reverb that works well for both rooms 
and large spaces.

I also use this as a late stage for a very nice plate reverb, for instance;
to me it's become a basic building block now.

And I found that for some lofty reverbs, only 2 APs in two channels in a 
late stage are sufficient

to sustain the sound if it's already decorrelated when it enters the loop,
when you have the right ratios for the AP delay lengths.

/"Don't be afraid of things because they are easy to do"/ - Brian Eno

Of course there must be optimal ratios, because there are also shitty 
ratios that don't work from the start.
And that's why I was curious how the RNG approach relates to my current 
strategy.
strategy


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-28 Thread gm

Now that I had to explain it, I realize a few more things.
It has some interesting properties, not just for the echo density but 
also for the phase delays

(of course these are related somehow).
The untuned pitches are [-12], -7.02, -15.86, -21.68 ... and -3.86, 
-9.68, -14.04 ... and inverted intervals.


But the reciprocals of the ratios before detuning, which are directly 
related to the spacing of the comb-like effect of the phase delays, are:


1.5, 2.5, 3.5,... and 1.25, 1.75, 2.25,...

This gives you two evenly distributed "manglings" of the phase delay 
maxima, with regular maximum delay peaks on a frequency scale
(skewed by each delay, so there is an increasing delay of the whole 
range, and two series superimposed).


I wasn't aware of this before.
The question is whether that's a good thing or a bad thing,
because these are also related to the period of the loop, although this 
would change somewhat after retuning,

but not much.

I assume it's a good thing though, because the alternative would be an 
arbitrary spacing of the delay maxima

with even larger gaps,
or a totally regular spacing in frequency, which results in a uniform 
delay ratio (identical pitch step) for all delays,

which is not desired either.

But it doesn't seem optimal either, because it's not regular but two 
series with larger and smaller distances between the delay maxima.


Another possibility would be to have the delay maxima distributed evenly 
on a log scale, maybe.


But still, the time evolution of the scheme seems unmatched, unless I 
find better series with the RNG approach.





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #3 the lagged Fibonacci

2017-09-28 Thread gm
Now back to the original question: why doesn't the scheme that follows 
the lagged Fibonacci generator achieve better results than my "Go" method?


Somehow the analogy between the simplified model

     +-> [AP Diffusor AP1] -> [AP Diffusor AP2] -> [Delay D] --+-->
     |                                                         |
     +------------------------<--------------------------------+

and the (lagged) Fibonacci generator

x[n] = x[n-j] + x[n-k] (mod m)

is flawed; they are not identical but only vaguely similar. If you see 
that at all; I am a pretty fuzzy thinker, if you haven't noticed yet.


But still I believe that optimal j/m and k/m exist that achieve an even 
better
distribution than the Go scheme, and work by a similar chaos mechanism 
as the RNG does.


Similar to my retuned ratio for 4/5 of -1/(1-SQRT(5)), j/m and k/m are 
said to be related to the Golden Ratio
(but not identical, and I am not sure how) and are somewhat similar in 
magnitude to the ratios useful in a reverb.


For instance 7/(2^4), 10/(2^4) gives 0.4375 and 0.625,

or 1279/(2^11), 418/(2^11) gives 0.62451 and 0.20410,

and similar; you don't get 0.9 or 0.1, for instance.

So one idea is to find ratios that meet criteria for both schemes, for 
example.


But possibly, since the LFG is designed to give fluctuating magnitudes
and the Go method is designed to give distributed pulses, both 
approaches don't match.


I am posting this mostly for inspiration, hoping that someone else will 
find interesting solutions
and insights. I am positive that someone here knows a little bit about 
chaos theory and things like that.







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #3 the lagged Fibonacci

2017-09-28 Thread gm

Another idea is to alter the Go method as follows:

instead of

N*a mod 1 = a/2

use

N*a mod 1 = a*0.618... and N*a mod 1 = 1 - a*0.382... respectively,

to get rid of the detuning procedure.
A quick listening test seems promising, but I haven't investigated it in 
depth yet.





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-29 Thread gm


Well, maybe it is nonsense, I admit that.
The whole approach is pretty naive, and that's why I was reluctant to 
post it.

It worked pretty well, though this might be coincidence.

But if you can find great ratios manually, there must be reasons why 
they are great

and better than those you dismissed.

I haven't found these ratios in other reverbs but one, but I have 
noticed that some work
better than others - and these worked better - they diffuse faster and 
more randomly.


It's interesting that there seems to be no literature about it.
Schroeder gives 100ms/(3^n) as a guideline, and some people even suggest
distributing the lengths randomly for FDNs.
Others suggest using room aspect ratios.
Others suggest to use room aspect ratios.

None of that is very satisfying.

Some ratios may be "bad" but still musically interesting, for instance 
exhibit a pronounced echo after some time.

I would like to understand and control such things completely.


Am 29.09.2017 um 09:07 schrieb Martin Lind:


That’s great!

I haven’t been so fortunately in my work until now – so I have to go 
the long way with extensive tests each time. I have analyzed some 
reverbs, but didn’t found any overall rule regarding either delay 
ratios or feedback ratios – maybe I didn’t look closed enough.




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-29 Thread gm
And, "The simplest digital reverberator is nothing more than a delay of 
30 msec."




Am 29.09.2017 um 13:16 schrieb STEFFAN DIEDRICHSEN:
Maybe that’s because of Hal Chamberlin, who wrote in his book “Musical 
Applications of Microprocessors”, 2nd ed., p. 508:


“Perhaps the simplest, yet most effective, digital signal-processing 
function is the simulation of reverberation”.


There you are. ;-)

Best,

Steffan




On 29.09.2017|KW39, at 12:47, gm wrote:


It's interesting that there seems to be no literature about it.






___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Reverb, magic numbers and random generators #3 the lagged Fibonacci

2017-09-29 Thread gm

Am 29.09.2017 um 02:48 schrieb gm:

Another idea is to alter the Go method as follows

instead of

Na mod 1 = a/2

Na mod 1 = a*0.618... and Na mod 1 = 1- a*0.382... respectively

Some observations:

It's the same as 1/(1 + 0.382..) for N=2

This seems to do what Fibonacci does: it fills the line evenly.
This seems good for long term evolution, since it's as evenly distributed 
as possible,
but bad for short term evolution, since it appears as some kind of order 
at first;

so it's smooth in the long tail, but takes some time to diffuse.
I would prefer a random distribution between pulses at the start.

Recently there were a couple of articles about distribution patterns,
like those of cells in the retina, and there is a WP article about that.

But I can't remember what it was called and can't find it.

Does anybody know what I am thinking about?
Maybe that's a starting point...
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-29 Thread gm

It's a totally naive layman's approach.
I hope the formatting stays in place.

The feedback delay in the loop folds the signal back,
so we have periods of a comb filter:

|      |      |      |
|______|______|______|___

Now we want to fill the period densely with impulses.

A first, bad idea is to place the first impulse exactly in the middle;

that would be a ratio for the allpass delay of 0.5 with respect to the 
comb filter.

It means that the second impulse falls exactly on the period:

|      .      |
|______.______|___


The next idea is to place the impulse so that after the second cycle
it exactly fills the free space between the first impulse and the period,

exactly in the middle between the first impulse and the period:

0 ....... a/2 ....... a ....... 1
          (2)         (1)

(impulse 1 at a, impulse 2 at 2a mod 1 = a/2)

This means we need a ratio "a" for the allpass delay with respect to the 
comb filter loop that fulfills:

2a - 1 = a/2

where 1 is the period of the comb filter.
Alternatively, to place it on the other side, we need:

2a - 1 = 1 - a/2


0 ....... 1-a/2 ... a . 1
           (2)     (1)

This gives ratios of 0.5, 2/3 and 0.8.

These are bad ratios since they have very small common multiples with 
the loop period.
So we detune them slightly so they are never in sync with the loop 
period or each other.

That was my very naive approach, and surprisingly it worked.


The next idea is to place the second impulse not in the middle of the 
free space,

but at the golden ratio with respect to the first impulse:

0 ....... a*0.618 ....... a ....... 1
            (2)           (1)

2a - 1 = a*0.618...

or

N*a mod 1 = a*0.618..

or if you prefer the exact solution:

a = (1 + SQRT(5)) / ( SQRT(5)*N + N - 2)

which is ~0.723607, the same as 1/(1 + 0.382...), i.e. 1/(N - 0.618...),

where N is the number of the impulse; that means instead of placing the 
2nd impulse at a*0.618

we can also place the 3rd, 4th etc., for shorter AP diffusors.

(And again we can also fill the other side of the first impulse, with 
0.839643.
And the solution for N = 1 is 2.618..., and we can use the reciprocal 
0.381... to place a first impulse.)


The pattern this gives for 0.72... is regular but evenly distributed, so 
that each pulse
falls on a free space, just like on a Fibonacci flower pattern each 
petal falls on a free space,

forever.
(I have only estimated the first few periods manually, and it appeared 
like that.
It's hard to identify in the impulse response, since I test a loop with 3 
APs.)
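
(A quick way to check this numerically, as a sketch:)

    # positions of the first echoes within the loop period, for the
    # golden-section ratio derived above
    a = 0.723607
    print(sorted((n * a) % 1.0 for n in range(1, 14)))
    # successive points keep landing in the larger remaining gaps,
    # like petals in a Fibonacci flower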


The regularity is a bad thing, but the even distribution seems like a 
good thing (?).
I assume it doesn't even make a huge difference compared to using 0.618.. 
as the ratio, though it seemed to sound better.

(And if you use 0.618, what do you use for the other APs?)

So it's not the solution I am looking for, but interesting nevertheless.

I believe that instant and well distributed echo density is a desired 
property

and I assume that the more noise-like the response is as a time series
the better it works in the frequency/phase domain as well.

For instance you can make noise loops with randomizing all phases by FFT 
in circular convolution

that sound very reverberated.





Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-29 Thread gm



Am 29.09.2017 um 17:50 schrieb gm:
For instance you can make noise loops with randomizing all phases by 
FFT in circular convolution

that sound very reverberated.


to clarify: I meant noise loops from sample material, a kind of time 
stretch, but with totally uncorrelated phases




Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-09-30 Thread gm



Am 29.09.2017 um 17:50 schrieb gm:

It's a totally naive layman's approach


I found one paper on the topic, they use a structure similar to 
the Schroeder design but with nested AP filters:


3 Parallel Combs -> 3 Nested APs -> Lowpass ->

for room reverb.
They used a genetic algorithm to optimize.

Their ratio of the second AP to the first is ~2/3 and of the third it's 
~2/7, values also given by my "Go" approach.


The detuning is in the same ballpark as well.

Explain that.





Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-10-01 Thread gm

Am 30.09.2017 um 22:44 schrieb Stefan Sullivan:

Sometimes the simplest approach is the best approach. Sounds like a 
good reverb paper to me. Some user evaluation and references to 
standard papers and 😁



That would be a paper on numerology then...

I generalized a bit:

Na - 1 = a*g

a = 1 / (N-g) ; which gives a = 2/3, 2/5, 2/7, 2/9... for g = 1/2
g = N - 1/a
N = 1/a + g

And for the other side:

Na - 1 = 1 - a*g

a = 2 / (N + g) ; which gives a = 4/5, 4/7, 4/9... for g = 1/2
g = 2/a - N
N = 2/a -g

N is the number of the Nth impulse and g is the time scaling
in respect to the first impulse modulo 1
and a is the ratio to the loop delay which is 1:

    D  2D
| 1 |  2    |
| | |  |  1 |
|_|_|__|__|_|_
   g___|  |
   {__|

   a__| |
   {|

Now for some more numerology: this seems to ask for something like the 
Golden Ratio,
or similar. But in a paper where they used genetic 
algorithms to optimize a Schroeder-type reverb with nested APs, one ratio is:


329 / 430, which is ~0.7651163 and gives g ~= 0.69309 for N=2

which is suspiciously close to ln(2)...

So I tested a family of numbers based on a = ln(2) and they are not bad
But what would that mean, if it means anything?

I assume it means nothing.

I also assume that there are several "best" islands of ratio families and 
that their values are not other magic numbers.


Also this doesn't take the actual impulse values into account,
nor 2nd order impulses from convolving one AP with the other(s).

I also made 2D plots for the first order patterns that emerge;
for some values of g it's pretty ordered while for others it seems 
rather chaotic,

but that doesn't necessarily mean a thing for the sound.












Re: [music-dsp] Reverb, magic numbers and random generators #2 the Go approach

2017-10-01 Thread gm



Am 01.10.2017 um 16:52 schrieb gm:

So I tested a family of numbers based on a = ln(2)


that should read g = ln(2); (a ~= 0.76520)
It seems one of the best, but why?

Counterintuitively, there is no solution for g=a for N = 2 (except g=a=1);
(the solution for g=a and N=3 is 1/golden ratio )



Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-01 Thread gm

Am 01.10.2017 um 18:35 schrieb gm:

Counterintuitively, there is no solution for g=a for N = 2 (except g=a=1);
(the solution for g=a and N=3 is 1/golden ratio )

make that phi^2 = 0.382... etc


For those who didn't follow, after all this I now postulate that

*ratio = 1/(N + 1 - ln(2))*

with N = number of the allpass delay and ratio the allpass delay length 
ratio in respect to the loop delay


gives the ideal ratios for the smoothest reverb response for allpass 
chains and allpass + delay loops for example like in the combined structure:



[APn]->...->[AP5]-->[AP4]--+-->[AP3]-->[AP2]-->[AP1]-->[Delay]--+-->
                           ^                                    |
                           |                                    |
                           +------------------------------------+
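
To make that concrete, a small Python sketch (my own; the 100 ms loop 
delay and the sample rate are made-up example values) that turns the 
postulated ratio into allpass delay lengths:

import math

SR = 48000
loop_delay = int(0.100 * SR)  # hypothetical loop delay in samples

for N in range(1, 6):
    # the postulated ratio for allpass number N
    ratio = 1.0 / (N + 1 - math.log(2))
    print("AP", N, " ratio", round(ratio, 5),
          " delay", round(loop_delay * ratio), "samples")

# AP 1 gets ratio ~0.76520, AP 2 ~0.43349, AP 3 ~0.30240, ...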

while other ratios that follow

Na mod 1 = a*g
a = 1 / (N-g)

(lower series)
or

Na mod 1 = 1- a*g
a = 2 / (N + g)

(upper series)

with N the number of the Nth impulse and g the time scaling of the 
impulse in respect to the first delayed impulse


are still of interest, for instance with
g = 1/2 and a_1,2,3... = a_1,2,3... * detune factor 1,2,3...,
or g = 1/golden ratio squared (0.382..);
additions of reciprocals, like a = 0.5 for the g = 1/2 series, or a 
combination of the

lower and upper series are also possible.

Can someone explain the result for g = ln(2) and ratio = 1/(N + 1 - ln(2)) 
to me?

Or give a better formula or value?



BTW it doesn't mean it's the "best" reverb, musically, but it seems to give 
the smoothest values.
For shorter reverbs other values, for instance the mixed series with 
~0.5, ~2/3, ~4/5 plus detuning, might be better.














Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-01 Thread gm

So...
Heres my "paper", a very sloppy very first draft, several figures and 
images missing and too long.


http://www.voxangelica.net/transfer/magic%20numbers%20for%20reverb%20design%203b.pdf

questions, comments, improvements, critique are very welcome.
But is it even worth writing a paper about that? It's just plain simple:

The perfect allpass and echo comes at *1/(N+1 -ln(2)).*

Formal proof outstanding.

And if you hack & crack why it's 1/(N+1 - ln(2)) exactly you'll get 
76.52 % of the fame.

Or 99 % even.

Imagine that this may lead to perfect acoustic rooms as well...
Everywhere in the world they will build rooms that bear your name, for 
millennia to come!

So, yes, participate please. ;)

I assume it has to do with fractional expansion but that paragraph is 
still missing in the paper.

I have no idea about math tbh, but I'd love to understand that.


Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-01 Thread gm



Am 02.10.2017 um 00:45 schrieb gm:


Formal proof outstanding.

sorry, weird Germanism, read that as "missing" please



Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-01 Thread gm

and here's the impulse response, large 4 APs Early -> 3 AP Loop

it's pretty smooth without tweaking anything manually

https://soundcloud.com/traumlos_kalt/whd-ln2-impresponse/s-d1ArU

the autocorrelation and autoconvolution are also very good


Am 02.10.2017 um 00:45 schrieb gm:

So...
Heres my "paper", a very sloppy very first draft, several figures and 
images missing and too long.


http://www.voxangelica.net/transfer/magic%20numbers%20for%20reverb%20design%203b.pdf

questions, comments, improvements, critique are very welcome.
But is it even worth writing a paper about that? It's just plain simple:

The perfect allpass and echo comes at *1/(N+1 -ln(2)).*

Formal proof outstanding.

And if you hack & crack why it's 1/(N+1 - ln(2)) exactly you'll get 
76.52 % of the fame.

Or 99 % even.

Imagine that this may lead to perfect acoustic rooms as well...
Everywhere in the world they will build rooms that bear your name, for 
millennia to come!

So, yes, participate please. ;)

I assume it has to do with fractional expansion but that paragraph is 
still missing in the paper.

I have no idea about math tbh, but I'd love to understand that.




Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-02 Thread gm

Am 02.10.2017 um 04:42 schrieb Stefan Sullivan:
Forgive me if you said this already, but did you try negative feedback 
values? I wonder what that does to the aesthetics of the reverb.


Stefan
yes... but it's not recommended for the loop unless it's part of a 
feedback matrix

you get half the modes and basically a hollow tone that way
you can use negative values on the AP coefficients as well, which can 
sound quite different

- in reality every reflection is an inversion though



Re: [music-dsp] Reverb, magic numbers and random generators #2 solution?

2017-10-02 Thread gm

    D  2D
| 1 |  2    |
| | |  |  1 |
|_|_|__|__|_|_
   g___|  |
   {__|

   a__| |
   {|

So, why is g= ln(2) the best solution?

So far, we haven't scaled g, the ratio of the first "broken echo" to the 
initial echo, but there is no need to keep that fixed for all allpasses/ 
echo generators.

In fact I believe that scaling g, possibly with ~0.382,
will lead to families of optimal results for rooms.
I have no proof for this though, but again it's supported by data.

Replacing in the general formula

ratio a = 1 / (N+1-g)

with
ratio = 1/ (N+1-g^N)

If instead of g = ln(2) we use the simple original Go approach again, where 
g = 1/2, we set


ratio= 1/ (N+1 -1/2^(N)) or ratio= 1/ (N+1 -2^(-N))

(which expands with Laurent series as 
1/(N(1+ln(2)) + ... )

and I think it is somewhere along such lines, scaling g=1/2 with each N
on a basis 1/2^x or 2^-x where ln(2) comes into play

We should now set N, which so far defined both the number of echoes and the
number of the Nth echo generator, independently:

1/ (N+1 -(1/2)^M)

and set the ratio in respect to the ratio of the next echo generator

(N+2 -(1/2)^(M+1))/ (N+1 -(1/2)^M)

or more general

(N+2 -g^(M+1))/ (N+1 -g^M)

where N is the number of echoes and M is the number of the echo generator.
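
A quick numeric check of these expressions (my own, using the simplest 
pairing N = M; nothing here beyond evaluating the formulas above):

g = 0.5
for M in range(1, 5):
    N = M  # simplest choice: the Mth echo generator handles the Mth echo
    ratio = 1.0 / (N + 1 - g ** M)
    step = (N + 2 - g ** (M + 1)) / (N + 1 - g ** M)
    print(M, round(ratio, 5), round(step, 5))

# M = 1 gives ratio 2/3, the familiar Go value.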


I don't have the math skills to expand on this, and I would love to see 
someone do it.

Or see any other insights or discussion points.

Does anybody follow this?
Does any of this make sense to someone?


Re: [music-dsp] minBLEP: advantages/disadvantages of ripple-after-step

2017-12-03 Thread gm
In informal listening tests I found that there is a minuscule audible 
difference
between a linear phase and minimum phase transition in a sawtooth wave 
when using headphones.


The minimum phase transition sounded "sharper" or "harder" IIRC.

The difference was barely noticeable and possibly even just imagined.
But when you create a wavetable that morphs between random phases and 
minimum phase
alignment of the harmonics there is an audible difference during the 
minimum phase transition.


I didn't make listening tests with nonlinearities but I assume in 
practice the
difference doesn't matter much, since you usually have some lowpass on 
the sawtooth
and the linear phase and minimum phase versions are more or less the 
same in this case.



Am 03.12.2017 um 13:23 schrieb Stefan Westerfeld:

Hi!

I'm working on a minBLEP based oscillator. This means that my output signal
will contain all ripple needed for bandlimiting the output after each step.

Other methods of generating bandlimited signals, for instance interpolation in
an oversampled table will produce half of the ripple before and half of the
ripple after each step. In frequency domain, both are equally valid results,
that is, the frequencies contained in a minBLEP and those for other methods are
exactly the same, given similar filter design constraints.

However, I have a vague idea that this is not all there is to know about the
quality. For instance, what happens if I apply a non linear effect such as
distortion after producing my signal. It could be that one of the two ways of
introducing ripple in the oscillator would produce better results when combined
with such an effect.

I already know that the minBLEP provides a faster reaction for input from
outside, for instance resetting the phase will be quicker if all ripple is
introduced afterwards, but for my application that doesn't matter much. But if
the time signal of one of the two methods would produce better results when
using effects, this would matter to me. So are there advantages/disadvantages
for each method?

Cu... Stefan





Re: [music-dsp] Finding discontinuity in a sine wave.

2018-01-10 Thread gm

Isn't a clock drift indistinguishable from a drift in your input signal?


I'd use a feed-forward comb filter btw


Am 10.01.2018 um 18:47 schrieb Benny Alexandar:
This all works well in an ideal system. Suppose the sampling clock is 
drifting slowly over a period of time,
then the notch filter will fail to filter it. How to detect and 
correct these clock drifts and have a stable notch filter.


-ben


*From:* music-dsp-boun...@music.columbia.edu 
 on behalf of Ethan Fenn 


*Sent:* Wednesday, January 10, 2018 10:33 PM
*To:* music-dsp@music.columbia.edu
*Subject:* Re: [music-dsp] Finding discontinuity in a sine wave.
If the sine frequency is f and the sample rate is sr:

Let C = cos(2*pi*f/sr)

For each sample compute:

y(t) = x(t) - 2*C*x(t-1) + x(t-2)

y(t) should be 0 for every t... if not it indicates a discontinuity. 
This is just an FIR filter with a zero at the given frequency.


-Ethan
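
A minimal runnable sketch of the check Ethan describes (my own, not from 
the thread; the test frequency, the deliberate phase jump at sample 200 
and the tolerance are made-up example values):

import math

SR = 48000.0
F = 1000.0
C = math.cos(2 * math.pi * F / SR)

# test sine with a phase discontinuity inserted at sample 200
x = [math.sin(2 * math.pi * F / SR * t + (0.5 if t >= 200 else 0.0))
     for t in range(400)]

for t in range(2, len(x)):
    y = x[t] - 2 * C * x[t - 1] + x[t - 2]  # FIR with a zero at F
    if abs(y) > 1e-9:                       # exactly 0 for a clean sine
        print("discontinuity near sample", t)
        break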




On Wed, Jan 10, 2018 at 11:58 AM, STEFFAN DIEDRICHSEN 
<sdiedrich...@me.com> wrote:


With any phase discontinuity, a spectral discontinuity is
delivered for free. So, the notch filter will have an output, a
PLL would need to re-sync, etc.

Steffan



On 10.01.2018|KW2, at 17:51, Benny Alexandar
<ben.a...@outlook.com> wrote:

But if there is a phase discontinuity it will be hard to detect.





Re: [music-dsp] Clock drift and compensation

2018-01-27 Thread gm


I don't understand your project at all so not sure if this is helpful, 
probably not,
but you can calculate the drift or instantaneous frequency of a sine wave 
on a per sample basis

using a Hilbert transform
HT -> Atan2 -> differentiate -> unwrap



Re: [music-dsp] Clock drift and compensation

2018-01-28 Thread gm

diff gives you the phase step per sample,
basically the frequency.

However the phase will jump back to zero periodically when the phase 
exceeds 360°

(when it wraps around) in this case diff will get you a wrong result.

So you need to "unwrap" the phase or the phase difference, for example:


diff = phase_new - phase_old
if phase_old > Pi and phase_new < Pi then diff += 2Pi

or similar.
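
A runnable version of that chain as a sketch (mine, not from the original 
post; scipy's hilbert supplies the analytic signal, and the 440 Hz test 
tone is a made-up example):

import numpy as np
from scipy.signal import hilbert

SR = 8000.0
t = np.arange(2048) / SR
x = np.sin(2 * np.pi * 440.0 * t)

z = hilbert(x)                       # analytic signal: I + jQ
phase = np.arctan2(z.imag, z.real)   # wrapped phase, -pi..pi
diff = np.diff(phase)
diff[diff < -np.pi] += 2 * np.pi     # unwrap the wraparound jumps
diff[diff > np.pi] -= 2 * np.pi

f_inst = diff * SR / (2 * np.pi)     # instantaneous frequency in Hz
print(f_inst[100:105])               # ~440 away from the block edges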


Am 28.01.2018 um 17:19 schrieb Benny Alexandar:

Hi GM,

>> HT -> Atan2 -> differentiate -> unwrap
Could you please explain how to find the drift using HT,

HT -> gives real(I) & imaginary (Q) components of real signal
Atan2 -> the phase of an I Q signal
diff-> gives what ?
unwrap ?

-ben



*From:* music-dsp-boun...@music.columbia.edu 
 on behalf of gm 


*Sent:* Saturday, January 27, 2018 5:20 PM
*To:* music-dsp@music.columbia.edu
*Subject:* Re: [music-dsp] Clock drift and compensation

I don't understand your project at all so not sure if this is helpful,
probably not,
but you can calculate the drift or instantanous frequency of a sine wave
on a per sample basis
using a Hilbert transform
HT -> Atan2 -> differentiate -> unwrap

Re: [music-dsp] Elliptic filters coefficients

2018-02-03 Thread gm


If your goal is to isolate the lowest partial, why don't you use the 
measured frequency to steer a lowpass or lowpass/bandpass filter?


For my time domain estimator I use

4th order Lowpass, 2nd order BP -> HilbertTransform -> phase difference -> 
Frequency

 |cutoff|


This gives you an estimate (or rather measurement) per sample.

A further improvement is to calculate the sub-sample time between phase 
wraparounds,
this basically eliminates any spurious modulations from the phase within a 
cycle,

similar to an integrator.

You can have several measurements per cycle again by adding angle 
offsets to the phase and calculating the time between wraparounds of the 
new angles as well.


I use SVFs for the filters with Q set to a Butterworth response for the LPs
and Q=2 for the Bandpass.

I don't know if this method has less overhead than your method
because you need the Hilbert Transform, but the prefiltering is more 
efficient


Depending on your input sources you can try to exchange the HT with a 
single allpass with adaptive corner frequency



Am 03.02.2018 um 14:49 schrieb Dario Sanfilippo:

Thanks for the links, Steven!

Vadim, what is the title of your book? We may have it here at uni.

Hi, Robert. I'm working on some time-domain feature-extraction 
algorithms based on adaptive mechanisms. A couple of years ago, I 
implemented a spectral tendency estimator where the cutoff of a 
crossover (1p1z filters) is piloted by the RMS imbalance of the two 
spectra coming out of the same crossover. Essentially, a negative 
feedback loop for the imbalance pushes the cutoff towards the 
predominant spectrum until there's a "dynamical equilibrium" point 
which is the estimated tendency.


A recent extension to that algorithm was to add a lowpass filter 
within the loop, at the top of the chain, as shown in this diagram: 
https://www.dropbox.com/s/a1dtk0ri64acssc/lowest%20partial.jpg?dl=0. 
(Some parts necessary to avoid the algorithm from entering attractors 
have been omitted.)


If the same spectral imbalance also pilots the cutoff of the lowpass 
filter, we have a nested positive (the lowpass strengthens the 
imbalance which pushes the cutoff towards the same direction) and 
negative (the crossover's dynamical equilibrium point) feedback loop. 
So it is a recursive function which removes partials from top to 
bottom until there is nothing left to remove except the lowest partial 
in the spectrum.


The order and type of the lowpass (I've tried 1p1z ones, cascading up 
to four of them), I believe, is what determines the SNR in the system, 
so what the minimum amplitude of the bottom partial should be to be 
considered signal or not. Large transition bands in the lowpass will 
affect the result as the top partials which are not fully attenuated 
will affect the equilibrium point. Since elliptic filters have narrow 
transition bands at low orders, I thought that they could have given 
more accurate results, although the ripples in the passing band would 
also affect the SNR of the system.


Perhaps using Butterworth filters could be best as the flat passing 
band could make it easier to model a "threshold/sensitivity" 
parameter. With that regard, I should also have a look at fractional 
order filters. (I've quickly tried by linearly interpolating between 
filters of different orders but I doubt that that's the precise way to 
go.)


Of course, an FFT algorithm would perhaps be easier to model, though 
this time-domain one should be CPU-less-expensive, not limited to the 
bin resolution, and would provide a continuous estimation not limited 
to the FFT period.


I've tested the algorithm and it seems to have a convincing behaviour 
for most test signals, though it is not too accurate in some specific 
cases.


Any comment on how to possibly improve that is welcome.

Thanks,
Dario


Dario Sanfilippo - Research, Teaching and Performance
Reid School of Music, Edinburgh University
+447492094358
http://twitter.com/dariosanfilippo
http://dariosanfilippo.tumblr.com

On 3 February 2018 at 08:01, robert bristow-johnson
<r...@audioimagination.com> wrote:


i'm sorta curious as to what a musical application is for
elliptical filters that cannot be better done with butterworth or,
perhaps, type 2 tchebyshev filters?  the latter two are a bit
easier to derive closed-form solutions for the coefficients.

whatever.  (but i am curious.)

--

r b-j r...@audioimagination.com 

"Imagination is more important than knowledge."

 Original Message

Subject: Re: [music-dsp] Elliptic filters coefficients
From: "Dario Sanfilippo" mailto:sanfilippo.da...@gmail.com>>
Date: Fri, February 2, 2018 6:37 am
To: music-dsp@music.columbia.edu 
---

Re: [music-dsp] Elliptic filters coefficients

2018-02-04 Thread gm

I don't have a paper about this
and I don't see how you could get the SNR from it.

For frequency detection

(prefilter) -> HilbertTransform -> atan2  gives you the phase

differentiate and unwrap gives you the angular frequency:

diff = phase_new - phase_old
if phase_old > Pi and phase_new < Pi then diff += 2Pi

and F in Hz = diff * SR / (2*pi)

This estimate is not perfect and has some modulation since the filter is 
not perfect and so
the Hilbert Transform includes more than the lowest partial and atan2 is 
the sum of all  (I think).


A better estimate is to only measure the time between wraparounds of the 
phase:


The sub-sample time of the wraparound (the time that has passed when it's 
detected) is

(phase_value_after_wraparound / angular_frequency) * 1/SR

I basically start a counter with 1/SR at every phase wraparound,
subtract the sub-sample time, and take 1/T for F when the cycle is completed.
So this gives you an estimate once per cycle, which is more or less in synch
with the original waveform.

If you need more than one estimate per cycle you can add an offset to 
the phase,
and wrap it, and run counters in parallel, or use the angular frequency 
per sample

directly.
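
A sketch of that refinement as I read it (the phase here is synthesized 
for a 440 Hz tone so the example runs standalone; in practice it would 
come from the Hilbert stage):

import numpy as np

SR = 48000.0
omega = 2 * np.pi * 440.0 / SR                 # angular step per sample
phase = (np.arange(10000) * omega) % (2 * np.pi)

last_wrap = None
for n in range(1, len(phase)):
    if phase[n] < phase[n - 1]:                # wraparound detected
        frac = phase[n] / omega                # samples since the actual wrap
        t_wrap = (n - frac) / SR               # sub-sample wrap time, seconds
        if last_wrap is not None:
            print("F estimate:", 1.0 / (t_wrap - last_wrap), "Hz")
            break
        last_wrap = t_wrap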

I just realized I also have a static prefilter to precondition the 
signal and a minimum frequency for the dynamic filters

if I remember correctly this was to avoid that the system tends to
fall down to DC when there is no signal - your idea to add some noise 
and to change the Q

of the bandpass might also improve things




Am 04.02.2018 um 01:45 schrieb Dario Sanfilippo:

Hi, GM.

On 3 February 2018 at 18:39, gm <g...@voxangelica.net> wrote:






If your goal is to isolate the lowest partial, why don't you use
the measured frequency to steer a lowpass or lowpass/bandpass filter?

​I'm already piloting a lowpass in my current algorithm but I was 
thinking to use a bandpass too, also for another algorithm which 
detects the loudest partial. I haven't properly thought of that but a 
first idea was to have the Q of the BP inversely proportional to the 
derivative of the output of the integrator: that way, when the 
integrator is not changing we are around the detected partial and we 
can then select even further by increasing the Q. Though, I should 
also think of a way to "release" the Q when the input signal changes 
or the system might enter an attractor.


Currently, in the algorithm that I described earlier, there is a very 
tiny amount of energy added to the high spectrum of the crossover; 
tiny enough to be negligible for the measurement, but big enough to 
move the cutoff up when there is no energy in the crossover's input. 
So if I have a detected lowest at 1k plus a bunch of partials at 10k 
and higher, if I remover the 1k partial the system will move to 10k.



For my time domain estimator I use

4th order Lowpass, 2nd order BP -> HilbertTransform ->
phase difference -> Frequency
 
|cutoff|

​I'm not familiar with the technique which uses the HT and the phase 
difference to calculate the frequency. I'd be very grateful if you 
could say a few words about that.​ Can you also control the 
SNR/sensitivity in your system?



This gives you an estimate (or rather measurement) per sample.

A further improvement is to calculate the sub-sample time between
phase wraparounds,
this basically eliminates any spurious modulations from the phase
within a cycle,
similar to an integrator.

You can have several measurements per cycle again by adding angle
offsets to the phase and calculating the time between wraparounds
of the new angles as well.

I use SVFs for the filters with Q set to a Butterworth response
for the LPs
and Q=2 for the Bandpass.

I don't know if this method has less overhead than your method
because you need the Hilbert Transform, but the prefiltering is
more efficient

Depending on your input sources you can try to exchange the HT with
a single allpass with adaptive corner frequency

​
All great ideas, thanks a lot for sharing. Have you published any 
paper on this or other time-domain algorithms for feature-extraction? 
The reason why I implemented that algorithm is that it will probably 
be used in a time-domain noisiness estimator which I'm working on and 
that I will perhaps share here if it gets somewhere.


Cheers,
Dario


Am 03.02.2018 um 14:49 schrieb Dario Sanfilippo:

Thanks for the links, Steven!

Vadim, what is the title of your book? We may have it here at uni.

Hi, Robert. I'm working on some time-domain feature-extraction
algorithms based on adaptive mechanisms. A couple of years ago, I
implemented a spectral tendency estimator where the cutoff of a
crossover (1p1z filters) is piloted by the RMS imbalance of the
two spectra coming out of the s

Re: [music-dsp] Clock drift and compensation

2018-03-09 Thread gm
The problem I see is that your sine wave needs to have a precise 
amplitude for the arcsine.

I don't understand your application so I don't know if this is the case.


Am 09.03.2018 um 19:58 schrieb Benny Alexandar:

Hi GM,
Instead of finding Hilbert transform, I tried with just finding the 
angle between samples

of a fixed frequency sine wave.
I tried to create a sine wave of  frequency x[n] = sin ( 2 * pi * 1/4 
* n), and tried calculating the angle between samples,
it should be 90 degrees. This can also be used to detect any 
discontinuity in the signal.

Below is the octave code which I tried.

One cycle of sine wave consists of 4 samples, two +ve and two -ve.

% generate the sine wave of frequency 1/4
for i = 1 : 20
  x(i) = sin( 2 * pi * (1 / 4) * i );
end

% find the angle of each sample in degrees
% (asin only returns principal values, so this works for this special case)
for i = 1 : 20
  ang(i) = asin( x(i) ) * (180 / pi);
end

% find the absolute difference between adjacent angles;
% only up to i = 19, since ang(21) does not exist, and the name
% "angdiff" avoids shadowing the built-in function "diff"
for i = 1 : 19
  angdiff(i) = abs( ang(i + 1) - ang(i) );
end

% check for discontinuity, with a tolerance for rounding errors
for i = 1 : 19
  if ( abs( angdiff(i) - 90 ) > 1e-6 )
    disp("discontinuity")
  endif
end


Please verify this logic is correct for discontinuity check.

-ben




*From:* music-dsp-boun...@music.columbia.edu 
 on behalf of gm 


*Sent:* Monday, January 29, 2018 1:29 AM
*To:* music-dsp@music.columbia.edu
*Subject:* Re: [music-dsp] Clock drift and compensation

diff gives you the phase step per sample,
basically the frequency.

However the phase will jump back to zero periodically when the phase 
exceeds 360°

(when it wraps around) in this case diff will get you a wrong result.

So you need to "unwrap" the phase or the phase difference, for example:


diff = phase_new - phase_old
if phase_old > Pi and phase_new < Pi then diff += 2Pi

or similar.


Am 28.01.2018 um 17:19 schrieb Benny Alexandar:

Hi GM,

>> HT -> Atan2 -> differenciate -> unwrap
Could you please explain how to find the drift using HT,

HT -> gives real(I) & imaginary (Q) components of real signal
Atan2 -> the phase of an I Q signal
diff-> gives what ?
unwrap ?

-ben



*From:* music-dsp-boun...@music.columbia.edu 
<mailto:music-dsp-boun...@music.columbia.edu> 
 
<mailto:music-dsp-boun...@music.columbia.edu> on behalf of gm 
 <mailto:g...@voxangelica.net>

*Sent:* Saturday, January 27, 2018 5:20 PM
*To:* music-dsp@music.columbia.edu <mailto:music-dsp@music.columbia.edu>
*Subject:* Re: [music-dsp] Clock drift and compensation

I don't understand your project at all so not sure if this is helpful,
probably not,
but you can calculate the drift or instantanous frequency of a sine wave
on a per sample basis
using a Hilbert transform
HT -> Atan2 -> differenciate -> unwrap

Re: [music-dsp] Wavetable File Formats?

2018-03-14 Thread gm

Some years ago I tried to make a "stretched partials" sawtooth this way
and found that the tables get prohibitively large
since you are restricted to common divisors or integer multiples for the 
"spin cycles"

and phase steps of the partials.

The second lowest partial needs to make at least one spin cycle, and
all higher partials need to make an integer multiple of that
which also makes their detuning relationships unnatural compared to
for instance a piano tone.
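
A small numeric illustration of that restriction (my own sketch; the 
inharmonicity coefficient and loop length are made-up values). Over one 
full pass of the wavetable loop, each partial's detuning has to amount to 
a whole number of extra "spin cycles", so the stretched tuning gets 
quantized:

import math

F0 = 110.0          # fundamental in Hz
B = 0.0005          # example inharmonicity coefficient
LOOP_SECONDS = 0.5  # duration of one pass through the wavetable loop

for k in range(2, 8):
    f_ideal = k * F0 * math.sqrt(1 + B * k * k)  # stretched partial
    detune = f_ideal - k * F0                    # Hz above the harmonic
    spins = detune * LOOP_SECONDS                # extra cycles per loop pass
    spins_q = max(1, round(spins))               # forced to a whole number
    f_used = k * F0 + spins_q / LOOP_SECONDS
    print(k, round(f_ideal, 2), "->", round(f_used, 2),
          "spin cycles:", spins_q)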








Re: [music-dsp] Wavetable File Formats?

2018-03-14 Thread gm


Another disadvantage was that you get a noticeable chirp transient when 
the phases

realign after one complete cycle of the wavetable.
You don't have this in a piano since the phases never realign again 
after the initial strike

so you have the transient only at the onset of the note.


Am 14.03.2018 um 11:39 schrieb gm:

Some years ago I tried to make a "stretched partials" sawtooth this way
and found that the tables get prohibitively large
since you are restricted to common divisors or integer multiples for 
the "spin cycles"

and phase steps of the partials.

The second lowest partial needs to make at least one spin cycle, and
all higher partials need to make an integer multiple of that
which also makes their detuning relationships unnatural compared to
for instance a piano tone.










Re: [music-dsp] Wavetable File Formats?

2018-03-14 Thread gm



Am 14.03.2018 um 12:00 schrieb robert bristow-johnson:



> Some years ago I tried to make a "stretched partials" sawtooth this way
> and found that the tables get prohibitively large

the *number* of wavetables gets large, right?  is that what you mean?



yes, bad wording


it doesn't have anything to do with the size of the wavetable.

with gigabytes of memory and 64-bit addressing space, i am not sure 
what is "prohibitive".  a regular sampled note can easily take up 1/2 
meg.  how many 2048-point wavetables can you fit into that space? or 
in a meg?  or 4 meg?




true, but personally I am not a fan of synths that take up much memory 
and that was

one of the reasons it wasn't implemented in the final product


> which also makes their detuning relationships unnatural compared to
> for instance a piano tone.

more unnatural than flat?



That's a matter of taste in the end; at least I was a little bit 
disappointed with the results of my experiments




Re: [music-dsp] Wavetable File Formats?

2018-03-14 Thread gm

Good idea with the random phase

We did pseudo PWM with two identical arbitrary waves, one inverted, but 
not what you describe with random phase




Am 14.03.2018 um 13:06 schrieb Frank Sheeran:

> Another disadvantage was that you get a noticable chirp transient when
> the phases realign after one complete cycle of the wavetable.

Just put them in the buffer with random phases and they'll never 
re-align.  That's not what a piano does of course, but might be 
serviceable.


BTW, my synth does PW/PWM with two out-of-phase sawtooths, one 
negative.  When I make the harmonics of one of the sawtooths random, 
you get something that sounds like PWM except it never comes to a 
"peak", it just grinds away.


I haven't heard that on any other synth but would love to know if 
anyone knows prior art.




[music-dsp] parametric string synthesis

2018-03-14 Thread gm

I made a little demo for parametric string synthesis I am working on:

https://soundcloud.com/traumlos_kalt/parametric-strings-test/s-VeiPk

It's a morphing oscillator made from basic "virtual analog" oscillator 
components
(with oscillator synch) to mimic the bow & string "Helmholtz" waveform, 
fed into a simplified body filter.


The body is from a cello and morphed in size for viola, cello and bass 
timbres

(I know that's not accurate).
It's made from a very sparse stereo FIR filter (32 taps).
It doesn't sound like the real instrument body response, but the effect 
still sounds somewhat physical to me.


The idea is to replace the VA "Helmholtz" oscillator with a wavetable 
oscillator (with synch?)
which is controlled by parameterized playing styles, to be more flexible 
and more naturally behaving

than sample libraries.
And a better body filter.

The advantage over waveguide modeling with a bow model would be that you 
don't have to
play the bow with accurate pressure and velocity, and that it is more 
cpu friendly

and more flexible in regards to more artificial timbres and timbre morphing.

So far it's a private hobby project in Reaktor 5, but it maybe has some 
potential I believe.
Doesn't sound like samples yet but maybe it will when the model is 
improved...


At least it can provide an instrument with a hybrid sound between 
virtual analog and physical
which is something I love to use in my music. I used the body filter 
with synths quite often.


So far the "Helmholtz" waveform is made from assumptions like that that 
it behaves like
a synched oscillator depending on the ratio between the two sides of the 
string,

which might not be true.

Why I am posting this:
Maybe someone here plays an electric solid body violin or something 
similar and can provide
samples of bow & string waveforms with different playing styles and 
notes for analysis?

And has an interest to join efforts to create this instrument?
Or maybe someone even knows of a source for such waveforms?




Re: [music-dsp] parametric string synthesis

2018-03-14 Thread gm
There actually is some randomness and noise in the onset, as well as 
some subharmonic amplitude modulation.


The onset doesn't sound very convincing though, that's why I would like 
to use wavetables and pitch data from samples.


In the end I would like to be able to emphasize and exaggerate such effects 
that make a string sound sound natural...



Am 14.03.2018 um 17:07 schrieb Esteban Maestre:

Nice demos!

In

http://ieeexplore.ieee.org/document/7849104/

we point to a multi-modal string quartet (audio, contact 
mics, mocap, video, etc.)
dataset we recorded some time ago. I believe it's also listed in the 
MTG-UPF website.


As for your excitation signal, perhaps some temporary "chaos" in your
oscillator synchronization method might help with the attacks.

Cheers,

Esteban



On 3/14/2018 1:45 PM, gm wrote:

I made a little demo for parametric string synthesis I am working on:

https://soundcloud.com/traumlos_kalt/parametric-strings-test/s-VeiPk

It's a morphing oscillator made from basic "virtual analog" 
oscillator components
(with oscillator synch) to mimic the bow & string "Helmholtz" 
waveform, fed into a simplified body filter.


The body is from a cello and morphed in size for viola, cello and 
bass timbres

(I know that's not accurate).
It's made from a very sparse stereo FIR filter (32 taps).
It doesn't sound like the real instrument body response, but the 
effect still sounds somewhat physical to me.


The idea is to replace the VA "Helmholtz" oscillator with a wavetable 
oscillator (with synch?)
which is controlled by parameterized playing styles, to be more 
flexible and more naturally behaving

than sample libraries.
And a better body filter.

The advantage over waveguide modeling with a bow model would be that 
you don't have to
play the bow with accurate pressure and velocity, and that it is more 
cpu friendly
and more flexible in regards to more artificial timbres and timbre 
morphing.


So far it's a private hobby project in Reaktor 5, but it maybe has 
some potential I believe.
Doesn't sound like samples yet but maybe it will when the model is 
improved...


At least it can provide an instrument with a hybrid sound between 
virtual analog and physical
which is something I love to use in my music. I used the body filter 
with synths quite often.


So far the "Helmholtz" waveform is made from assumptions like that 
that it behaves like
a synched oscillator depending on the ratio between the two sides of 
the string,

which might not be true.

Why I am posting this:
Maybe someone here plays an electric solid body violin or something 
similar and can provide
samples of bow & string waveforms with different playing styles and 
notes for analysis?

And has an interest to join efforts to create this instrument?
Or maybe someone even knows of a source for such waveforms?




[music-dsp] bandsplitting strategies (frequencies) ?

2018-03-23 Thread gm

What are good frequencies for band splits? (2-5 bands)

What I am doing is dividing the range between 100 Hz and 5-10 kHz
into equal bands on a log scale (log2 or pitch).

Are there better strategies?
Or better min/max frequencies?
How is it usually done?




Re: [music-dsp] bandsplitting strategies (frequencies) ?

2018-03-23 Thread gm


The purpose is multiband compression and distortion.

So I only have a few bands, 2 to 5.

I use ERB scale in my vocoder, which worked slightly better than Bark 
scale for me (it seems better defined at the low range)


I was wondering if I should use it here too or if it's better on a log2 
scale.


Also I can't decide what upper and lower frequency I should use when I 
divide evenly on a log scale.


I chose 100 Hz because that's the lowest Bark band I think.


Am 23.03.2018 um 14:39 schrieb Matt Jackson:

Gabriel,

I think it depends on what you are trying to do. What’s your context?

For example a Vocoder (for voice) might have a different distribution of bands 
(bark scale) than a multipurpose graphic EQ (even octaves).
One strange example I know of is the Serge resonant EQ (not crossovers but 
fixed frequency resonant peaks) has deliberately picked frequencies that, 
“except for the top and bottom frequency bands, the bands are spaced at an 
interval of a major seventh. The Resonant Equalizer is designed to produce 
formant peaks and valleys similar to those in acoustic instruments.”

Matt


On 23. Mar 2018, at 13:05, robert bristow-johnson  
wrote:

On 3/23/18 12:01 AM, gm wrote:

What are good frequencies for band splits? (2-5 bands)

What I am doing is divide the range between 100 Hz 5-10 kHz
into equal bands on a log scale (log2 or pitch).

Are there better strategies?
Or better min/max frequencies?
How is it usually done?

conventionally, a graphic EQ might be split into bands with log center 
frequencies every octave, for a 10 band, or every 1/3 octave for a 31 band EQ.

i think the 10-octave frequencies might be at

25, 50, 100, 200, 400, 800, 1600, 3200, 6400, 12800 Hz

with the bandedges at the geometric mean of adjacent pair of frequencies

but they might put them conventionally at

20, 50, 100, 200, 500, 1000, 2000, 5000, 10000, 20000 Hz

you can see there's a bigger-than-octave gap between 200 and 500.

maybe the 31-band 1/3 octave frequencies might conventionally be at

20, 25, 32, 40, 50, 63, 80, 100, 125, 160, 200, 250, 320, 400, 500, 630, 800, 
1000, 1250, 1600, 2000, 2500, 3200, 4000, 5000, 6300, 8000, 10000, 12500, 
16000, 20000 Hz

those are conventional frequencies. not all spacing are exactly 1/3 octave.  
you can see that 630 is a compromise between twice 320 and half of 1250.  you 
might want your bands split precisely in 1/3 octaves spaced apart by a 
frequency ratio of 2^(1/3) which is about 1.26.  that might have bands labeled:

20, 25, 32, 40, 50, 63, 80, 100, 126, 159, 200, 252, 318, 400, 504, 635, 800, 
1007, 1271, 1600, 2014, 2542, 3200, 4028, 5084, 6400, 8056, 10168, 12800, 
16112, 20336 Hz


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




Re: [music-dsp] bandsplitting strategies (frequencies) ?

2018-03-23 Thread gm
For equally spaced bands you could do it with 2 parameters, one to shift 
the middle or base frequency

and one spread or fan parameter that spreads or narrows the bands.

The reason I don't want this is that I don't want too many parameters
and the user doesn't know how to set the bands either, especially since
the difference is probably not obvious, sonically.

But it's an option I am considering.



Am 23.03.2018 um 16:50 schrieb Matt Jackson:

If it’s a distortion or compression and only 2-4 bands, a user set crossover(s) 
would usually be desirable.
The Ableton Multi-band Dynamics, Waves C4, Ohm Force Ohmacide, Izotope plugins, 
Surreal Machines Transient Machines all come to mind.
It probably depends on the complexity you are looking for but some presets for 
“voice”, "full mix”, “drums” etc. usually go a long way.


On 23. Mar 2018, at 15:05, gm  wrote:


The purpose is multiband compression and distortion.

So I only have a few bands, 2 to 5.

I use ERB scale in my vocoder, which worked slightly better than Bark scale for 
me (it seems better defined at the low range)

I was wondering if I should use it here too or if it's better on a log2 scale.

Also I cant decide what upper and lower frequency I should use when I divide 
evenly on a log scale.

I chose 100 Hz cause thats the lowest Bark band I think.


Am 23.03.2018 um 14:39 schrieb Matt Jackson:

Gabriel,

I think it depends on what you are trying to do. What’s your context?

For example a Vocoder (for voice) might have a different distribution of bands 
(bark scale) than a multipurpose graphic EQ (even octaves).
One strange example I know of is the Serge resonant EQ (not crossovers but 
fixed frequency resonant peaks) has deliberately picked frequencies that, 
“except for the top and bottom frequency bands, the bands are spaced at an 
interval of a major seventh. The Resonant Equalizer is designed to produce 
formant peaks and valleys similar to those in acoustic instruments.”

Matt


On 23. Mar 2018, at 13:05, robert bristow-johnson  
wrote:

On 3/23/18 12:01 AM, gm wrote:

What are good frequencies for band splits? (2-5 bands)

What I am doing is divide the range between 100 Hz 5-10 kHz
into equal bands on a log scale (log2 or pitch).

Are there better strategies?
Or better min/max frequencies?
How is it usually done?

conventionally, a graphic EQ might be split into bands with log center 
frequencies every octave, for a 10 band, or every 1/3 octave for a 31 band EQ.

i think the 10-octave frequencies might be at

25, 50, 100, 200, 400, 800, 1600, 3200, 6400, 12800 Hz

with the bandedges at the geometric mean of adjacent pair of frequencies

but they might put them conventionally at

20, 50, 100, 200, 500, 1000, 2000, 5000, 10000, 20000 Hz

you can see there's a bigger-than-octave gap between 200 and 500.

maybe the 31-band 1/3 octave frequencies might conventionally be at

20, 25, 32, 40, 50, 63, 80, 100, 125, 160, 200, 250, 320, 400, 500, 630, 800, 
1000, 1250, 1600, 2000, 2500, 3200, 4000, 5000, 6300, 8000, 10000, 12500, 
16000, 20000 Hz

those are conventional frequencies. not all spacing are exactly 1/3 octave.  
you can see that 630 is a compromise between twice 320 and half of 1250.  you 
might want your bands split precisely in 1/3 octaves spaced apart by a 
frequency ratio of 2^(1/3) which is about 1.26.  that might have bands labeled:

20, 25, 32, 40, 50, 63, 80, 100, 126, 159, 200, 252, 318, 400, 504, 635, 800, 
1007, 1271, 1600, 2014, 2542, 3200, 4028, 5084, 6400, 8056, 10168, 12800, 
16112, 20336 Hz


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




Re: [music-dsp] bandsplitting strategies (frequencies) ?

2018-03-27 Thread gm


i keep dividing into equal bands on a log2 scale,

I believe that's equal energy on a -6dB/octave spectrum and gives figures 
very close


to what David Reaves suggested the other day for 4 bands when you set 
6300 Hz as the upper limit


and 150 Hz corner frequency for the bass band (or 45 Hz for the lower limit)
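
For reference, the split itself is a few lines (my own sketch, with the 
corner values from this thread):

def crossover_freqs(f_lo, f_hi, n_bands):
    # the n_bands - 1 interior crossovers, evenly spaced on a log scale
    ratio = (f_hi / f_lo) ** (1.0 / n_bands)
    return [f_lo * ratio ** i for i in range(1, n_bands)]

print([round(f, 1) for f in crossover_freqs(150.0, 6300.0, 4)])
# -> [381.9, 972.2, 2475.0] for 4 bands between 150 Hz and 6300 Hz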



Am 27.03.2018 um 11:36 schrieb Theo Verelst:

gm wrote:

What are good frequencies for band splits? (2-5 bands)


For standard mastering applications there are norms for binaural and 
Equal Loudness Curve related reasons. The well known PC software 
probably doesn't get there but it may be you want to tune those 
frequencies based on the following criteria:


  - type of filter (FIR/IIR/FFT, resonant or not, congruent with standard
    linear (analogue) filter constructions or not) and the associated
    impulse response length
  - the properties of the filter impulse and combinations during standard
    signal reconstruction (at the DAC) or up/down sampling
  - energy distribution for white/pink noise or standard signals for your
    specific application
  - the function of the application, in terms of being somewhere on the
    line from High Fidelity slight clipping prevention, over radio
    mastering with significant compression, to a wild tool used as a very
    significant signal alteration tool


Theo V.



Re: [music-dsp] bandsplitting strategies (frequencies) ?

2018-03-27 Thread gm


This actually explains a few misconceptions I had in the past..
Both slopes are filed under "natural spectrum" in my mind.


Am 27.03.2018 um 19:16 schrieb robert bristow-johnson:


> I believe that's equal energy on a -6dB/octave spectrum and gives figures
> very close

no, that's -3 dB/oct.

pink noise is equal energy per octave and is -3 dB drop every octave.







Re: [music-dsp] bandsplitting strategies (frequencies) ?

2018-03-27 Thread gm



Am 27.03.2018 um 19:29 schrieb David Reaves:

If what you do involves material with an unusual spectral balance, and/or if 
you use aggressive filter roll offs and/or you use something other than RMS 
detection, then my assumptions may not be useful.



That is understood.
There are not many assumptions I can make, so I think pink noise is the 
best match.





[music-dsp] (Novel?) "Modal Phase Rotation Synthesis"

2018-04-02 Thread gm


I don't know if this idea is new, I had it for some time but have never 
seen it mentioned anywhere:


Use a filter with high Q and rotate its (complex) output by the (real) 
output

of another filter to obtain a phase modulated sine wave.
Excite with an impulse or impact signal.

It's basically crossed between modal and phase modulation synthesis.

Now there are some ideas to this to make it practical and a useful 
substitute for phase modulation and FM:


You can use a state variable filter with an additional allpass instead of
the complex filter to obtain a filter you can pitch modulate in audio
(useful for drum synthesis etc.) (or maybe the 90° shift can be designed 
more efficiently
into the SVF, IDK.)

Instead of expensive trig calculations for the rotation, or using
the normalized complex signal from the other filter (also expensive)
just use a very coarse parabolic sine/cosine approximation and the real 
signal,
the difference is really very small sonically, since the modulator is 
still sine
and the radius stays around 1 so it's the effect of a small amplitude 
modulation on the modulator

caused by the slight deviation of the circle.
I couldn't tell the difference when I tested it first.

You need 7 mults and 4 adds in addition to the SVF for the AP and 
rotation per carrier.


But you save an envelope for each operator and have a pretty efficient 
sine operator with the SVF.
And you get all the benefits of phase modulation over frequency 
modulation of the

filter cutoff.
It's very useful for drum synthesis but also useful for some other 
percussive sounds like "FM" pianos etc.


Here is an audio demo, with cheap "soundboard" and some other fx added:
https://soundcloud.com/traumlos_kalt/smoke-piano-test-1-01/s-W54wz

I wonder if this idea is new?




Re: [music-dsp] (Novel?) "Modal Phase Rotation Synthesis"

2018-04-03 Thread gm


Yes it's related, I don't recall if I used one of these filters
in my first implementation which was several years ago.
I used a complex filter before I used the SVF and AP.

But I think you can't do full phase modulation with such filters?
I think that was my motivation to apply the rotation outside of the filter.

Either way it seems lighter on cpu when you use the external rotation with
parabolas instead of trig operations since you don't have to constantly
adapt the internal state of the filter.

A drawback of the method in general with either filter is that
you can cancel the internal state with an impulse.

I haven't figured out what the best excitation signal is.

The paper you linked suggests to delay the impulse until a zero crossing
but that is not an option in my use cases.


Am 03.04.2018 um 01:46 schrieb Corey K:
Your idea seems to bear a few similarities to this (just in case you 
haven't seen it already):
https://ccrma.stanford.edu/~jos/smac03maxjos/ 




On Mon, Apr 2, 2018 at 2:46 PM, gm <g...@voxangelica.net> wrote:



I don't know if this idea is new, I had it for some time but have
never seen it mentioned anywhere:

Use a filter with high q and rotate it's (complex) output by the
(real) output
of another filter to obtain a phase modulated sine wave.
Excite with an impulse or impact signal.

It's basically crossed between modal and phase modulation synthesis.

Now there are some ideas to this to make it practical and a useful
substitute for phase modulation and FM:

You can use a state variable filter with an additional allpass
instead of
the complex filter to obtain a filter you can pitch modulate in audio
(useful for drum synthesis ect) (or maybe the 90 shift can be
designed more efficiently
into the SVF IDK.)

Instead of expensive trig calculations for the rotation, or using
the normalized complex signal form the other filter (also expensive)
just use a very coarse parabolic sine/cosine approximation and the
real signal,
the difference is really very small sonically, since the modulator
is still sine
and the radius stays around 1 so it's the effect of a small
amplitude modulation on the modulator
caused by the slight deviation of the circle.
I couldnt tell the difference when I tested it first.

You need 7 mults and 4 adds in addition to the SVF for the AP and
rotation per carrier.

But you save an envelope for each operator and have a pretty
efficient sine operator with the SVF.
And you get all the benfits of phase modulation over frequency
modulation of the
filter cutoff.
It's very useful for drum synthesis but also useful for some other
percussive sounds like "FM" pianos etc.

Here is an audio demo, with cheap "soundboard" and some other fx
added:
https://soundcloud.com/traumlos_kalt/smoke-piano-test-1-01/s-W54wz
<https://soundcloud.com/traumlos_kalt/smoke-piano-test-1-01/s-W54wz>

I wonder if this idea is new?


Re: [music-dsp] (Novel?) "Modal Phase Rotation Synthesis"

2018-04-03 Thread gm


After looking at it I think probably you can, but you need trig 
calculations every sample
when you change the frequency and quite some additional calculations for 
the WGR every sample

in this case.
So it's cheaper to use a standard oscillator with a sine approximation for 
phase mod. in both cases.


The MCF seems lighter on CPU than what I do if you insist that the rotation
must be on a perfect circle instead of the parabolic shape,
but I think when used as an oscillator it has issues
with frequency accuracy or amplitude rescaling or something similar?

And it appears not to rotate on a perfect circle internally either
but just from looking at the paper I can't tell if and how that matters.

I remember years ago I investigated both for use as an undamped 
oscillator and
came to the conclusion that a fast sine approximation is superior for 
phase modulation.

But I don't recall the details.

The sine approximation I use only needs 4 multiplies so I am not sure
if I am on the right path using filters.

There seems to be an advantage with voice stealing though, the click
you get is masked and blurred by the filter's response


Am 03.04.2018 um 14:37 schrieb Corey K:
Yes, I think you can do phase modulation with those filters. They are 
referred to colloquially as "phasor filters", because their phase is 
manipulated in order to rotate a vector around the complex plane...


On Tue, Apr 3, 2018 at 8:16 AM, gm <g...@voxangelica.net> wrote:



Yes it's related; I don't recall if I used one of these filters
in my first implementation, which was several years ago.
I used a complex filter before I used the SVF and AP.

But I think you can't do full phase modulation with such filters?
I think that was my motivation to apply the rotation outside of
the filter.

Either way it seems lighter on CPU when you use the external
rotation with parabolas instead of trig operations, since you don't have
to constantly adapt the internal state of the filter.

A drawback of the method in general, with either filter, is that
you can cancel the internal state with an impulse.

I haven't figured out what the best excitation signal is.

The paper you linked suggests delaying the impulse until a zero
crossing, but that is not an option in my use cases.


Am 03.04.2018 um 01:46 schrieb Corey K:

Your idea seems to bear a few similarities to this (just in case
you haven't seen it already):
https://ccrma.stanford.edu/~jos/smac03maxjos/



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Real-time pitch shifting?

2018-05-19 Thread gm



Am 19.05.2018 um 20:19 schrieb Nigel Redmon:
Again, my knowledge of Melodyne is limited (to seeing a demo years 
ago), but I assume it’s based on somewhat similar techniques to those 
taught by Xavier Serra (https://youtu.be/M4GRBJJMecY)—anyone know for 
sure?


I always thought the separation of notes was based on the cepstrum?
My idea is that a harmonic tone, comb-like in the spectrum, shows up as a peak in
the cepstrum. (Doesn't it?)

Probably you can also track pitch by following a peak in the cepstrum.
Not sure if this makes sense?
I never tried Melodyne in person so I am not sure what it is capable of.
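For what it's worth, the cepstrum-peak idea is easy to try; here is a small numpy sketch (the window size and search range are arbitrary choices for illustration, not anything Melodyne is known to do):

    import numpy as np

    def cepstral_pitch(frame, sr, fmin=60.0, fmax=500.0):
        # estimate pitch by locating the peak of the real cepstrum
        windowed = frame * np.hanning(len(frame))
        log_mag = np.log(np.abs(np.fft.rfft(windowed)) + 1e-12)
        cepstrum = np.fft.irfft(log_mag)
        # a comb-like (harmonic) spectrum gives a peak at the period in samples
        qmin, qmax = int(sr / fmax), int(sr / fmin)
        peak = qmin + np.argmax(cepstrum[qmin:qmax])
        return sr / peak

    sr = 44100
    t = np.arange(2048) / sr
    frame = sum(np.sin(2 * np.pi * 110 * k * t) / k for k in range(1, 9))
    print(cepstral_pitch(frame, sr))   # ~110 Hz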



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Blend two audio

2018-06-18 Thread gm




Am 18.06.2018 um 16:46 schrieb Sound of L.A. Music and Audio:
Signal power is not equivalent to audio power, and this again is not
the same as experienced loudness, and this again is not the same as
musical loudness impression in the context of a track. These are 4
"different shoes", as we say in Germany.

We actually say "pairs of shoes".

I find that in practice a cosine/sine fade works very well for 
uncorrelated signals.
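In case a concrete form helps: since cos²θ + sin²θ = 1, the cosine/sine gains keep the summed power constant, which is why this fade suits uncorrelated signals, whose powers add. A minimal sketch:

    import numpy as np

    def equal_power_blend(a, b, pos):
        # blend two equal-length signals; pos in [0, 1], 0 -> all a, 1 -> all b.
        # cos/sin gains keep the summed power constant for uncorrelated inputs.
        theta = 0.5 * np.pi * pos
        return np.cos(theta) * a + np.sin(theta) * b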


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Blend two audio

2018-06-18 Thread gm



Am 19.06.2018 um 02:52 schrieb robert bristow-johnson:
 Olli Niemitalo had some ideas in that thread.  dunno if there is a 
music-dsp archive anymore or not.


This thread?
https://music.columbia.edu/pipermail/music-dsp/2011-July/thread.html#69971

old list archives are here
https://music.columbia.edu/pipermail/music-dsp/
and new archives are here
https://lists.columbia.edu/pipermail/music-dsp/
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] wavetable filtering

2018-06-29 Thread gm

You could use an FFT, where you can also make the waves symmetric,
which prevents phase cancellations when you blend waves. A sketch follows below.
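One way to read this suggestion, as a numpy sketch (forcing every harmonic to a common sine phase is an assumption; any fixed common phase per harmonic would do):

    import numpy as np

    def phase_align(wave):
        # resynthesize a single-cycle wave with the same harmonic magnitudes
        # but one common (sine) phase for every harmonic, so blending two
        # aligned tables can only interpolate magnitudes, never cancel
        spec = np.fft.rfft(wave)
        aligned = -1j * np.abs(spec)   # every harmonic becomes a pure sine
        aligned[0] = 0.0               # drop DC
        aligned[-1] = 0.0              # drop the Nyquist bin for cleanliness
        return np.fft.irfft(aligned, len(wave))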


Am 29.06.2018 um 16:19 schrieb alexandre niger:


Hello everyone,

I just joined the list in order to find help in making a wavetable
synth. This synth would do both morphing and frequency wavetables.
Morphing is a way to play different waveforms over time and so to get
an additional sound dimension. Frequency wavetables are used to avoid
aliasing by filtering harmonics the higher the frequency goes. I started
with the frequency wavetables and then I will do the morphing between
different waveforms.

As an intern at Rebel Technology, I started making wavetable patches
from earlevel articles. In those patches, common waveforms are
generated inside the code (tri, square, saw). Now I want to generate
some more complex waveforms from an editor called WaveEdit (free).
They come as 256-sample single cycle .wav files. Then I change them
into static data in a header file. Once I have this, I can start with
frequency wavetables. The key point of frequency wavetables is the
filtering. I have to filter enough harmonics so the aliased
frequencies do not come back under 18 kHz (audible range). But I must
not filter too much if I don't want to get gain loss.

At the moment I managed to make a 3550 tap FIR to filter every octave's
wavetables. Unfortunately, with complex harmonic-rich waveforms, I
still have audible aliasing from 2 kHz and gain/amplitude differences
when wavetables cross.

So now I am wondering:
About aliasing, should I cascade two FIRs instead of increasing the taps?
That could be a solution if the stop band is not attenuated enough.
According to Octave's fir1, stop band attenuation is 50 dB to 70 dB.
About gain loss, will a "harmonic rich signal" always sound lower when
filtered even if the gain is the same?

I haven't normalized the wavetables yet. I might have my answer afterwards.

You can have a look at the patch code but you won't be able to try it
if you don't get Rebel Technology Owl files.

https://github.com/alexniger/OLWpatches-RebelTech/blob/master/WTExtWavePatch.hpp
All other links are in the readme.

Best,

Alex




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] wavetable filtering

2018-07-01 Thread gm



7th octave, but 127th harmonic:

harmonics are not octaves but multiples of the fundamental. A 256-sample
single cycle holds harmonics up to the Nyquist limit at bin 128, so
multiples 1 through 127 of the fundamental are representable.


Am 01.07.2018 um 14:00 schrieb Martin Klang:


I'm surprised it only outputs 256 sample waveforms. Does that not mean 
that you can only go up to the 7th harmonic?




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] WSOLA on RealTime

2018-09-27 Thread gm


I had a different solution, where the lag is reset to zero during a
musical period.

Kind of a tape speed-up effect without the pitch change.

Not always useful though.


Am 26.09.2018 um 23:25 schrieb Jacob Penn:

Ahh yeah I gotcha,

Yes, in the case of slow down, there is a finite amount you're able to slow down
based on the size of the circular buffer of input data in use.

In my personal applications I offer users the ability to restart the
stretch from the write head at a musical value. Conveniently the
slowest rate for this control will stop the overflow ; )


Can sound quite nice!

Best,

insignia 
JACOB PENN.MUMUKSHU
612.388.5992

On September 26, 2018 at 2:21:29 PM, robert bristow-johnson 
(r...@audioimagination.com ) wrote:





 Original Message 


Subject: Re: [music-dsp] WSOLA on RealTime
From: "Jacob Penn" mailto:penn.ja...@gmail.com>>
Date: Wed, September 26, 2018 5:00 pm
To: r...@audioimagination.com 
music-dsp@music.columbia.edu 
--

> You can indeed do it on real time audio but the trick is, like the previous
> email, you're limited when pitching things up, as you'd
> be lacking the necessary information to move faster across the buffer from
> the write head position.
>
> You'd want to slow the signal down, and not speed it
> up.

no, even if you slow it down, any finite-sized buffer will eventually
overflow. i presume you mean time-scaling (not pitch shifting) using
WSOLA.

by "real-time", i mean live samples going in and (assuming no sample
rate conversion) the same number of samples going out in a given
period of time. with an upper bound of delay (and the lower bound is
imposed by causality) the process can run indefinitely. so if
you're slowing down audio in real-time and you're running this
process for a day. or for a year.



--

r b-j r...@audioimagination.com 

"Imagination is more important than knowledge."

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu 
https://lists.columbia.edu/mailman/listinfo/music-dsp



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] WSOLA on RealTime

2018-09-27 Thread gm


when you slow the resynth signal down (time stretch)
you will eventually run out of buffer memory.

In this case I reduce the lag of the signal (the time stretch) to 0
within a certain time, which is determined by the bpm of the host and
a given time measured in beats, for instance 1 quarter or one 1/16.

This gives an inverse time stretch (time compression), which is a similar
effect to a tape speeding up, but with a fixed pitch.

This is an implementation detail which is not relevant to your basic
question, and it's probably not the best solution either.

Your question was: do you need resampling, and my answer to that is
you do *not* need *proper* resampling with filtering etc., you just play
your grains at different speeds like in a sampler, so you need interpolation
between samples (see the sketch below). HTH

I found a very efficient implementation for WSOLA, that is,
for the similarity problem, which works without correlation in my case,
and is very well suited for real time implementation,
but unfortunately I can not discuss it in detail at the moment.
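A minimal sketch of that "sampler style" grain playback, with plain linear interpolation between samples (no proper band-limited resampling):

    import numpy as np

    def play_grain(buffer, start, length, rate):
        # read `length` output samples from `buffer` beginning at `start`,
        # resampled by `rate` with plain linear interpolation
        pos = start + rate * np.arange(length)
        i = np.clip(pos.astype(int), 0, len(buffer) - 2)
        frac = pos - pos.astype(int)
        return (1.0 - frac) * buffer[i] + frac * buffer[i + 1]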







Am 27.09.2018 um 15:58 schrieb alex dashevski:


Hi,

I don't understand what do you mean. Could you explain ?




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Resampling

2018-10-06 Thread gm

Your numbers don't make sense to me, but probably I just don't understand it.

The latency should be independent of the sample rate, right?

You search for similarity in the wave, chop it up, and replay the grains
at different speeds and/or rates.

What you need for this is a certain amount of time of the wave.

If you need a latency of <= 100 ms you can have two wave cycles stored
of 50 ms length / 20 Hz, which should be sufficient, given that voice is
usually well above 20 Hz.



Am 06.10.2018 um 13:45 schrieb Alex Dashevski:
I have a project with pitch shifting (resampling with WSOLA). It is
implemented with the Android NDK.
Since the duration of a pitch period is ~20 ms, I can't use the system's
recommended parameters for the fast path. For example, for my device:
sample rate 48 kHz and buffer size 240 samples. That means the buffer
duration is 5 ms (< pitch duration = 20 ms).
What can I do so I can use the recommended parameters, because this increases
latency? For example, if I use 48 kHz and 240 samples then latency is 66
ms, but if the buffer size is 24000 samples then latency is 300 ms.

I need latency < 100 ms.

Thanks,
Alex

You've got it backwards -- downsample means fewer samples. If you
have a 240-sample buffer at 48kHz, then resample to 8kHz, you'll
have 240/6=40 samples.

-Ethan


On Sat, Oct 6, 2018 at 4:10 AM, Alex Dashevski <alexd...@gmail.com> wrote:

Hi,
Let's assume that my system has sample rate = 48 kHz and audio
buffer size = 240 samples. It should be in real time.
Can I do that:

1. Downsample to 8 kHz and the buffer size should be 240*6
2. Do the processing on a buffer of 240*6 with 8 kHz sample rate.
3. Upsample to 48 kHz with the original buffer size.

Thanks,
Alex


I have only used libraries for resampling myself. I
haven't looked at their source, but it's available. The
two libraries I'm aware of are at
http://www.mega-nerd.com/SRC/download.html
and

https://kokkinizita.linuxaudio.org/linuxaudio/zita-resampler/resampler.html

perhaps they can give you some insight.

On Wed, Oct 3, 2018 at 2:46 PM Alex Dashevski
mailto:alexd...@gmail.com>> wrote:

I wrote on android ndk and there is fastpath concept.
Thus, I think that resampling can help me.
Can you recommend me code example ?
Can you give me an example of resampling ? for example
from 48Khz to 8Khz and 8Khz to 48Khz.
I found this:
https://dspguru.com/dsp/faqs/multirate/resampling/
but it is not enough clear for me,

Thanks,
Alex




On Wed, Oct 3, 2018 at 3:17 AM Alex Dashevski
mailto:alexd...@gmail.com>>
wrote:


if I do resampling before and after
processing. for example, 48Khz -> 8Khz and
then 8Khz -> 48Khz then will it help ?


Lowering sample rate can help achieve lower
latencies by giving you fewer samples to process
in the same amount of time but just downsampling
and then upsampling back doesn't really have any
effect.


I don't understand why I need filter, This is
to prevent alias but I can't understand why ?

Technically you only need a filter if your signal
has information above the nyquist frequency of the
lowest rate but this is not usually the case. I
think wikipedia explains aliasing pretty well:

https://en.wikipedia.org/wiki/Aliasing#Sampling_sinusoidal_functions
. Once the high frequency information aliases it
cannot be recovered by resampling back to the
higher rate and your lower band information is now
mixed in with the aliased information. The filter
removes this high freqency data so that the low
band stays clean through the whole process.


Is there option to decrease latency or delay ?


The only way to reduce latency in your algorithm
(unless there is some error in the implementation)
is to reduce the block size, so you process 128
samples rather than 240. 240 isn't a very large
amount of latency for a pitch shifter which is
typically a CPU intensive process and therefore
most implementations have relatively high latencies.


Re: [music-dsp] Resampling

2018-10-06 Thread gm


In my example, the buffer is 2 times as long as the lowest possible pitch period;
for example, if your lowest pitch is 20 Hz, you need 50 ms for one wave cycle.

Think of it as magnetic tape, without sample rate: the minimum required
latency and the buffer length in milliseconds are independent of sample rate.
You have 100 ms of "magnetic tape", search for similarity, and then chop
the tape according to that.

Then you have snippets of 50 ms length or smaller.
Then you copy these snippets and piece them together again, at a higher
or slower rate than before.
You can also shrink or lengthen the snippets and change the formants, that is,
shift all spectral content of one snippet up or down.




Am 06.10.2018 um 17:58 schrieb Alex Dashevski:

Hi,

I can't understand your answer. The duration of the buffer should be
bigger than the duration of a pitch period because I use WSOLA.

The latency also depends on sample rate and buffer length.

Thanks,
Alex





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Resampling

2018-10-06 Thread gm


no, you don't change the buffer size, you just change the playback rate
(and speed, if you want) of your grains.

For instance, let's say the pitch is 20 Hz, or 50 ms time for one cycle.

You want to change that to 100 Hz.

Then you take 50 ms of audio, and replay this 5 times every 10 ms (with
or without overlaps, but at the same speed as the original to maintain
the formants).

Then you take the next 50 ms, and do that again (a sketch of this
overlap-add step follows below).

For this, you need a buffer size of 50 ms or more.

But to compare two different wave cycles of 50 ms length to find
similarity, you need a buffer size of 100 ms.

That is your latency required, for 20 Hz.

That is all independent of sample rate, but of course your buffer size
in samples will be larger for a higher sample rate and smaller for a
lower sample rate. But the latency required will be the same.

Also, if you do correlation you need fewer values to calculate for a lower
sample rate.
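A rough numpy sketch of that cycle-replay scheme (the window choice and the missing amplitude normalization are simplifications for illustration):

    import numpy as np

    def cycle_repeat_shift(x, sr, f_in=20.0, f_out=100.0):
        # pitch-shift f_in -> f_out as described: take one input cycle
        # (here 50 ms), window it, and overlap-add copies at the output
        # period (10 ms), each copy played at the original speed so the
        # spectral envelope (formants) stays put.
        # amplitude normalization of the overlapping windows is omitted.
        period_in = int(sr / f_in)
        period_out = int(sr / f_out)
        win = np.hanning(period_in)
        out = np.zeros(len(x) + period_in)
        for start in range(0, len(x) - period_in, period_in):
            cycle = x[start:start + period_in] * win
            for k in range(period_in // period_out):
                pos = start + k * period_out
                out[pos:pos + period_in] += cycle
        return out[:len(x)]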



Am 06.10.2018 um 18:27 schrieb Alex Dashevski:

I still don't understand. You change the buffer size, right?
But I don't want to change it.



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Resampling

2018-10-06 Thread gm




Am 06.10.2018 um 19:07 schrieb Alex Dashevski:

What do you mean "replay" ? duplicate buffer ?


I mean to just read the buffer for the output.
So in my example you play back 10 ms of audio (windowed of course), then
you move your read pointer and play that audio back again, and so on,
until the next "slice" or "grain" or "snippet" of audio is played back.

I have the opposite problem. My original buffer size doesn't contain a
full cycle of the pitch.

then your pitch is too low or your buffer too small - there is no way
around this, it's physics / causality.
You can decrease the number of samples in the buffer with a lower sample
rate, but not the duration/latency required.

How can I succeed to shift pitch ?

You wrote you can have a latency of < 100 ms, and 100 ms should be
sufficient for this.




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Resampling

2018-10-06 Thread gm


right

the latency required is that you need to store the complete wave cycle,
or two of them, to compare them.

(My method works a little bit differently, so I only need one wave cycle.)

So you always have this latency, regardless of what sample rate you use.

But maybe you don't need 20 Hz; for speech, for instance, I think that 100
or even 150 Hz is sufficient? I don't know.




Am 06.10.2018 um 19:34 schrieb Alex Dashevski:

If I understand correctly, resampling will not help. Right?
No other technique will help. Right?
What do you mean by "but not the duration/latency required"?




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Resampling

2018-10-06 Thread gm


You can "freeze" audio with the phase vocoder "for ever" if that ist 
what you want to do.


You just keep the magnitude of the spectrum from one point in time and 
keep it


and update the phases with the phase differences of that moment.
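As a sketch, one frame loop that does exactly this freeze (it assumes you already have two complex STFT frames one hop apart and an overlap-add resynthesis stage):

    import numpy as np

    def freeze_frames(frame_a, frame_b, n_frames):
        # sustain the sound at frame_b indefinitely: keep its magnitudes,
        # keep advancing each bin's phase by the per-hop phase difference
        # observed between the two analysis frames
        mag = np.abs(frame_b)
        dphi = np.angle(frame_b) - np.angle(frame_a)   # per-hop phase advance
        phase = np.angle(frame_b)
        frames = []
        for _ in range(n_frames):
            phase = phase + dphi
            frames.append(mag * np.exp(1j * phase))
        return frames   # run each through iFFT + overlap-add to hear the freeze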



Am 06.10.2018 um 20:02 schrieb Alex Dashevski:

Hi,
the phase vocoder doesn't have a restriction on duration?
Thanks,
Alex

> You could try a phase vocoder instead of WSOLA for time
> stretching. Latency would be the size of the fft block.




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] FFT for realtime synthesis?

2018-10-23 Thread gm

Does anybody know a real world product that uses FFT for sound synthesis?
Do you think it's feasible and makes sense?

Totally unrelated to the recent discussion here, I consider replacing (WS)OLA
granular "clouds" with spectral synthesis and was wondering if I
should use FFT for that.

I want to keep all the musical artefacts of the granular approach when
desired, and I was thinking that you can get the "grain cloud" sound when you
add noise to the phases/frequencies, for instance, and do similar things
(see the sketch below).
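For concreteness, one possible reading of that phase/frequency-noise idea as a per-frame operation (the jitter parameters and bin-scatter scheme are invented for illustration):

    import numpy as np

    def cloudify_frame(frame, phase_jitter=1.0, freq_jitter_bins=2):
        # blur one complex STFT frame the way dense grain clouds do:
        # randomize phases and scatter energy to neighbouring bins
        rng = np.random.default_rng()
        mag = np.abs(frame)
        # scatter magnitudes a few bins up or down (crude "frequency noise")
        shift = rng.integers(-freq_jitter_bins, freq_jitter_bins + 1, len(mag))
        mag = mag[np.clip(np.arange(len(mag)) + shift, 0, len(mag) - 1)]
        # add phase noise; phase_jitter = 1 gives fully random phases
        phase = np.angle(frame) + rng.uniform(-np.pi, np.pi, len(mag)) * phase_jitter
        return mag * np.exp(1j * phase)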


An advantage of using FFT instead of sinusoids would be that you don't
have to worry about partial trajectories, residual noise components and
that sort of thing.

Whether or not it would use much less CPU I am not sure; it depends on
how much overlap of frames you have.

Disadvantages I see are latency, even more so if you want an even workload,
and that the implementation is somewhat fuzzy/messy when you do a
time stretch followed by resampling.

Another disadvantage would be that you can't have immediate parameter changes,
since everything is frame based, and even though some granularity is
fine for me, the granularity of FFT would be fixed to the overlap/frame size,
which is another disadvantage.

Another disadvantage I see is the temporal blur you get when you modify
the sound.

Any thoughts on this? Experiences?



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-23 Thread gm




Am 23.10.2018 um 23:51 schrieb gm:
An advantage of using FFT instead of sinusoids would be that you dont 
have to worry
about partial trajectories, residual noise components and that sort of 
thing.


I think I should add that I want to use it on polyphonic material or any
source material, so sine oscillators are probably not the way to go because
you would need too many of them.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-23 Thread gm




Am 24.10.2018 um 02:12 schrieb gm:



Am 24.10.2018 um 00:38 schrieb David Olofson:

Simple demo song + some comments here:
https://soundcloud.com/david-olofson/eelsynth-ifft-flutesong


sounds quite nice actually


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-23 Thread gm



Am 24.10.2018 um 02:24 schrieb gm:



Am 24.10.2018 um 00:46 schrieb robert bristow-johnson:


> Does anybody know a real world product that uses FFT for sound 
synthesis?

> Do you think its feasable and makes sense?

so this first question is about synthesis, not modification for 
effects, right?  and real-time, correct?  so a MIDI Note-On is 
received and you want a note coming out as quickly as possible?




yes exactly


i don't know of a hardware product that does inverse FFT for 
synthesis.  i do know of a couple of effects algorithms that go into 
products that use FFT.  i think it's mostly about doing "fast 
convolution" for purposes of reverb.


what are you intending to synthesize?  notes?  or something more wild 
than that?  just curious.




basically a sample "mangler": you load an arbitrary sample, a loop of
music for instance, and play back parts of it in real time,
time stretched, pitch shifted, with formants corrected or altered,
backwards, forwards.
I don't need polyphonic playback, though that would be nice for some things.
Right now I do this with a granular "cloud", that is, many overlapping
grains, which can play polyphonically,
or rather paraphonically, which means that the grains play back at
different pitches simultaneously, depending on the chords you play,
but they all play back from the same sample position and have the same
kind of "treatment" like envelope or filtering.

I thought you could maybe do this and some other stuff in the spectral
domain.

the idea is to change snippets / loops of existing music into new
music; this idea is not new,

two demo tracks
https://soundcloud.com/transmortal/the-way-you-were-fake
https://soundcloud.com/traumlos-kalt/the-way-we-were-iii

they are mostly made from a snippet of Nancy Sinatras Fridays Child



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

[music-dsp] OT List Reply To

2018-10-23 Thread gm
It's quite a nuisance that the lists reply to is set to the person who 
wrote the mail

and not to the list adress


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-23 Thread gm




Am 24.10.2018 um 02:48 schrieb gm:

two demo tracks
https://soundcloud.com/transmortal/the-way-you-were-fake
https://soundcloud.com/traumlos-kalt/the-way-we-were-iii

they are mostly made from a snippet of Nancy Sinatras Fridays Child


I just realize, in case someone is really interested, I have to be more precise:

bass and synth strings that come in later on the second track are
ordinary synths;

the rest is granular, sample snippets are from Fridays Child, Some
Velvet Morning and Summer Vine by Nancy Sinatra, Robots by Balanscu
Quartett and a synth sample.

I made so many demo tracks the past days; most of them were made with
the Fridays Child sample, which has the advantage of being old school
hardcore stereo, so you get three different sources from the same time ...

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-25 Thread gm

I made a quick test,
original first, then resynthesized with time stretch and pitch shift and
corrected formants:

https://soundcloud.com/traumlos_kalt/ft-resynth-test-1-01/s-7GCLk
https://soundcloud.com/traumlos_kalt/ft-resynth-test-2-01/s-2OJ2H

sounds quite phasey and gurgely.
I am using 1024 FFT size and a 5-bin moving average to extract a
spectral envelope for formant preservation, which is probably not the
best way to do this.

I assume you would need to realign phases at transients;
sound quality isn't what you would expect in 2018...
(also I am doing the pitch shift the wrong way at the moment:
first transpose in the time domain, then FFT time stretch, because that
was easier to do for now, but this shouldn't cause an audible problem here)

about latency I don't know yet; I am using my FFT for NI Reaktor, which has
a latency of several times the FFT size and is only good for proof of
concept stuff


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-25 Thread gm

One thing I noticed is that it seems to sound better at 22050 Hz sample rate,
so I assume 1024 FFT size is too small and you should use 2048.

I don't know if that is because the DC band is too high or if the bins
are too broadband with 1024, or both?

I assume with this, and some phase realignment and a better spectral envelope,
quality would be somewhat improved.

Unfortunately Reaktor isn't the right tool at all to test these things,
you have to hack around everything; just changing the FFT size will
probably waste a whole day, so I probably won't investigate this further.







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-25 Thread gm



here is an example at 22050 Hz sample rate, FFT size 1024, smoothing for
the spectral envelope 10 bins,

and simple phase realignment: when the amplitude is greater than the last
frame's amplitude, the phase is set to the original phase, otherwise to
the accumulated phase of the time stretch.

didn't expect this to work but it seems to work
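In code, that per-frame rule might look like this sketch (per bin, with the rising-amplitude test as the transient criterion; whether the accumulator itself resets on rising bins is one possible reading):

    import numpy as np

    def stretch_phase_step(prev_mag, mag, orig_phase, acc_phase, dphi):
        # one frame of time-stretch resynthesis with the transient trick:
        # bins that got louder take the original analysis phase
        # (re-locking transients), all others continue the accumulated phase
        acc_phase = acc_phase + dphi              # normal phase-vocoder advance
        rising = mag > prev_mag                   # crude per-bin onset test
        out_phase = np.where(rising, orig_phase, acc_phase)
        return mag * np.exp(1j * out_phase), out_phase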

It seems to sound better to me, but still not as good as required:

https://soundcloud.com/traumlos_kalt/ft-resynth-test-3-phasealign-1-22k-01/s-KCHeV




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-25 Thread gm



the same sample as before, rearranged and sequenced, transposed.

sound quality and latency aside, I think the idea has some potential
https://soundcloud.com/traumlos_kalt/spectromat-test-4-01/s-7W2tR

the second part is from Nancy Sinatras Summervine

I am sorry it's all drenched in a resonant modulated delay effect,
but I think you get the idea




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] FFT for realtime synthesis?

2018-10-25 Thread gm




Am 25.10.2018 um 12:17 schrieb gm:

(also I am doing the pitch shift the wrong way at the moment,
first transpose in time domain, then FFT time stretch, cause that was 
easier to do for now

but this shouldn't cause an audible problem here)



Now I think that flaw is actually the way to go.

Instead of doing it the standard way,

FFT time stretch & filtering -> time domain pitch shift

where you need an uneven workload (not a fixed number of FFTs/second) and
additional latency to write the waveform before you can read and transpose it,

my proposal is:

Offline process:
FFT convert to spectrum with amplitude, phase and phase derivative
-> create a multisample (multispectra), one spectrogram per half octave

Realtime process:
select multispectrum -> iFFT time stretch and pitch shift in the frequency domain
(without moving content from bin to bin, hence the multispectrum for
each 1/2 octave)

this way you have an even workload (fixed number of FFTs/second), and
latency is just the time you allow for the iFFT, which can be as short as 1 sample

8-)
Posting this here to prevent patents ;-) , but what do you think, do I
make sense?


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] spectral envelope Re: FFT for realtime synthesis?

2018-10-26 Thread gm



it seems that my artefacts have mostly to do with the spectral envelope.

What would be an efficient way to extract a spectral envelope when
you have a stream of bins, that is, one bin per sample, repeating

0,1,2,... 1023,0,1,2...
and the same stream backwards

1023,1022,...0,1023,1022...

?

I was using a recursive moving average on the stream of amplitudes,
forwards and backwards, but that doesn't work so well.
It turns out that the recursive filter can assume negative values even
though the input is all positive.
Replacing it with a FIR average filter worked, but there's still room for
improvement.

I don't want to use cepstral filtering for several reasons; it should be
simple yet efficient (complexity, latency, cpu).

any ideas?

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] spectral envelope Re: FFT for realtime synthesis?

2018-10-26 Thread gm



here I am using a 5 point average on the lower bands and a 20 point
average on the higher bands.

doesn't sound too bad now, but I am still looking for a better solution

https://soundcloud.com/traumlos_kalt/spectromat-4-test/s-3WxpJ





___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] spectral envelope Re: FFT for realtime synthesis?

2018-10-27 Thread gm



Now I do it like this: 4 moving average FIRs,
5, 10, 20 and 40 taps,
and a linear blend between them based on log2 of the bin number.

I filter forwards and backwards; backwards after the shift of the bins
for formant shifting. The shift is done reading with a linear interpolation
from the forward prefiltered bins.

not very scientific but it works, though there is quite some room for
improvement, sonically... (a sketch of the octave-dependent smoothing is below)

here is how it sounds with a 1024 FFT at 22050 Hz SR with four overlaps:

https://soundcloud.com/traumlos_kalt/spectronmat-4e-2b-01/s-DM4kQ

first transpositions with corrected formants, then extreme formant shifting

I am not sure about the sound quality; it's still not good enough for a product.
I think you need 8 overlaps to reduce granularity, and a better spectral
envelope and a better transient detection.
(I can't do this in Reaktor though, the structure will get too messy
and latency way too much)

any comments, ideas for improvements are appreciated
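A sketch of the octave-dependent smoothing described above; the exact blend curve between the four FIR lengths is a guess, since the post only says it follows log2 of the bin number:

    import numpy as np

    def smooth_envelope(mag):
        # average over ~5 bins at the bottom, doubling per octave up to
        # ~40 bins, by crossfading four fixed-length moving averages
        taps = [5, 10, 20, 40]
        smoothed = [np.convolve(mag, np.ones(t) / t, mode="same") for t in taps]
        bins = np.arange(len(mag))
        # 0..3 blends between the four filters, wider for higher bins
        sel = np.clip(np.log2(np.maximum(bins, 1)) / np.log2(len(mag)) * 4, 0, 3)
        lo = sel.astype(int)
        hi = np.minimum(lo + 1, 3)
        frac = sel - lo
        stack = np.stack(smoothed)
        return (1 - frac) * stack[lo, bins] + frac * stack[hi, bins]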
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] pitch shifting in frequency domain Re: FFT for realtime synthesis?

2018-10-27 Thread gm



Now I tried pitch shifting in the frequency domain instead of the time
domain, to get rid of one transform step, but it sounds bad and phasey etc.

I do it like this:

multiply the phase difference by the frequency factor and add it to the
accumulated phase,

and shift the bins according to the frequency factor.

again there is a formant correction, and the phase is reset to the
original phase if the amplitude is larger than it was in the previous frame
(one possible per-frame reading is sketched below)
with a 1024 FFT size it doesn't work at 44.1 kHz; it works at 22050 Hz but
sounds like there is a flanger going on, and especially the bass seems odd:
https://soundcloud.com/traumlos_kalt/freq-domain-p-shift-test-1/s-QZBEr

first original then resynthesized

is this the quality that is to be expected with this approach?
am I doing it the right way?


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] pitch shifting in frequency domain Re: FFT for realtime synthesis?

2018-10-28 Thread gm




Am 28.10.2018 um 10:46 schrieb Scott Cotton:
- the quantised pitch shift is only an approximation of a continuous 
pitch shift because
the sinc shaped realisation of a pure sine wave in the quantised 
frequency domain can occur
at different distances from the bin centers for different sine waves, 
shifting bins doesn't do this

and thus isn't 100% faithful.


I think this is one of the problems; frequency wise it seems to work
better at 11025 Hz sample rate with 1024 FFT size,
so I assume you would really need 4096 and 8 overlaps minimum for 44 kHz.
It's hard to tell because I can't test more than 4 overlaps in Reaktor
right now, it will get too complicated, and with that temporal spacing
it's difficult to judge if a larger FFT is all that's needed.

I am not sure if I calculate the principal value of the phase difference
correctly.
I just wrap it back into the -pi..pi range (as in the sketch below), which
seems right to me, but maybe I am missing something.
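That wrapping is the usual principal-argument step; as a one-liner sketch:

    import numpy as np

    def princarg(phi):
        # wrap a phase difference back into [-pi, pi], the principal
        # value used in phase-vocoder frequency estimation
        return phi - 2.0 * np.pi * np.round(phi / (2.0 * np.pi))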


From the sound clip, I'd guess that you might have some other problems 
related to normalising the

synthesis volume/power


that's possible, but either I don't understand this point or it wouldn't 
matter so much?




The best quality commonly used pitch shift comes from a phase vocoder 
TSM: stretch the time
and then resample (or vice versa) so that the duration of input equals 
that of output.


that's what I did before but I am hoping to get something that is more 
suitable for real time,
with less latency, calculating the forward transform and spectral 
envelopes offline




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] pitch shifting in frequency domain Re: FFT for realtime synthesis?

2018-10-28 Thread gm

to sum it up, assumptions:

- for the phase vocoder approach you need an FFT size of 4096 @ 44.1 kHz
SR, and

- 8 or rather 16 overlaps at this FFT size and SR for a decent quality

- you need two up-to-200-tap FIR filters for a spectral envelope
on an ERB scale (or similar) at this FFT size (you can precalculate this
offline though)

- if you calculate a 4096 iFFT just in time (one bin per sample) you
have a latency of ~100 ms with these parameters; sped up with 16
simultaneous FFT overlaps, 100/16 = ~5 ms, which would be usable

not sure if all of these assumptions are correct, but
I assume these are the reasons why we don't see so many real time
applications with this technique.

It's doable, but on the border of what is practically useful (in a VST
for instance), I think.
for instance) I think







___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] pitch shifting in frequency domain Re: FFT for realtime synthesis?

2018-10-28 Thread gm



Am 28.10.2018 um 18:05 schrieb Scott Cotton:


- you need two up to 200 tap FIR filters for a spectral envelope
on an ERB scale (or similar) at this FFT size (you can
precalculate this
offline though)


Could you explain more about this?  What exactly are you doing with 
ERB and offline calculation of

spectral envelopes?

I am not using ERB at the moment; I was thinking ahead about how to do it
more properly.

What I do at the moment is filter the amplitude spectrum with a
moving average FIR filter; I am using a 5-bin average on the lower bands
and a 40-bin average on the highest bands (for an FFT size of 1024 at
22050 Hz sample rate), blending between filter lengths depending on the
log2 of the bin number.

In other words, I double the moving average filter length each octave.

I filter forwards and backwards through the bins.

This is my spectral envelope, which I use to whiten the original signal
(divide the original spectrum's amplitudes by the smoothed spectrum's
amplitudes) and then use a shifted version of the averaged spectrum to
imprint the corrected formants (multiplying, like vocoding) on the
whitened spectrum. (A sketch of this step follows below.)
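A sketch of this whiten-then-reimprint step; it reuses the smooth_envelope() sketch from earlier in the thread, and the linear-interpolation envelope shift mirrors the description above:

    import numpy as np

    def correct_formants(spec, shift_factor):
        # whiten by the smoothed envelope, then re-apply a shifted copy of it.
        # smooth_envelope() is the octave-dependent smoother sketched earlier.
        mag = np.abs(spec)
        env = smooth_envelope(mag) + 1e-12
        white = spec / env                         # flatten the formants
        bins = np.arange(len(mag)) / shift_factor  # read the envelope shifted
        shifted_env = np.interp(bins, np.arange(len(mag)), env)
        return white * shifted_env                 # imprint shifted formants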

To do this more properly, I assume that the averaging filters should be
based on an ERB scale, though what I do is somewhat similar. Then you
would need to average about 200 samples for the highest ERB bands.

My idea was to use the phase vocoder in a sample slicer, so that you can
stretch and pitch shift sample slices, with reconstructed formants.
For this you could precalculate the forward FFT, the spectral
envelopes, and the phase differences/frequencies, so you only have the
latency and CPU of the inverse transform.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
