Re: [music-dsp] resampling

2018-07-19 Thread Esteban Maestre

Hi Alex,


This is a good read:

https://ccrma.stanford.edu/~jos/resample/


Using Google, I found somebody who used the LGPL code available at 
Julius' site:


https://github.com/intervigilium/libresample
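
Not Android-specific, but for the fixed 48k/8k case below, the core is just a lowpass FIR plus factor-6 decimation/interpolation. A minimal C sketch (the windowed-sinc design, tap count, cutoff, and buffer handling are my own illustrative choices, not taken from libresample):

/* Minimal 48 kHz <-> 8 kHz sketch in C: one lowpass FIR prototype,
 * factor-6 decimation on the way down, zero-stuffing plus the same
 * filter (times 6) on the way up. Not production code. */
#include <math.h>
#include <stddef.h>

#define PI     3.14159265358979323846
#define RATIO  6      /* 48000 / 8000 */
#define NTAPS  96     /* FIR length */

static double h[NTAPS];   /* lowpass, cutoff at the 8 kHz Nyquist */

static void design_lowpass(void)
{
    const double fc = 0.5 / RATIO;   /* normalized cutoff, cycles/sample at 48 kHz */
    for (int n = 0; n < NTAPS; n++) {
        double m = n - (NTAPS - 1) / 2.0;
        double s = (m == 0.0) ? 2.0 * fc : sin(2.0 * PI * fc * m) / (PI * m);
        double w = 0.54 - 0.46 * cos(2.0 * PI * n / (NTAPS - 1)); /* Hamming */
        h[n] = s * w;
    }
}

/* 48 kHz -> 8 kHz: lowpass, then keep every 6th sample. */
static void down_48k_to_8k(const float *in48, size_t n48, float *out8)
{
    for (size_t m = 0; m < n48 / RATIO; m++) {
        double acc = 0.0;
        for (int k = 0; k < NTAPS; k++) {
            long i = (long)(m * RATIO) - k;
            if (i >= 0) acc += h[k] * in48[i];  /* history before 0 treated as silence */
        }
        out8[m] = (float)acc;
    }
}

/* 8 kHz -> 48 kHz: insert 5 zeros between samples, lowpass, gain 6. */
static void up_8k_to_48k(const float *in8, size_t n8, float *out48)
{
    for (size_t n = 0; n < n8 * RATIO; n++) {
        double acc = 0.0;
        for (int k = 0; k < NTAPS; k++) {
            long i = (long)n - k;
            if (i >= 0 && i % RATIO == 0) acc += h[k] * in8[i / RATIO];
        }
        out48[n] = (float)(RATIO * acc);
    }
}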


Good luck!

Esteban


On 7/19/2018 2:15 PM, Alex Dashevski wrote:

Hi,

I need to convert 48 kHz to 8 kHz on the audio input, and 8 kHz back to 48 kHz on the audio output.

Could you explain how to do it?
I need to implement this on Android (NDK).
Thanks,
Alex




--

Esteban Maestre
Computational Acoustic Modeling Lab
Department of Music Research, McGill University
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] parametric string synthesis

2018-03-14 Thread Esteban Maestre

Nice demos!

In

http://ieeexplore.ieee.org/document/7849104/

we point to a multi-modal string quartet dataset (audio, contact mics, mocap, video, etc.) that we recorded some time ago. I believe it's also listed on the MTG-UPF website.


As for your excitation signal, perhaps some temporary "chaos" in your
oscillator synchronization method might help with the attacks.

Cheers,

Esteban



On 3/14/2018 1:45 PM, gm wrote:

I made a little demo for parametric string synthesis I am working on:

https://soundcloud.com/traumlos_kalt/parametric-strings-test/s-VeiPk

It's a morphing oscillator made from basic "virtual analog" oscillator 
components
(with oscillator synch) to mimic the bow & string "Helmholtz" 
waveform, fed into a simplified body filter.


The body is from a cello and morphed in size for viola, cello and bass 
timbres

(I know that's not accurate).
It's made from a very sparse stereo FIR filter (32 taps).
It doesn't sound like the real instrument body response, but the effect still sounds somewhat physical to me.


The idea is to replace the VA "Helmholtz" oscillator with a wavetable oscillator (with synch?), controlled by parameterized playing styles, to be more flexible and more natural-behaving than sample libraries.
And a better body filter.

The advantage over waveguide modeling with a bow model would be that you don't have to play the bow with accurate pressure and velocity, and that it is more CPU-friendly and more flexible with regard to artificial timbres and timbre morphing.


So far it's a private hobby project in Reaktor 5, but I believe it has some potential.
It doesn't sound like samples yet, but maybe it will once the model is improved...


At least it can provide an instrument with a hybrid sound between virtual analog and physical, which is something I love to use in my music. I've used the body filter with synths quite often.


So far the "Helmholtz" waveform is made from assumptions like that 
that it behaves like
a synched oscillator depending on the ratio between the two sides of 
the string,

which might not be true.
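
To make that assumption concrete: in its most naive form it is a hard-synced saw whose slave rate is set by the segment ratio. A C sketch of that reading (my interpretation for illustration, not the actual Reaktor patch; no band-limiting, so audible aliasing is expected):

/* Naive hard-sync sketch: a master phase at the fundamental resets a
 * slave saw whose rate is the master's times `ratio`, here standing in
 * for the ratio between the two string segments. */
#include <stddef.h>

typedef struct {
    double master_phase;   /* 0..1, at the fundamental f0 */
    double slave_phase;    /* 0..1, hard-synced saw */
} sync_osc;

static void sync_osc_run(sync_osc *o, double f0, double ratio,
                         double sr, float *out, size_t n)
{
    const double master_inc = f0 / sr;
    const double slave_inc  = ratio * master_inc;
    for (size_t i = 0; i < n; i++) {
        o->master_phase += master_inc;
        if (o->master_phase >= 1.0) {   /* master wraps: reset the slave */
            o->master_phase -= 1.0;
            o->slave_phase   = 0.0;
        }
        o->slave_phase += slave_inc;
        if (o->slave_phase >= 1.0) o->slave_phase -= 1.0;
        out[i] = (float)(2.0 * o->slave_phase - 1.0);   /* bipolar saw */
    }
}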

Why I am posting this:
Maybe someone here plays an electric solid-body violin or something similar and can provide samples of bow & string waveforms with different playing styles and notes for analysis?
And would have an interest in joining efforts to create this instrument?
Or maybe someone even knows of a source for such waveforms?




--

Esteban Maestre
Computational Acoustic Modeling Lab
Department of Music Research, McGill University
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] tracking drum partials

2017-07-27 Thread Esteban Maestre

https://ccrma.stanford.edu/~scottl/thesis.html

Esteban


On 7/27/2017 4:02 PM, Thomas Rehaag wrote:

@Esteban:
Have you got a link to a (Levine and Smith, 98) PDF? I found the other one, and it looks promising after a short glimpse.


--

Esteban Maestre
Computational Acoustic Modeling Lab
Department of Music Research, McGill University
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] tracking drum partials

2017-07-26 Thread Esteban Maestre

Hi Thomas,

Sinusoidal-only modeling could be limiting for some of the intrinsic 
features of percussive sounds.
A possibility would be to encode partials + noise (Serra and Smith, 89) 
or partials + noise + transients (Levine and Smith, 98).
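
For the tracking stage itself (building tracks from per-frame peaks, as described below), a common starting point is greedy nearest-frequency continuation. A bare-bones C sketch (the data layout and the 3% tolerance are illustrative assumptions, not from either paper):

/* Greedy frame-to-frame peak matching for partial tracking. Each
 * analysis frame yields (freq, amp) peaks; a live track is continued
 * by the closest unclaimed peak in frequency, within a relative
 * tolerance (npeaks <= MAX_PEAKS assumed); unclaimed peaks would
 * seed new tracks. */
#include <math.h>

#define MAX_PEAKS 64
#define TOL 0.03                  /* +-3% frequency deviation allowed */

typedef struct { double freq, amp; int alive; } track;

static void continue_tracks(track *tracks, int ntracks,
                            const double *pk_freq, const double *pk_amp,
                            int npeaks)
{
    int claimed[MAX_PEAKS] = {0};
    for (int t = 0; t < ntracks; t++) {
        if (!tracks[t].alive) continue;
        int best = -1;
        double best_dev = TOL;
        for (int p = 0; p < npeaks; p++) {
            if (claimed[p]) continue;
            double dev = fabs(pk_freq[p] - tracks[t].freq) / tracks[t].freq;
            if (dev < best_dev) { best_dev = dev; best = p; }
        }
        if (best >= 0) {                  /* continue the track */
            claimed[best]  = 1;
            tracks[t].freq = pk_freq[best];
            tracks[t].amp  = pk_amp[best];
        } else {
            tracks[t].alive = 0;          /* no match: the track dies */
        }
    }
}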


Cheers,

Esteban



On 7/26/2017 4:37 PM, Thomas Rehaag wrote:

Dear DSP Experts,

can anybody tell me how to track drum partials? Is it even possible?
What I'd like to have are the frequency & amplitude envelopes of the 
partials so that I can rebuild the drum sounds with additive synthesis.


I've tried it with heavily overlapping FFTs and then building tracks from the peaks.
Feeding the results into the synthesis (~60 generators) produced halfway acceptable sounds, of course after playing with FFT sizes and overlap step sizes for a while.


But those envelopes were strange, and it was very frustrating to see the results when I analyzed a synthesized sound containing some simple sine sweeps this way.
I got a good result for the loudest sweep, but the rest was scattered into short segments with strange frequencies.


Large FFTs have the frequency resolution to separate the partials but poor time resolution, so you don't even see the higher partials, which are gone within a short part of the buffer.
With small FFTs, every bin is crowded with several partials. And any kind of masking adds more artifacts the smaller the FFT is.


I also tried band-pass filter banks. Even worse!
It's always time resolution and frequency resolution fighting each other too much for this problem.


Any solution for this?

Best Regards,

Thomas




--

Esteban Maestre
Computational Acoustic Modeling Lab
Department of Music Research, McGill University
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Music applications/DSP online masters

2016-07-20 Thread Esteban Maestre

Hi Liam,

I believe

https://ccrma.stanford.edu/academics/masters

is an excellent program. It's not an online program, but at least it 
happens close to where you live.


At Kadenze.com you'll find related online courses.

Esteban

On 7/20/2016 3:11 PM, Liam Sargent wrote:

Hello all,

I've been subscribed to this list for a while and have found the conversation fascinating. I recently graduated with a B.S. in Computer Science and have a strong interest in continuing my education in DSP programming for audio applications. I have recently started a full-time job in the SF Bay Area as a software engineer, so I will likely have to complete course material online.


I'm wondering if anyone on this list has recommendations for a solid online M.S. program focused on audio signal processing/music applications, or just resources for continuing my learning in general.


Liam




--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Cheap spectral centroid recipe

2016-02-25 Thread Esteban Maestre

Hi there,

On 2/25/2016 3:57 PM, Evan Balster wrote:
When working with tonal signals, it has been proposed that brightness be normalized through division by fundamental frequency.  This produces a dimensionless (?) metric which is orthogonal to the tone's pitch, and does not typically fall below a value of one.  Whether such a metric corresponds more closely to brightness than the spectral centroid in hertz depends on a psychoacoustic question:  Do humans perceive brightness as a quality which is independent of pitch?


Interesting topic.

Finding a (more or less) universal numerical recipe that can be used to predict a perceptual, verbally designated attribute (in this case "brightness") is in itself a difficult problem with many potential biases. One example is the definition of "brightness" itself, which might be subject to language-specific and tone- or instrument-specific biases.


Regarding the methods proposed in this thread, I personally believe one could split an audio frame into deterministic (partials) and stochastic (noise floor) components (see Xavier Serra's work from 1989), propose different "centroid" measures for each of these components, and then combine them in some desired way.
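
For reference, the baseline quantity this thread starts from (spectral centroid in Hz, optionally divided by f0 as in Evan's quote) is, in a minimal C sketch:

/* Spectral centroid sketch: amplitude-weighted mean frequency over the
 * magnitude spectrum of one frame. `mag` holds nbins magnitudes of a
 * real FFT with bin spacing sr/fftsize; names are illustrative. */
#include <stddef.h>

double spectral_centroid_hz(const double *mag, size_t nbins,
                            double sr, size_t fftsize)
{
    double num = 0.0, den = 0.0;
    for (size_t k = 0; k < nbins; k++) {
        double fk = (double)k * sr / (double)fftsize;   /* bin frequency */
        num += fk * mag[k];
        den += mag[k];
    }
    return (den > 0.0) ? num / den : 0.0;
}

/* Dimensionless, pitch-normalized "brightness" (typically >= 1). */
double brightness_norm(const double *mag, size_t nbins,
                       double sr, size_t fftsize, double f0)
{
    return spectral_centroid_hz(mag, nbins, sr, fftsize) / f0;  /* f0 > 0 */
}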


In any case, many researchers have studied the /orthogonality/ between 
perceived brightness and fundamental frequency in certain contexts. This 
is an example:


http://newt.phys.unsw.edu.au/~jw/reprints/SchubertWolfe06.pdf

But if I had to give a name, I would probably go for Stephen McAdams.

Cheers,
Esteban

--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Fourier and its negative exponent

2015-10-05 Thread Esteban Maestre
By the way: complex-conjugate does not mean it rotates in opposite 
direction; check out this picture:


http://www.eetasia.com/STATIC/ARTICLE_IMAGES/200902/EEOL_2009FEB04_DSP_RFD_NT_01c.gif

Rotation in opposite direction happens with negative frequencies.

Cheers,
Esteban

On 10/5/2015 8:06 PM, Stijn Frishert wrote:

Thanks Allen, Esteban and Sebastian.

My main thought error was thinking that negating the exponent was the complex equivalent of flipping the sign of a non-complex sinusoid (sin and -sin). Of course it isn't: e^-a isn't the same as -e^a. The real parts of a complex sinusoid and of its complex conjugate are the same; they only rotate in different directions.


And so the minus is to negate that rotation in the complex plane. 
Correct me if I’m wrong, of course.


Stijn

On 5 Oct 2015, at 15:51, Allen Downey <dow...@allendowney.com> wrote:


In Chapter 7 of Think DSP, I develop the DFT in a way that might help 
with this:


http://greenteapress.com/thinkdsp/html/thinkdsp008.html

If you think of the inverse DFT as multiplication by a matrix M whose columns are complex exponential basis vectors, the (forward) DFT is multiplication by the inverse of M.  Since M is unitary (up to scaling), its inverse is its conjugate transpose.  The conjugation is the source of the negative sign when you write the DFT in summation form.
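
In symbols (a sketch in the standard unnormalized DFT convention, where M is unitary up to a factor):

X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-i 2\pi k n / N},
\qquad
x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] \, e^{+i 2\pi k n / N}

With M_{nk} = e^{i 2\pi n k / N} we have M M^H = N I, hence M^{-1} = (1/N) M^H; the conjugation in M^H is exactly the minus sign in the forward sum.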


Allen



On Mon, Oct 5, 2015 at 9:28 AM, Stijn Frishert <stijnfrish...@gmail.com> wrote:


Hey all,

In trying to get to grips with the discrete Fourier transform, I
have a question about the minus sign in the exponent of the
complex sinusoids you correlate with doing the transform.

The inverse transform doesn’t contain this negation and a quick
search on the internet tells me Fourier analysis and synthesis
work as long as one of the formulas contains that minus and the
other one doesn’t.

So: why? If the bins in the resulting spectrum represent how much of a sinusoid was present in the original signal (cross-correlation), I would expect synthesis to use these exact same sinusoids to get back to the original signal. Instead it uses their inverse! How can the resulting signal not be 180° phase-shifted?

This may be textbook DSP theory, but I’ve looked and searched, and everywhere seems to skip over it as if it’s self-evident.

Stijn Frishert




___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Fourier and its negative exponent

2015-10-05 Thread Esteban Maestre

"does not mean" > "does mean"

Esteban

On 10/5/2015 8:47 PM, Esteban Maestre wrote:
By the way: complex-conjugate does not mean it rotates in opposite 
direction; check out this picture:


http://www.eetasia.com/STATIC/ARTICLE_IMAGES/200902/EEOL_2009FEB04_DSP_RFD_NT_01c.gif

Rotation in opposite direction happens with negative frequencies.

Cheers,
Esteban



--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Fourier and its negative exponent

2015-10-05 Thread Esteban Maestre

Hi Stijn,

That "minus" comes from complex-conjugate (of Euler's formula). To find 
the projection coefficients (Fourier Transform), in each of the terms in 
the summation one computes the inner product of two complex vectors: the 
complex sinusoid you are "testing", and its complex-conjugate. The 
resulting complex number (each bin is a complex number) will not only 
tell you "how much of a sinusoid was present in the original signal", 
but also its relative phase.
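
In symbols (a sketch, using the inner-product convention <u, v> = sum_n u[n] conj(v[n]), and with the correction from the follow-up message applied: the two vectors are your signal and the test sinusoid):

s_k[n] = e^{i 2\pi k n / N},
\qquad
X[k] = \langle x, s_k \rangle
     = \sum_{n=0}^{N-1} x[n] \, \overline{s_k[n]}
     = \sum_{n=0}^{N-1} x[n] \, e^{-i 2\pi k n / N}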


This is an excellent read:

https://ccrma.stanford.edu/~jos/st/

Cheers,
Esteban





On 10/5/2015 4:28 PM, Stijn Frishert wrote:

Hey all,

In trying to get to grips with the discrete Fourier transform, I have a 
question about the minus sign in the exponent of the complex sinusoids you 
correlate with doing the transform.

The inverse transform doesn’t contain this negation and a quick search on the 
internet tells me Fourier analysis and synthesis work as long as one of the 
formulas contains that minus and the other one doesn’t.

So: why? If the bins in the resulting spectrum represent how much of a sinusoid was present in the original signal (cross-correlation), I would expect synthesis to use these exact same sinusoids to get back to the original signal. Instead it uses their inverse! How can the resulting signal not be 180° phase-shifted?

This may be textbook DSP theory, but I’ve looked and searched, and everywhere seems to skip over it as if it’s self-evident.

Stijn Frishert


--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Fourier and its negative exponent

2015-10-05 Thread Esteban Maestre



On 10/5/2015 6:15 PM, Esteban Maestre wrote:

the complex sinusoid you are "testing", and its complex-conjugate

Sorry:

I mean "your signal and the complex sinusoid your are testing".

Esteban

--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-18 Thread Esteban Maestre



On 8/18/2015 6:41 AM, Jerry wrote:
I would think that polynomial interpolators of order 30 or 40 would 
provide no end of unpleasant surprises due to the behavior of 
high-order polynomials. I'm thinking of weird spikes, etc. Have you 
actually used polynomial interpolators of this order?


I remember going even above 40th order with no problems.
But I also remember having problems with 80th-order interpolation.
I think it's called the /Runge phenomenon/.

Esteban



--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Compensate for interpolation high frequency signal loss

2015-08-17 Thread Esteban Maestre

No experience with compensation filters here.
But if you can afford a higher-order interpolation scheme, I'd go for that.


Using Newton's Backward Difference Formula, one can construct time-varying, table-free, efficient Lagrange interpolation schemes of arbitrary order (up to 30th or 40th order) which stay within linear complexity while allowing run-time modulation of the interpolation order.


https://ccrma.stanford.edu/~jos/Interpolation/Lagrange_Interpolation.html
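
For illustration, the direct product form of order-M Lagrange interpolation, as a minimal C sketch (this is the O(M^2) textbook version; the linear-complexity, time-varying Newton backward-difference scheme is described at the link above and not reproduced here):

/* Evaluate the order-M polynomial through samples x[0..M] at
 * fractional position `pos` in [0, M]. */
double lagrange_interp(const double *x, int M, double pos)
{
    double y = 0.0;
    for (int i = 0; i <= M; i++) {
        double w = 1.0;                  /* i-th Lagrange basis weight */
        for (int j = 0; j <= M; j++)
            if (j != i)
                w *= (pos - j) / (double)(i - j);
        y += w * x[i];
    }
    return y;
}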

Cheers,
Esteban


On 8/17/2015 12:07 PM, STEFFAN DIEDRICHSEN wrote:
I could write a few lines on the topic as well, since I made such a compensation filter about 17 years ago.
So there are people that do care about the topic, but only some that find time to write something up.


;-)

Steffan


On 17.08.2015|KW34, at 17:50, Theo Verelst <theo...@theover.org> wrote:


However, no one here besides RBJ and a few brave souls seems to even 
care much about real subjects.






--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Calculating e^(i*phase) * (H + H')

2015-04-02 Thread Esteban Maestre

Hi MF,

It is difficult for me to understand your question or the context of 
your question.

I also don't get the derivative part.

One question: are H and H' complex conjugates of each other, defined for positive and negative frequencies respectively?


It would be helpful to know what you are trying to do, or at least which paper you are trying to learn from.

Cheers,
Esteban

On 4/1/2015 4:07 PM, MF wrote:

Hi Forum,


I am trying to implement a formula from a paper:


Y(w) = e^(i*phase) * (H(w) + H’(w))


Where H is the Fourier transform of a window function h (a Blackman window in my case), and H' is the derivative of H (in the paper, H and H' are called spectral motifs). A signal will then be generated from ifft(Y).


In the paper it says:


In practice, the signals to be synthesized are real, and the inverse FFT
algorithm only uses the positive frequency half spectrum, *so only one of
the two spectral motifs must be synthesized*.

I don’t understand what it means by “only one of the two spectral motifs must be synthesized”. How do I decide which spectral motif to use?


ps. I simplified the formula above. In case you want to see the complete one:


Y(wk) = e^(i*phase) * (0.5 * A * H(wk - wf) + 0.5 * B * H'(wk - wf)),

   for |wk - wf| = K * 2pi / N


ps2. Sorry, I accidentally posted the same question without a subject an hour ago; is there a way to delete it from the archive?

Thanks in advance!!!

MF


--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] oversampled Fourier Transform

2015-03-31 Thread Esteban Maestre

Hi MF,

You might be misreading the frequency-domain versions of your windows.

If you think about it, computing windows of different lengths (your oversampled windows) can be seen as obtaining windows of the same duration, but sampled /at different sample rates/.
This will lead you to frequency-domain signals that are sampled at different frequency resolutions, and therefore you should interpret your frequency-domain samples differently. Let me clarify:


Let's assume that you compute three different windows, w1(n), w2(n), and 
w3(n). All three windows last 1 second.
w1(n) was computed by calling blackman(10). Equivalently, w2(n) was 
computed by calling blackman(20), and w3(n) by calling blackman(40).
You can see this as having three windows at different sample rates of 
10Hz, 20Hz, and 40Hz.
Now if we take the FFT, we obtain W1[k], W2[k], and W3[k], with k 
being your frequency-domain sample (or bin).
The number of frequency-domain samples (bins) of W1[k] is 10, while 
for W2[k] we have 20 bins, and for W3[k] we have 40 bins.

From sampling theory, we know that:
- the first 5 bins of W1[k] correspond to frequencies ranging from 0Hz to 5Hz (i.e., half the corresponding sample rate);
- the first 10 bins of W2[k] correspond to frequencies ranging from 0Hz to 10Hz (idem);
- the first 20 bins of W3[k] correspond to frequencies ranging from 0Hz to 20Hz (idem).
This means that the first 5 bins of all three W1[k], W2[k], and W3[k] correspond to frequencies ranging from 0Hz to 5Hz.
Bingo! Now you can see that all three time-domain windows, whose only difference was the sample rate at which you obtained them, present /similar/ behavior in the frequency domain.
What we have learned here is that bin-to-bin magnitude (or phase) comparisons only make sense if the bins correspond to the same frequency, and in this case they do : )
By oversampling the time-domain signal, the only thing you are doing is adding bins in the higher frequency region. In our example, W2[k] has more information than W1[k], but this new information only appears in the frequency region above 5Hz, which was /unknown/ to w1(n) in the first place.


Now, going back to your objective.
To obtain higher resolution in the lower frequency region, one could take W1[k] and directly resample or oversample it. This could be done by interpolating the bins of W1[k]. Interpolation can be accomplished by convolving with a sinc function, which, in the time domain, can be seen as multiplying by a rectangular function. Zero-padding is about changing the width of that rectangular function. As we all have done in the past : ), maybe you could read a bit on Windowing, the Short-Time Fourier Transform, Zero-Padding, and Sinc Interpolation. It will all fall into place ; )
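
A toy C sketch of the zero-padding step (naive O(N^2) DFT for clarity; the Hann window and the sizes are arbitrary illustrative choices):

/* Zero-padding sketch: a Hann window of length L sits in a buffer of
 * length NFFT >= L (rest zeros); the DFT of that buffer samples the
 * same underlying spectrum on a grid NFFT/L times denser. Use a real
 * FFT in practice. */
#include <math.h>
#include <stdio.h>

#define PI   3.14159265358979323846
#define L    64
#define NFFT 512            /* 8x zero-padding */

int main(void)
{
    static double x[NFFT];  /* zero-initialized */
    for (int n = 0; n < L; n++)
        x[n] = 0.5 - 0.5 * cos(2.0 * PI * n / (L - 1));   /* Hann window */

    for (int k = 0; k < NFFT / 2; k++) {    /* interpolated magnitude spectrum */
        double re = 0.0, im = 0.0;
        for (int n = 0; n < NFFT; n++) {
            double a = -2.0 * PI * (double)k * n / NFFT;
            re += x[n] * cos(a);
            im += x[n] * sin(a);
        }
        printf("%d\t%g\n", k, sqrt(re * re + im * im));
    }
    return 0;
}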


I totally recommend

https://www.createspace.com/3177793
https://ccrma.stanford.edu/~jos/mdft/ ,

but there are many, many other good materials out there!

Cheers,
Esteban






On 3/31/2015 7:10 PM, robert bristow-johnson wrote:

On 3/31/15 6:53 PM, Justin Salamon wrote:


To expand on what Ethan wrote, it sounds like what you're trying to do is zero-pad the signal:
http://www.dsprelated.com/dspbooks/mdft/Zero_Padding.html

That said, whilst zero padding will give you an interpolated spectrum in
the frequency domain, you may still miss the true location of your 
peaks,


how so?  the only mechanism for missing the true location would be 
sidelobe interference from adjacent peaks, possibly including the 
peak(s) in negative frequency.



and it will also increase the computational cost.

Another thing to look at is re-estimating the exact location of the peaks in your spectrum using parabolic interpolation:
http://www.dsprelated.com/dspbooks/sasp/Quadratic_Interpolation_Spectral_Peaks.html



quadratic interpolation of peaks (given the three discrete samples 
around the peak) is a good ol' standby.  i use it for autocorrelation 
to get the period to a fractional-sample precision.  but i don't see 
why it would be more accurate than zero-padding before the FFT.


Or, you could use the phase instead to compute the instantaneous 
frequency:

http://en.wikipedia.org/wiki/Instantaneous_phase#Instantaneous_frequency


needs a Hilbert transformer.  and you would also need to isolate the 
sinusoidal partial from the other partials.


i think so, anyway.
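
For reference, the quadratic-interpolation recipe from the link above, as a minimal C sketch (log-magnitudes of the three bins around a strict local maximum; names are mine):

/* Fit a parabola to the log-magnitudes (a, b, c) at bins k-1, k, k+1
 * around a local maximum. bin_offset is the fractional-bin position
 * of the true peak, in (-0.5, 0.5); peak_val is its interpolated
 * log-magnitude. */
typedef struct { double bin_offset, peak_val; } qpeak;

static qpeak quad_interp(double a, double b, double c)
{
    qpeak r;
    r.bin_offset = 0.5 * (a - c) / (a - 2.0 * b + c);
    r.peak_val   = b - 0.25 * (a - c) * r.bin_offset;
    return r;
}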




--

Esteban Maestre
CIRMMT/CAML - McGill Univ
MTG - Univ Pompeu Fabra
http://ccrma.stanford.edu/~esteban

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp