Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-09 Thread gm


thanks for your offer, I can't really read MATLAB code and always have a 
hard time

figuring out even the essentials of such code

My phase vocoder already works kind of satisfactorily now as a demo in 
Native Instruments Reaktor.
I do the forward FFT offline and the iFFT "just in time", that is, 12 
"butterflies" per sample,
so you could bring latency down by speeding up the iFFT, though I am not 
sure what a reasonable latency is.
I made a poll on an electronic musicians' board and most people voted 
for 10 ms as just tolerable.


I am halfway content with the way it works now. For analysis I have 
twelve FFTs in parallel, one for each octave,
with window sizes based on the ERB scale per octave, so it's not totally bad 
on transients, but not good either.
I assume there is still some room for improvement in the windows, but 
not very much.


FFT size is 4096, and now I am searching for ways to improve it, mostly 
regarding transients.
But I am not sure if that's possible with the FFT, because I still have 
pre-ringing, and I can't see
how to avoid that completely, since you can only shorten the windows on 
the low octaves so much.

Maybe with an asymmetric window?
If you do the analysis with an IIR filter bank (or wavelets) you kind of 
have asymmetric windows, that is, the filters
integrate in a causal way, with a decaying "window" they see, but I am 
not sure if this can be adapted somehow

to an FFT.

Another way that would reduce reverberation and shorten transient times 
somewhat would
be to use shorter FFTs for the resynthesis; this would also bring down 
CPU a bit, and latency.


So this is where I am at the moment

Am 09.11.2018 um 23:29 schrieb robert bristow-johnson:


i don't wanna lead you astray.  i would recommend staying with the 
phase vocoder as a framework for doing time-frequency manipulation.  
it **can** be used real-time for pitch shift, but when i have used the 
phase vocoder, it was for time-scaling and then we would simply 
resample the time-scaled output of the phase vocoder to bring the 
tempo back to the original and shift the pitch.  that was easier to 
get it right than it was to *move* frequency components around in the 
phase vocoder.  but i remember in the 90s, Jean Laroche doing that 
real time with a single PC.  also a real-time phase vocoder (or any 
frequency-domain process, like sinusoidal modeling) is going to have 
delay in a real-time process.  even if your processor is infinitely 
fast, you still have to fill up your FFT buffer with samples before 
invoking the FFT.  if your buffer is 4096 samples and your sample rate 
is 48 kHz, that's almost 1/10 second.  and that doesn't count 
processing time, just the buffering time.  and, in reality, you will 
have to double buffer this process (buffer both input and output) and 
that will make the delay twice as much. so with 1/5 second delay, that 
might be an issue.
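rbj's buffering arithmetic is easy to verify (a quick sketch; the 4096-sample buffer and 48 kHz rate are his figures, the variable names are mine):

```python
# Buffering delay of a block FFT process: a full buffer must be collected
# before the transform can run, and double buffering doubles the delay.
fft_size = 4096      # samples per FFT buffer
sample_rate = 48000  # Hz

single_buffer_ms = 1000.0 * fft_size / sample_rate
double_buffer_ms = 2 * single_buffer_ms

print(round(single_buffer_ms, 1))  # 85.3 ms, "almost 1/10 second"
print(round(double_buffer_ms, 1))  # 170.7 ms, about 1/5 second
```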


i offered this before (and someone sent me a request and i believe i 
replied, but i don't remember who), but if you want my 2001 MATLAB 
code that demonstrates a simple phase vocoder doing time scaling, i am 
happy to send it to you or anyone.  it's old.  you have to turn 
wavread() and wavwrite() into audioread() and audiowrite(), but 
otherwise, i think it will work.  it has an additional function that 
time-scales each sinusoid *within* every frame, but i think that can 
be turned off and you can even delete that modification and what you 
have left is, in my opinion, the most basic phase vocoder implemented 
to do time scaling.  lemme know if that might be helpful.
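For reference, the kind of basic phase-vocoder time scaling rbj describes might be sketched like this in Python/NumPy. This is not his MATLAB code; the function name, FFT size, and hop size are my own choices, and it is only a minimal sketch of the standard algorithm:

```python
import numpy as np

def pv_stretch(x, rate, n_fft=1024, hop=256):
    """Time-stretch x by 1/rate with a basic phase vocoder.

    rate < 1 slows the signal down (longer output), rate > 1 speeds it up.
    """
    win = np.hanning(n_fft)
    bins = np.arange(n_fft // 2 + 1)
    expected = 2 * np.pi * hop * bins / n_fft   # nominal phase advance per hop
    # analysis positions step by hop*rate; synthesis frames step by hop
    positions = np.arange(0, len(x) - n_fft - hop, hop * rate)
    out = np.zeros(len(positions) * hop + n_fft)
    phase = np.angle(np.fft.rfft(win * x[:n_fft]))
    for k, p in enumerate(positions):
        p = int(p)
        s1 = np.fft.rfft(win * x[p:p + n_fft])
        s2 = np.fft.rfft(win * x[p + hop:p + hop + n_fft])
        # measured phase advance, with the nominal advance wrapped out
        dphi = np.angle(s2) - np.angle(s1) - expected
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += expected + dphi                # accumulate the true frequency
        frame = np.fft.irfft(np.abs(s2) * np.exp(1j * phase))
        out[k * hop:k * hop + n_fft] += win * frame
    return out
```

Resampling the stretched output by the same rate then restores the original tempo while shifting the pitch, which is the route described above.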


L8r,

r b-j

 Original Message 
Subject: Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for 
realtime synthesis?

From: "gm" 
Date: Fri, November 9, 2018 5:02 pm
To: music-dsp@music.columbia.edu
--

> You get me intrigued with this
>
> I actually believe that wavelets are the way to go for such things,
> but, besides that anything beyond a Haar wavelet is too complicated
> for me

> (and I just grasp that Haar very superficially of course),
>
> I think one problem is the problem you mentioned - don't do anything
> with the bands,
> only then you have perfect reconstruction
>
> And what do you do with the bands to make a pitch shift or to
> preserve formants/do some vocoding?
>
> It's not so obvious (to me), my naive idea I mentioned earlier in this
> thread was to
> do short FFTs on the bands and manipulate the FFTs only
>
> But how? if you time stretch them, I believe the pitch goes down (that's
> my intuition only, I am not sure)
> and also, these bands alias, since the filters are not brickwall,
> and the aliasing is only canceled on reconstruction I believe?
>
> So, yes, very interesting topic, that could lead me astray for another
> couple of weeks but without any results I guess
>
> I think as long as I don't fully grasp all the properties of the FFT and
> phase vocoder I shouldn't start anything new...


Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-09 Thread gm

You get me intrigued with this

I actually believe that wavelets are the way to go for such things,
but, besides that anything beyond a Haar wavelet is too complicated for me
(and I just grasp that Haar very superficially of course),

I think one problem is the problem you mentioned - don't do anything 
with the bands,

only then you have perfect reconstruction

And what do you do with the bands to make a pitch shift or to 
preserve formants/do some vocoding?


It's not so obvious (to me), my naive idea I mentioned earlier in this 
thread was to

do short FFTs on the bands and manipulate the FFTs only

But how? if you time stretch them, I believe the pitch goes down (that's 
my intuition only, I am not sure)

and also, these bands alias, since the filters are not brickwall,
and the aliasing is only canceled on reconstruction I believe?

So, yes, very interesting topic, that could lead me astray for another 
couple of weeks but without any results I guess


I think as long as I don't fully grasp all the properties of the FFT and 
phase vocoder I shouldn't start anything new...


Am 09.11.2018 um 22:31 schrieb robert bristow-johnson:




what you're discussing here appears to me to be about perfect 
reconstruction in the context of Wavelets and Filter Banks.


there is a theorem that's pretty easy to prove that if you have 
complementary high and low filterbanks with a common cutoff at 1/2 
Nyquist, you can downsample both high and low-pass filterbank outputs 
by a factor of 1/2 and later combine the two down-sampled streams of 
samples to get perfect reconstruction of the original.  this result is 
not guaranteed if you **do** anything to either filter output in the 
filterbank.
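A minimal numerical check of that theorem, using the Haar pair as the complementary low/high filters (my own sketch, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

# Haar analysis: complementary low/high pair, each downsampled by 2
lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass band, half rate
hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass band, half rate

# Synthesis: recombine the two half-rate streams
y = np.empty_like(x)
y[0::2] = (lo + hi) / np.sqrt(2)
y[1::2] = (lo - hi) / np.sqrt(2)

print(np.allclose(x, y))  # True: perfect reconstruction, as long as
                          # nothing was done to the bands in between
```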


 Original Message 
Subject: Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for 
realtime synthesis?

From: "gm" 
Date: Fri, November 9, 2018 4:19 pm
To: music-dsp@music.columbia.edu
--
>
> hm, my application also has WOLA ...
>
> All I find is about up- and downsampling of time sequences and spectra
> of the same length.
>

...
>
> If anyone knows of an easy explanation of down- and up sampling spectra
> it would be much appreciated.
>
> Am 09.11.2018 um 19:16 schrieb Ethan Duni:
> ..
>> The only applications I know of that tolerate time-domain aliasing in
>> transforms are WOLA filter banks - which are explicitly designed to
>> cancel these (severe!) artifacts in the surrounding time-domain
>> processing.

--

r b-j                         r...@audioimagination.com

"Imagination is more important than knowledge."


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-09 Thread gm


hm, my application also has WOLA ...

All I find is about up- and downsampling of time sequences and spectra 
of the same length.


Summing adjacent bins seemed to correspond to lowpass 
filtering and decimation of time sequences,

even though it's not the appropriate sinc filter...

If I just take every other bin, it misses information that a half-sized 
spectrum derived from the
original time series would have; for instance, if only bin 1 had content 
in the double-sized spectrum,
certainly the downsized spectrum would need to reflect this as DC or 
something?


So do I have to apply a sinc filter first and then discard every other bin?

If so, can this be done with another FFT, like a cepstrum on the bins?

If anyone knows of an easy explanation of down- and up sampling spectra 
it would be much appreciated.


Am 09.11.2018 um 19:16 schrieb Ethan Duni:

gm wrote:
>This is bringing up my previous question again, how do you decimate a 
spectrum

>by an integer factor properly, can you just add the bins?

To decimate by N, you just take every Nth bin.

>the original spectrum represents a longer signal so I assume folding
>of the waveform occurs?

Yeah, you will get time-domain aliasing unless your DFT is oversampled 
(i.e., zero-padded in time domain) by a factor of (at least) N to 
begin with. For critically sampled signals the result is severe 
distortion (i.e., SNR ~= 0dB).


>but maybe this doesn't matter in practice for some applications?

The only applications I know of that tolerate time-domain aliasing in 
transforms are WOLA filter banks - which are explicitly designed to 
cancel these (severe!) artifacts in the surrounding time-domain 
processing.
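Ethan's point can be checked directly with NumPy (my own sketch): taking every 2nd bin of a 2N-point DFT gives exactly the N-point DFT of the time signal folded onto itself, which is only harmless when the second half was zero padding:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)

# Oversampled case: zero-pad x to length 2N before the DFT.
X = np.fft.fft(np.concatenate([x, np.zeros(N)]))
print(np.allclose(X[::2], np.fft.fft(x)))   # True: decimation just undoes the padding

# Critically sampled case: every 2nd bin of a 2N-point DFT equals the
# N-point DFT of the signal folded onto itself (time-domain aliasing).
y = rng.standard_normal(2 * N)
Y = np.fft.fft(y)
print(np.allclose(Y[::2], np.fft.fft(y[:N] + y[N:])))  # True
```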


Ethan D

On Fri, Nov 9, 2018 at 6:39 AM gm wrote:


This is bringing up my previous question again, how do you decimate
a spectrum
by an integer factor properly, can you just add the bins?

the original spectrum represents a longer signal so I assume folding
of the waveform occurs? but maybe this doesn't matter in practice
for some applications?

The background is still that I want to use a higher resolution for
analysis and
a lower resolution for synthesis in a phase vocoder.


Am 08.11.2018 um 21:45 schrieb Ethan Duni:

Not sure you can get the odd bins *easily*, but it is certainly
possible. Conceptually, you can take the (short) IFFT of each
block, then do the (long) FFT of the combined blocks. The even
coefficients simplify out as you observed, the odd ones will be
messier. Not sure quite how messy - I've only looked at the
details for DCT cases.

Probably the clearest way to think about it is in the frequency
domain. Conceptually, the two consecutive short DFTs are the same
as if we had taken two zero-padded long DFTs, and then
downsampled each by half. So the way to combine them is to
reverse that process: upsample them by 2, and then add them
together (with appropriate compensation for the
zero-padding/boxcar window).

Ethan D

On Thu, Nov 8, 2018 at 8:12 AM Ethan Fenn wrote:

I'd really like to understand how combining consecutive DFT's
can work. Let's say our input is x0,x1,...x7 and the DFT we
want to compute is X0,X1,...X7

We start by doing two half-size DFT's:

Y0 = x0 + x1 + x2 + x3
Y1 = x0 - i*x1 - x2 + i*x3
Y2 = x0 - x1 + x2 - x3
Y3 = x0 + i*x1 - x2 - i*x3

Z0 = x4 + x5 + x6 + x7
Z1 = x4 - i*x5 - x6 + i*x7
Z2 = x4 - x5 + x6 - x7
Z3 = x4 + i*x5 - x6 - i*x7

Now I agree because of periodicity we can compute all the
even-numbered bins easily: X0=Y0+Z0, X2=Y1+Z1, and so on.

But I don't see how we can get the odd bins easily from the
Y's and Z's. For instance we should have:

X1 = x0 + (r - r*i)*x1 - i*x2 + (-r - r*i)*x3 - x4 + (-r +
r*i)*x5 + i*x6 + (r + r*i)*x7

where r=sqrt(1/2)

Is it actually possible? It seems like the phase of the
coefficients in the Y's and Z's advance too quickly to be of
any use.

-Ethan
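As a quick numerical check (my own sketch): the even-bin identity holds, and the X1 expression above does match a direct 8-point DFT, so the formula itself is right; the open question is whether it can be computed cheaply from the Y's and Z's:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(8)

Y = np.fft.fft(x[:4])          # DFT of the first block
Z = np.fft.fft(x[4:])          # DFT of the second block
X = np.fft.fft(x)              # full-length DFT

print(np.allclose(X[::2], Y + Z))   # True: even bins are just Y + Z

# The odd bins need the sqrt(1/2) twiddle factors from the X1 formula
r = np.sqrt(0.5)
X1 = (x[0] + (r - r*1j)*x[1] - 1j*x[2] + (-r - r*1j)*x[3]
      - x[4] + (-r + r*1j)*x[5] + 1j*x[6] + (r + r*1j)*x[7])
print(np.allclose(X[1], X1))        # True
```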



On Mon, Nov 5, 2018 at 3:40 PM, Ethan Duni wrote:

You can combine consecutive DFTs. Intuitively, the basis
functions are periodic on the transform length. But it
won't be as efficient as having done the big FFT (as you
say, the decimation in time approach interleaves the
inputs, so you gotta pay the piper to unwind that). Note
that this is for naked transforms of successive blocks of
inputs, not a WOLA filter bank.

There are Dolby codecs that do similar with a suitable
flavor of DCT (type II I think?) - you have your encoder
going along at the usual frame rate, but if it detects a
string of stationary inputs it can fold them together into
one big high-res DCT and code that instead.

Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for realtime synthesis?

2018-11-09 Thread gm
This is bringing up my previous question again, how do you decimate a 
spectrum

by an integer factor properly, can you just add the bins?

the original spectrum represents a longer signal so I assume folding
of the waveform occurs? but maybe this doesn't matter in practice for 
some applications?


The background is still that I want to use a higher resolution for 
analysis and

a lower resolution for synthesis in a phase vocoder.


Am 08.11.2018 um 21:45 schrieb Ethan Duni:
Not sure you can get the odd bins *easily*, but it is certainly possible. 
Conceptually, you can take the (short) IFFT of each block, then do the 
(long) FFT of the combined blocks. The even coefficients simplify out 
as you observed, the odd ones will be messier. Not sure quite how 
messy - I've only looked at the details for DCT cases.


Probably the clearest way to think about it is in the frequency 
domain. Conceptually, the two consecutive short DFTs are the same as 
if we had taken two zero-padded long DFTs, and then downsampled each 
by half. So the way to combine them is to reverse that process: 
upsample them by 2, and then add them together (with appropriate 
compensation for the zero-padding/boxcar window).


Ethan D

On Thu, Nov 8, 2018 at 8:12 AM Ethan Fenn wrote:


I'd really like to understand how combining consecutive DFT's can
work. Let's say our input is x0,x1,...x7 and the DFT we want to
compute is X0,X1,...X7

We start by doing two half-size DFT's:

Y0 = x0 + x1 + x2 + x3
Y1 = x0 - i*x1 - x2 + i*x3
Y2 = x0 - x1 + x2 - x3
Y3 = x0 + i*x1 - x2 - i*x3

Z0 = x4 + x5 + x6 + x7
Z1 = x4 - i*x5 - x6 + i*x7
Z2 = x4 - x5 + x6 - x7
Z3 = x4 + i*x5 - x6 - i*x7

Now I agree because of periodicity we can compute all the
even-numbered bins easily: X0=Y0+Z0, X2=Y1+Z1, and so on.

But I don't see how we can get the odd bins easily from the Y's
and Z's. For instance we should have:

X1 = x0 + (r - r*i)*x1 - i*x2 + (-r - r*i)*x3 - x4 + (-r + r*i)*x5
+ i*x6 + (r + r*i)*x7

where r=sqrt(1/2)

Is it actually possible? It seems like the phase of the
coefficients in the Y's and Z's advance too quickly to be of any use.

-Ethan



On Mon, Nov 5, 2018 at 3:40 PM, Ethan Duni wrote:

You can combine consecutive DFTs. Intuitively, the basis
functions are periodic on the transform length. But it won't
be as efficient as having done the big FFT (as you say, the
decimation in time approach interleaves the inputs, so you
gotta pay the piper to unwind that). Note that this is for
naked transforms of successive blocks of inputs, not a WOLA
filter bank.

There are Dolby codecs that do similar with a suitable flavor
of DCT (type II I think?) - you have your encoder going along
at the usual frame rate, but if it detects a string of
stationary inputs it can fold them together into one big
high-res DCT and code that instead.

On Mon, Nov 5, 2018 at 11:34 AM Ethan Fenn wrote:

I don't think that's correct -- DIF involves first doing a
single stage of butterfly operations over the input, and
then doing two smaller DFTs on that preprocessed data. I
don't think there is any reasonable way to take two
"consecutive" DFTs of the raw input data and combine them
into a longer DFT.

(And I don't know anything about the historical question!)

-Ethan



On Mon, Nov 5, 2018 at 2:18 PM, robert bristow-johnson wrote:

Ethan, that's just the difference between
Decimation-in-Frequency FFT and Decimation-in-Time FFT.

i guess i am not entirely certain of the history,
but i credited both the DIT and DIF FFT to Cooley and
Tukey.  that might be an incorrect historical impression.



 Original Message

Subject: Re: [music-dsp] 2-point DFT Matrix for
subbands Re: FFT for realtime synthesis?
From: "Ethan Fenn" mailto:et...@polyspectral.com>>
Date: Mon, November 5, 2018 10:17 am
To: music-dsp@music.columbia.edu


--

> It's not exactly Cooley-Tukey. In Cooley-Tukey you
take two _interleaved_
> DFT's (that is, the DFT of the even-numbered samples
and the DFT of the
> odd-numbered samples) and combine them into one
longer DFT. But here you're
>