Re: [music-dsp] A theory of optimal splicing of audio in the time domain.

2011-07-15 Thread Wen Xue
I have the following made-up scenarios - 
1) If I twist the 2nd half of some x(t) by 180 degrees then it becomes
orthogonal to the original x(t). How do we cross-fade it with x(t)?
2) If I twist the 1st third of x(t) by 180 degrees and the 3rd third by 90
degrees?
3) If I twist the 2nd and 4th quarters of x(t) by 180 degrees?
In all such cases the correlation is 0. Do we cross-fade them in the same
way?

Xue



-Original Message-
My objective has not been to find a method for automatic splicing, but
to do nice cross-fades at given splice points.

There were multiple objectives:
* Intuitive definition of the cross-fade shape. Mixing ratio as a
function of time is a good definition.
* For stationary signals, there should be no clicks or transients
produced. This is taken care of by the smoothness of the cross-fade
envelopes.
* For stationary signals, the resulting measurable transition from the
volume level of signal 1 to volume level of signal 2 should follow the
chosen cross-fade shape. This can be accomplished knowing the volume
levels of the two signals and the correlation coefficient between the
two signals.
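
A minimal sketch of that last point (my illustration, not Olli's code; it
assumes stationary signals with known RMS levels v1 and v2 and correlation
coefficient r between them, and all names are made up):

    #include <math.h>

    /* Compensated cross-fade gains a (for signal 1) and b (for signal 2) at
       mixing ratio m in [0,1].  The raw gains (1-m) and m are rescaled so the
       measured output level follows the chosen shape from v1 to v2, using
       E[y^2] = (a*v1)^2 + 2*r*(a*v1)*(b*v2) + (b*v2)^2. */
    static void xfade_gains(float m, float v1, float v2, float r,
                            float *a, float *b)
    {
        float ta = (1.0f - m) * v1;              /* raw level of signal 1 */
        float tb = m * v2;                       /* raw level of signal 2 */
        float target = ta + tb;                  /* desired output level  */
        float power  = ta*ta + 2.0f*r*ta*tb + tb*tb;
        float g = (power > 0.0f) ? target / sqrtf(power) : 1.0f;
        *a = (1.0f - m) * g;
        *b = m * g;
    }

With r = 1 (identical signals) the rescaling is a no-op; with r = 0 it turns
a linear fade into a constant-power fade.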

-olli

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precise, Real Time Pitch Shift with Formant Control

2011-08-02 Thread Wen Xue
This might be purely theoretical -
but can you pitch-shift something below 500Hz with <2ms delay at reasonable
precision? There doesn't seem to be time for the pitch to unfold itself yet,
much less to a 0.5-cent precision. Formants likewise: at least a few cycles
are needed to estimate them.

Xue

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Conley, Dylan
Sent: 02 August 2011 13:29
To: A discussion list for music-related DSP
Subject: [music-dsp] Precise, Real Time Pitch Shift with Formant Control

Greetings,

Is anyone aware of an open source pitch-shift algorithm implementation that
is quick (< 2ms), precise (to within 0.5 cents), and leaves the formant
intact? I am working on a Gervill Soft Synthesizer implementation that is
very low latency (ASIO support at ~2ms) and takes VST or SF2 instruments.
Karl Helgason, the genius behind Gervill, implemented enough on the
wavetable side to allow for pitch correction for SF2 instruments but now I
am trying to work with VSTi's which require adjustments to the buffer data. 

I've seen a number of algorithms but no mention of formant control. Of
course, I'm looking for something with a pleasant effect: not too phasy,
unnatural or metallic sounding. I'd assume that if care is taken to account
for the formant, the algorithm will leave the harmonic relationships intact,
but I make a note here just in case.

I would also love to have any references to books, papers or articles that
relate to this.

Any input is really appreciated.

Cheers!
Dylan 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Multichannel Stream Mixer

2011-08-30 Thread Wen Xue
Is FramesRecorder(...) a macro or a function call?
And why multiply by vol = 1.0 when everything is floating-point already?

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 12:13
To: 'A discussion list for music-related DSP'
Subject: [music-dsp] Multichannel Stream Mixer

Hi, I am trying to implement a very basic multi-channel mixer in C++. I
have multiple streams and their associated buffers.
I'm using RtAudio and STK to work with the samples and audio devices.
I gather the buffers in a tick method, and fetch all the samples to produce
the sum:

FramesRecorder is the final output buffer (-1.0f to 1.0f).
Frames[numplayers] are the stream buffers (-1.0f to 1.0f).
int numplayers is the stream index.

The sines are normalized to -1.0 to 1.0, float32.

for (unsigned int i = 0; i < nBufferFrames; i++) {
    float vol = 1.0f;
    FramesRecorder(i,0) = FramesRecorder(i,0) + (Frames[numplayers](i,0) * vol);
    FramesRecorder(i,1) = FramesRecorder(i,1) + (Frames[numplayers](i,1) * vol);
}
// and something like giving the sum of streams:
FramesRecorder[0] /= numplayers;
FramesRecorder[1] /= numplayers;


What would be the best method to mix multi-channel buffers together? I know
it's a newbie question, but I would like to know about your experience with
this. I would like to implement an efficient method as well. Thx!
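
One common sketch (my illustration, assuming interleaved stereo float
buffers in -1..1; all names are made up, not from the post above):
accumulate all streams once, then scale once.

    #include <stddef.h>

    /* Sum numStreams stereo streams into out and scale by 1/numStreams.
       Scaling by the stream count trades loudness for guaranteed headroom;
       a fixed master gain or a limiter are common alternatives. */
    static void mix_streams(const float **streams, size_t numStreams,
                            float *out, size_t numFrames)
    {
        const float scale = (numStreams > 0) ? 1.0f / (float)numStreams : 1.0f;
        for (size_t i = 0; i < 2 * numFrames; i++) {  /* 2 samples per frame */
            float sum = 0.0f;
            for (size_t s = 0; s < numStreams; s++)
                sum += streams[s][i];
            out[i] = sum * scale;
        }
    }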





--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Splitting audio signal into N frequency bands

2011-10-31 Thread Wen Xue
Subtracting the LP part makes sense only if the LP filter is zero-phase. I 
believe the typical way is to directly construct a series of steep band-pass 
filters to cover the whole frequency range. This is very flexible but 
usually means the individual parts do not accurately add up to the original 
signal. On the other hand, if perfect sum is desirable you may wish to take 
a look at mirror filters, such as QMF. These are pairs of LP and HP filters 
designed to guarantee perfect reconstruction.
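
A sketch of the zero-phase version of the subtraction idea (a windowed-sinc
design of my own, for illustration): a linear-phase FIR lowpass of odd
length L is zero-phase up to a constant delay D = (L-1)/2, so its complement
is a delayed unit impulse minus the lowpass taps, and the two bands add back
to the input exactly (delayed by D).

    #include <math.h>

    /* fc is the cutoff as a fraction of the sample rate (0..0.5); L is odd. */
    static void make_complementary_pair(double fc, float *lp, float *hp, int L)
    {
        int D = (L - 1) / 2;
        double sum = 0.0;
        for (int n = 0; n < L; n++) {
            double x = n - D;
            double sinc = (x == 0.0) ? 2.0 * fc
                                     : sin(2.0 * M_PI * fc * x) / (M_PI * x);
            double w = 0.5 - 0.5 * cos(2.0 * M_PI * n / (L - 1));  /* Hann */
            lp[n] = (float)(sinc * w);
            sum += lp[n];
        }
        for (int n = 0; n < L; n++)
            lp[n] /= (float)sum;       /* unity DC gain => full subtraction */
        for (int n = 0; n < L; n++)
            hp[n] = -lp[n];
        hp[D] += 1.0f;                 /* delayed impulse minus the lowpass */
    }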



--
From: "Thilo Köhler" 
Sent: Monday, October 31, 2011 10:47 AM
To: 
Subject: [music-dsp] Splitting audio signal into N frequency bands


Hello all!

I have implemented a multi-band compressor (3 bands).
However, I am not really satisfied with the splitting of the bands;
they have quite a large overlap.

What I do is take the input signal, perform a low-pass filter
(say 250Hz) and use the result for the low band #1.
Then I subtract the LP result from the original input and do
a low-pass again with a higher frequency (say 4000Hz).
The result is my mid band #2, and after subtracting again the remaining
signal is my highest band #3.

I assume this procedure is appropriate; please tell me otherwise.

The question is now the choice of the filter.
I have tried various filters from the music-dsp code archive,
but I still haven't found a satisfying filter.

I need a steep LP filter (12dB/oct or more),
without resonance and with the least ringing possible.
The result subtracted from the input must work as a HP filter.

Are there any concrete suggestions for what such a LP filter should look
like, or is there even a different, better way to split the audio signal
into 3 bands (or N bands)?

I know I can use the FFT, but for speed reasons I want to avoid it.

Regards,

Thilo Koehler



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Splitting audio signal into N frequency bands

2011-11-02 Thread Wen Xue
"Filtering a signal x(t) with a LP filter H(z) then subtract the result from 
x(t) itself" is equivalent to filtering x(t) with a filter 1-H(z), which is 
a HP filter only if H(j2*pi*f) is close to 1 in the pass band (i.e. unit 
gain and zero phase). Otherwise the result after subtraction will still 
contain substantial low-frequency components. If you want to use the 
subtraction method to split your signal then you need to have some idea of 
how much LP leak is going into 1-H(z) so that you know what outcome to 
expect.
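
A quick numeric illustration of the phase point (my sketch, not from the
original mail): even where |H| = 1, a phase lag phi leaves
|1 - H| = |1 - e^(-j*phi)| = 2*sin(phi/2) of the low band in the "highpass"
output.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        for (int deg = 0; deg <= 90; deg += 15) {
            double phi = deg * M_PI / 180.0;
            /* leak magnitude of 1-H at a passband frequency where |H| = 1 */
            printf("phase lag %2d deg -> LP leak %.3f\n",
                   deg, 2.0 * sin(phi / 2.0));
        }
        return 0;
    }

At a 60-degree lag the "removed" low frequencies come back at full level,
just phase-shifted.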


But is there any special reason why you want to do the subtraction? If it's
perfect reconstruction you're after then quadrature mirror filters may serve
all right. They're usually not very steep but are stable and reasonably
well-behaved.




--
From: "ThiloKöhler" 
Sent: Wednesday, November 02, 2011 12:09 PM
To: 
Subject: Re: [music-dsp] Splitting audio signal into N frequency bands


Hello Thomas, Wen!

Thank you for the quick input on this.

1. I found that in the 3-band case, splitting off
the low and high bands from the input and then
generating the mid band by subtracting them
works much better than the "salami" strategy
(chopping off slices with a LP).
Thanks!

2.

Subtracting the LP part makes sense only if the LP filter is zero-phase.

I don't know if my filters are zero phase; I am not deep enough
into the filter math to tell you straight away. It is an IIR taken from
here:
http://www.musicdsp.org/showArchiveComment.php?ArchiveID=259

This one seems to work best for my purposes, but that is just
from subjective listening without any mathematical evidence.

Is this a Butterworth filter like Thomas suggests? (Sorry if the question
sounds like a noob's...) In the comments they call it a biquad; I don't know
if a biquad can be Butterworth or whether these are mutually exclusive.

I have also tried:

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=266
Doesn't work well for low cutoff frequencies, like <150Hz.
I am using single precision.

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=117
Seems to be too flat, not steep enough.

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=237
Seems to be too flat, not steep enough.

I think in the use case of a multi-band compressor, perfect
reconstruction is important. That is why I want to create
the bands by subtracting and not with independent filters.
I assume this is a good strategy, no?

Regards,

Thilo


I believe the typical way is to directly construct a series of steep
band-pass filters to cover the whole frequency range. This is very
flexible but usually means the individual parts do not accurately add up
to the original signal. On the other hand, if perfect sum is desirable
you may wish to take a look at mirror filters, such as QMF. These are
pairs of LP and HP filters designed to guarantee perfect reconstruction.





--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] google's non-sine

2012-02-22 Thread Wen Xue
Ay, but you always anti-alias your doodle before sampling. That'll sort it 
out fine.

w.x.

-Original Message- 
From: douglas repetto

Sent: Wednesday, February 22, 2012 10:25 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] google's non-sine


I was making a bit of a joke -- no time domain signal can have two
different values at the same point in time. So since the Google doodle
isn't a proper time domain signal, there's no "correct" way to
synthesize it...

douglas

On 2/22/12 5:06 PM, Adam Puckett wrote:

Why not use something like an "inverse plotting" program (that would
stream the samples from the actual Doodle?).

On 2/22/12, douglas repetto  wrote:


That's close, Phil. But to really get it you need to find a way to
output two sample values in the same sample period -- they've got the
ellipses joined at the zero crossings!

On 2/22/12 2:25 PM, Phil Burk wrote:

I couldn't help myself. The Google waveform appears to be made of random
elliptical segments. Here is a JSyn Applet that plays the "wave doodle":



Phil Burk



--
... http://artbots.org
.douglas.irving http://dorkbot.org
.. http://music.columbia.edu/cmc/music-dsp
...repetto. http://music.columbia.edu/organism
... http://music.columbia.edu/~douglas






--
... http://artbots.org
.douglas.irving http://dorkbot.org
.. http://music.columbia.edu/cmc/music-dsp
...repetto. http://music.columbia.edu/organism
... http://music.columbia.edu/~douglas



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] google's non-sine

2012-02-23 Thread Wen Xue
The magic of anti-aliasing is that it is an integral - you can integrate
over dx along any route f(x,y)=0 as long as it converges, and there will
be an equivalent single-valued route g(x)-y=0 that gives the same
integration result.


In the case of a circle, dx will be negative for half the journey, so the
equivalent route is the difference of the two branches, which makes a
semicircle stretched to twice the height.


If your doodle comes in the shape of a "Z" then the convolution will
come out as if it were a "\" (sum of the top and bottom branches minus the
middle one).


Of course you may need to rewrite the code to do such a convolution, but it
follows the definition perfectly.


w.x.


on 23/02/2012 03:19, douglas repetto wrote:

To continue this very important and not at all didactic discussion:

I see some of the sections as semi-circles on either side of the middle
line. So there's no actual circle data on the dividing line, but rather
there's a point from the top circle and a point from the bottom circle
on either side of the line, and those points are on the same line on the
y axis. I don't think that even w.x.'s anti-aliasing trick will take
care of that...

This reminds me of various waveform drawing gizmos I've seen over the
years -- it's always a bit disconcerting to realize that moving to a new
x,y location has to erase whatever value was previously at that point.
You're not allowed to draw a circle! So it's kinda like drawing, but
drawing with no history or state. Maybe the Google designer was making
some sort of signal processing pun...


douglas


On 2/22/12 7:08 PM, Phil Burk wrote:


On 2/22/12 2:25 PM, douglas repetto wrote:

I was making a bit of a joke -- no time domain signal can have two
different values at the same point in time. So since the Google doodle
isn't a proper time domain signal, there's no "correct" way to
synthesize it...

Haha. It sure looks impossible from the doodle.

But the ellipses only have infinite slope at the zero crossing. So all I
have to do is output a single 0.0. If I am on either side of the zero
crossing then the slope is non-zero and it acts like a proper function.

Whether the Google doodle is actually ellipses is open to interpretation.

Phil



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] WOLA and the phase vocoder

2012-04-20 Thread Wen Xue

Pre- and post-windows do not have to be identical.

Post-windowing is more about eliminating discontinuities at the ends of
a frame. It has nothing to do with the DFT, so one doesn't care about the
spectral qualities.


I use Hamming for pre-windowing and {Hann divided by Hamming} for
post-windowing. That guarantees good analysis and smooth OLA, as well as
a constant envelope if the window-size/hop-size ratio is a multiple of 2.
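
A sketch of that window pair (my illustration; "periodic" definitions with
denominator M, so the product of the two windows is a Hann, which
overlap-adds to a constant for hop sizes M/2, M/4, ...):

    #include <math.h>

    static void make_wola_pair(float *pre, float *post, int M)
    {
        for (int n = 0; n < M; n++) {
            double hamming = 0.54 - 0.46 * cos(2.0 * M_PI * n / M);
            double hann    = 0.5  - 0.5  * cos(2.0 * M_PI * n / M);
            pre[n]  = (float)hamming;
            post[n] = (float)(hann / hamming);  /* Hamming never hits zero */
        }
    }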


Xue

On 20/04/2012 14:23, Domagoj Šarić wrote:

As any tutorial/paper/book will teach you, one should apply the window
of choice both before an FFT and after an IFFT (in order to
smooth/taper the signal, which might have gone wild due to frequency
domain modifications). The problem is of course that this amounts to
applying a squared window instead of the original window (which might
no longer satisfy the COLA requirement). The standard answer/solution
presented here http://www.dsprelated.com/dspbooks/sasp/Choice_WOLA_Window.html
is to first take the square root of the window. This (taking the
square root) however "deforms" the original window, which no longer has
its original spectral "qualities", and thus, for example, causes
significantly worse phase vocoder performance...
Is there a smarter solution? :)


--
"What Huxley teaches is that in the age of advanced technology, spiritual
devastation is more likely to come from an enemy with a smiling face than
from one whose countenance exudes suspicion and hate."
Neil Postman


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] WOLA and the phase vocoder

2012-04-20 Thread Wen Xue
The sine window is exactly the square root of the Hann. (I've seen the Hann
window called the square-sine window in some textbooks.)
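
The identity, written out (a sketch): with
hann[n] = 0.5 - 0.5*cos(2*pi*(n+0.5)/M) = sin(pi*(n+0.5)/M)^2,
the sine window sin(pi*(n+0.5)/M) is its exact square root.

    #include <assert.h>
    #include <math.h>

    /* sanity check (illustrative): sine window == sqrt(Hann) at n + 0.5 */
    static void check_sine_is_sqrt_hann(int M)
    {
        for (int n = 0; n < M; n++) {
            double t    = (n + 0.5) * M_PI / M;
            double sine = sin(t);
            double hann = 0.5 - 0.5 * cos(2.0 * t);
            assert(fabs(sine * sine - hann) < 1e-12);
        }
    }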


M or M-1 determines the actual window size (the time taken by the window
function to go from 0 to 0, not to be confused with the DFT size). If you
prefer to use the same window and DFT sizes, then use M. If you use M-1 then
don't forget the window function is 0 outside the bounds (therefore either
window[0] or window[M-1] must be 0, or both).


When a perfectionist plans to do 50% or 75% overlap-add, he may want the
exact half or quarter of this size (M or M-1) to be an integer, because
that's the position to place the next frame. He probably can't hear any
difference between 50% overlap and 49.95%, though.


Xue

-Original Message- 
From: Dave Hoskins

Sent: Friday, April 20, 2012 9:37 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] WOLA and the phase vocoder


Have you tried a sine window for both analysis and synthesis yet?
It's used in MPEG according to here:

https://ccrma.stanford.edu/~jos/sasp/MLT_Sine_Window.html


Although I think it's supposed to be 'M-1' instead of M, I'm not sure -
anyone?
Some C... ;)

for(int n = 0; n < M; n++ )
{
window[n] = sinf(((float)n+0.5f) * PI / (float)(M-1));
}

Dave.




On 20/04/2012 14:23, Domagoj Šarić wrote:

As any tutorial/paper/book will teach you, one should apply the window
of choice both before an FFT and after an IFFT (in order to
smooth/taper the signal, which might have gone wild due to frequency
domain modifications). The problem is of course that this amounts to
applying a squared window instead of the original window (which might
no longer satisfy the COLA requirement). The standard answer/solution
presented here
http://www.dsprelated.com/dspbooks/sasp/Choice_WOLA_Window.html
is to first take the square root of the window. This (taking the
square root) however "deforms" the original window, which no longer has
its original spectral "qualities", and thus, for example, causes
significantly worse phase vocoder performance...
Is there a smarter solution? :)


--
"What Huxley teaches is that in the age of advanced technology, spiritual
devastation is more likely to come from an enemy with a smiling face than
from one whose countenance exudes suspicion and hate."
Neil Postman






--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Window presum synthesis

2012-04-23 Thread Wen Xue
Time-aliasing is just another formulation of delay-and-add. If you look at
the definition of the convolution y=x*h in terms of y(t)=... then it's
clearly time-aliasing. In your example it's a convolution with a pulse train
whose period is K=N/4, provided the same treatment is applied to all frames
(even if not, you may still view it as a frame-variant convolution). I
haven't read the article or book so can't guess the details, but generally
speaking "intentional" time-aliasing isn't something awkward if applied
properly.

Xue

-Original Message- 
From: s...@sfxmachine.com

Sent: Saturday, April 21, 2012 12:30 AM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Window presum synthesis

Alessandro Saccoia  wrote:

http://web.archive.org/web/20060513150136/http://archive.chipcenter.com/dsp/DSP000315F1.html
The images haven't been archived, but you could still find it a useful
reference.


This link includes the images:
http://web.archive.org/web/20010210052902/http://www.chipcenter.com/dsp/DSP000315F1.html

This method is also discussed in Crochiere & Rabiner's Multirate Digital
Signal Processing book, but it didn't make sense to me there either - I'm
assuming this is my problem, not theirs. Apparently this method windows
the input with a window of size N = 4K, then intentionally time-aliases
the signal by stacking and adding it in blocks of K samples, then takes
the FFT of the time-aliased sequence. On the synthesis side, it takes the
inverse FFT, periodically extends the result, applies a synthesis window
and overlap adds. The periodic extension is the transpose of the windowing
and aliasing in the analysis process, which fixes everything somehow...?

I'm afraid to try this, because it doesn't make any damn sense, and if it
works it might make my brain explode. Supposedly, if the lengths of the
analysis and synthesis windows are <= the size of the transform, this
simplifies to the basic Crochiere, Griffin/Lim WOLA method we all know and
love.

I'm curious what they're on about with this, but not quite curious enough
to try it, since it can't possibly work unless it does. Maybe it yields
perfect reconstruction so long as you don't listen to the output. Anyway,
I'm hoping someone will tell me it sounds great and makes everything all
better in the time, frequency, and efficiency domains.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Window presum synthesis

2012-04-23 Thread Wen Xue
Oh, that's because the FT of a REAL signal is conjugate-symmetric. Both time
and frequency aliases are circular, without change of order. When the
symmetric part wraps around it creates the "inverted" illusion. If the
frequency-domain form is real (e.g. with cosine transforms) an inverted time
alias can be observed - a similar illusion.


-Original Message- 
From: Domagoj Šarić

Sent: Monday, April 23, 2012 11:40 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Window presum synthesis

On 23 April 2012 10:52, Wen Xue  wrote:

Time-aliasing is just another formulation of delay-and-add. If you look at the
definition of the convolution y=x*h in terms of y(t)=... then it's clearly
time-aliasing. In your example it's a convolution with a pulse train whose
period is K=N/4, provided the same treatment is applied to all frames (even
if not, you may still view it as a frame-variant convolution). I haven't
read the article or book so can't guess the details, but generally speaking
"intentional" time-aliasing isn't something awkward if applied properly.


Well, the additional "problem" I have with the method is that although
it is sometimes called time domain aliasing it does not actually
"look" like aliasing because the added frame is not mirrored/reversed
before being added. IOW sample n + 1 (the first sample of the second
frame) is not added to sample n - 1 (the sample preceding the last
sample of the first frame) but to sample 1 (the first sample of the
first frame)...


--
"What Huxley teaches is that in the age of advanced technology, spiritual
devastation is more likely to come from an enemy with a smiling face than
from one whose countenance exudes suspicion and hate."
Neil Postman


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Window presum synthesis

2012-04-24 Thread Wen Xue
Yes, it's very true that you can't recover a and b from a+b alone - half
the information is missing.


However, if we remember that the frames overlap, things are a little
different. Say you also know b+c; then to guess a, b and c from a+b and b+c,
only 1/3 of the information is missing. If you know all the way through to
y+z, then only 1/26 is missing, and from that you can make a pretty good
guess at a--z. Perfect recovery can be achieved with a minor trick, e.g.
fade in your sequence so that a=0.
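
A sketch of that recovery, with each frame reduced to a single number for
illustration (in practice f[k] and s[k] are blocks and the subtraction is
element-wise; note an error in any s[k] propagates to all later frames):

    /* s[0..n-1] holds the pairwise sums a+b, b+c, ...; f[0] is known (= 0
       after the fade-in trick), and the rest follow by subtraction. */
    static void recover_frames(const float *s, int n, float *f)
    {
        for (int k = 1; k <= n; k++)
            f[k] = s[k - 1] - f[k - 1];   /* (f[k-1]+f[k]) - f[k-1] */
    }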


Whether or not the application tries to recover a--z is another matter. Even
if it doesn't, a--z plus a reverb probably sounds fairly close to a--z. But
if it wants to, it can.


X.

-Original Message- 
From: Charles Henry

Sent: Tuesday, April 24, 2012 6:19 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Window presum synthesis

On Mon, Apr 23, 2012 at 2:57 AM, Domagoj Šarić  wrote:

On 20 April 2012 17:15, Charles Henry  wrote:

Don't let it bother you too much.  I can tell by looking at it--This
is a stupid algorithm.


I sort of regret those words--it just seems so basic in terms of math
that I don't see much about it that's remarkable.


It does seem strange and counterintuitive at first glance but it's
hard to just simply dismiss it thus once you've seen it examined in
several respectable books (this http://hdl.lib.byu.edu/1877/etd157 is
also an often referenced ~300 page paper dedicated solely to the
subject in question) and especially once you've _heard_ it (the free
Richard Dobson's open-sourced pvoc effects).


Okay--just don't call it "more precise" when I'm listening or you'll
get my opinion :)  PVOC is probably a good application for these
averaged FFT's.  The reconstructed signal only needs some resemblance
to the original signal.


This doesn't give you greater precision in the frequency domain, it
just makes the results more localized to the center of the interval in
the time domain.  It smooths out the response a bit, but this is
really a *loss* of precision.


Exactly, and this is precisely what the technique tries to accomplish:
to still give you greater subband rejection but without the (or with
reduced) frequency detection precision...


- fold them to N time domain samples (i.e. simply add the first and
second half of the input data)


When you do this, you can no longer reconstruct the original spectrum.
 You don't know which interval the values come from.  This is sort of
like averaging the spectrums of adjacent N-point FFTs.


Obviously (i.e. by listening to Dobson's results) you somehow can :)


When I say reconstruction--I mean exact reconstruction mathematically.
I don't see it here.


- take an N point FFT
- do some processing with the "more precise spectrum"


An equivalent algorithm: apply the windowing on 2N, FFT, then throw
away each odd-numbered sample of the result.  (I'll leave it to you to
see why this is true--use the un-normalized DFT definition).


I know, that's exactly how Frederic Harris explains the idea in
chapter 8 (Time Domain Signal Processing with the DFT) in the
"Handbook of Digital Signal Processing - Engineering Applications
[Elliot, 1987]"...



Then, ask the authors why they think it's valuable to throw away half
of their samples and make it so you can't reconstruct the original
signal :)


Because "half their samples" are not valuable, they are (made)
redundant by the windowing procedure (i.e. the wide main lobe of the
window used covers several bins which thus carry duplicated
information). IOW this procedure tries to do something very similar as
zero padding the FFT but without the use of larger FFTs...


Zero-padding preserves the number of dimensions. It's a function from
R^m to R^n where m < n.
- take an N point IFFT
- and now what? :) we've got N time domain samples that correspond to
the folded input samples...I can't imagine it would sound good if this
is simply window-overlap-added and sent to output as is...


I can't imagine why either.


And yet, and yet... :)


It's just whether the technique is appropriate for the task at hand.
I was also reading your other thread--but I didn't understand these
threads were related.

Chuck


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Window presum synthesis

2012-05-16 Thread Wen Xue
The a-z analogy is just my invention. Any linear algebra book will explain 
more than that and the closest example algorithm is the pseudo inverse.


As for the phase vocoder, there is probably just no need to clear the presum 
up. The  pv is a time-variant reverb; the presum is a time-invariant. Now 
that you have to bear with one reverb already, the other becomes easier to 
swallow. Even beneficial sometimes, because if the pv reverb is large enough 
to be an echo (double transients that is), more reverbs can smear it up.


w.x.

-Original Message- 
From: Domagoj Saric

Sent: Monday, May 14, 2012 10:36 AM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Window presum synthesis

Hi, everyone, apologies for the delay...was on a short vacation ;)


On 25 April 2012 02:22, Wen Xue  wrote:

Yes, it's very true that you can't recover a and b from a+b alone - half
the information is missing.

However, if we remember that the frames overlap, things are a little
different. Say you also know b+c; then to guess a, b and c from a+b and b+c,
only 1/3 of the information is missing. If you know all the way through to
y+z, then only 1/26 is missing, and from that you can make a pretty good
guess at a--z. Perfect recovery can be achieved with a minor trick, e.g.
fade in your sequence so that a=0.

Whether or not the application tries to recover a--z is another matter. Even
if it doesn't, a--z plus a reverb probably sounds fairly close to a--z. But
if it wants to, it can.


Thanks, clever insight :)
I haven't fully wrapped my head around the procedure in the OLA case [i.e.
how to do it in real time when you have a stream of partially overlapping
frames (a, b, c, ..., z) w/o having to remember a large amount/all of the
signal's history]; I can only guess that the sinc pass tries to
do/approximate just this...
In addition, even though I can see how this procedure/idea could work with
an unmodified signal, I'm not so sure how it would work in the general/phase
vocoder case:
 - if a' marks a modified frame a (i.e. frame a that went through a PV and
was for example pitch shifted)
 - if we 'turn on' "window presum" (with the window twice the size of the
DFT), then we no longer pass single frames through the DFT and PV but rather
time-aliased frames: IOW we have a+b on input and (a+b)' on output
 - now it is no longer clear (to me) how we could get the stream a', b',
c'... from the (a+b)', (b+c)', (c+d)'... stream (especially if we consider
that the PV modification can change between each frame, e.g. the pitch
shift amount)...


Do you perhaps know a paper/book/... that explains/explores this a--z idea
in more depth (with some nice pseudo code of course ;D)?



Domagoj Saric
Software Architect
www.LittleEndian.com


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] DC blocking (again :)

2012-07-31 Thread Wen Xue
A 5ms moving average doesn't sound very right, for it cuts off anything
below 200Hz, no matter how much one upsamples it. However, it is probably
"just fine" to subtract a DC measured 500ms ago from the current waveform,
because the DC shouldn't change much in that time - or it can't be DC. This
subtraction you can write as an anti-DC filter, which is FIR, with a big
delay on the unwanted part but no delay on the wanted part. There might be
some DC left - but probably significantly suppressed - depending on how
DC-ish it is.
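
A sketch of that subtraction (parameters illustrative: W is the averaging
length, D the lag, e.g. ~500ms worth of samples; a real implementation would
keep a running sum instead of re-summing every call):

    /* y[n] = x[n] - mean of x over [n-D-W+1 .. n-D]; needs n >= D + W - 1. */
    static float dc_block_step(const float *x, long n, long D, long W)
    {
        double avg = 0.0;
        for (long k = 0; k < W; k++)
            avg += x[n - D - k];
        return x[n] - (float)(avg / W);
    }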


Xue

-Original Message- 
From: Domagoj Saric

Sent: Tuesday, July 31, 2012 9:45 AM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] DC blocking (again :)

On 30.7.2012. 20:51, robert bristow-johnson wrote:
i didn't have anything to do with the subtract-the-moving-average DC block 
filter.


I apologize...at least I attributed too much rather than too little ;)



if you can put up with delay (which is what you must for a causal and
linear-phase filter), you can subtract the moving average from the sample
in the middle of the buffer, not the most current sample.


Well, for a live meter delay is obviously very undesirable. Taking into
account the "laziness" of human senses, I guess up to ~5ms might be
tolerable; that's about 220 samples @ 44.1kHz. If the DC filter is placed
after the upsampler (as they seem to imply in the standard) and we upsample
by a factor of 8, that becomes ~1760 samples... would that be enough for
real-world DC offset tracking?


But, more importantly, this might not be needed at all because, as I pointed
out in my first mail, "they" (the ITU-R and EBU standard "developers")
obviously think/imply that using a plain IIR DC blocking filter is "just
fine" (and one would certainly expect the standard not to err in such
fundamentals, especially considering the number of people that worked on
it). The question, again, is how (can it be "just fine")? Unless the answer
is in the JOS link (the near zero-phase)..?


--
Domagoj Saric
Software Architect
www.LittleEndian.com


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] DC blocking (again :)

2012-08-01 Thread Wen Xue
Ah, sorry, I missed that. Yes, pretty much the same. And this has to be
minimum-phase although there's no attempt to make it so. And no, this is not
linear phase, BUT if the moving-average filter has low sidelobes then it is
ALMOST linear phase in the audible range, because there's hardly any
subtraction there. This applies to FIR as well as IIR filters, as long as
the DC blocking is constructed by subtraction. If the moving average is FIR
then the whole DC blocker is FIR; if that is IIR then so is this. Regarding
linear-phaseness their outcomes are the same in the audio range, as long as
the moving-average output is kept below the audible range.


Xue

-Original Message- 
From: Domagoj Saric

Sent: Wednesday, August 01, 2012 9:39 AM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] DC blocking (again :)

On 31.7.2012. 12:54, Wen Xue wrote:

A 5ms moving average doesn't sound very right, for it cuts off anything
below 200Hz, no matter how much one upsamples it. However, it is probably
"just fine" to subtract a DC measured 500ms ago from the current waveform,
because the DC shouldn't change much in that time - or it can't be DC. This
subtraction you can write as an anti-DC filter, which is FIR, with a big
delay on the unwanted part but no delay on the wanted part. There might be
some DC left - but probably significantly suppressed - depending on how
DC-ish it is.


Is this the same as/similar to what I mentioned in the first post:
<<
Then there is the modification
(http://www.dsprelated.com/showmessage/80739/2.php, Andor's post) to
subtract the moving average from the _current_ sample (instead of the one
corresponding to the middle of the moving average filter), but this
supposedly makes the filter minimum-phase instead of linear-phase, so it is
still a no-go (at least AFAICT from my limited knowledge).
>>

?


--
Domagoj Saric
Software Architect
www.LittleEndian.com


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Software to make a binaural simulation of the movement of a sound source

2012-10-10 Thread Wen Xue
I believe OpenAL does this. Maybe DirectSound as well. The angular precision 
is the tricky point - one can't achieve much without knowing a lot about the 
hardware.


-Original Message- 
From: joaoandrefe...@sapo.pt

Sent: Wednesday, October 10, 2012 6:08 PM
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Software to make a binaural simulation of the movement 
of a sound source


Hello all,

I don't know if this is the ideal place to do it, but I've been
advised to ask for what I'm searching for on this mailing list. I need to
know if someone knows of (or knows somebody who can point me to) a
piece of software for making a binaural simulation of the movement of a
sound source - with precision on the order of a few degrees - or,
alternatively, which allows the simulation of a virtual space in which
the location of sound sources can be chosen. Does anybody have a hint
on this?

If someone can help, I'll deeply appreciate it.

Thanks in advance,
João Fernandes


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] stuck with filter design

2012-11-19 Thread Wen Xue
As far as I can remember, with sampled signals we always try to forget the
values *between* samples, for they are always uniquely determined by the
values *at* the samples; so if we get these right, those must be right as
well.


The problem with mimicking an analogue filter digitally is that the analogue
one is never band-limited below the Nyquist frequency, so it doesn't have a
digital equivalent at all. Some textbooks include impulse-invariance filter
design as an alternative to the bilinear transform, which would have given
the perfect digital equivalent had the analogue prototype been band-limited.
But with rational transfer functions this never happens.
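
For reference, a sketch of impulse invariance on the simplest prototype (my
illustration, not from the mail): for H(s) = 1/(s+a), h(t) = e^(-a*t);
sampling hd[n] = T*h(n*T) gives Hd(z) = T / (1 - e^(-a*T) * z^-1).

    #include <math.h>

    /* One-pole impulse-invariant design: y[n] = b0*x[n] + p*y[n-1]. */
    static void impulse_invariance_onepole(double a, double T,
                                           double *b0, double *p)
    {
        *b0 = T;            /* feed-forward gain        */
        *p  = exp(-a * T);  /* feedback (pole) position */
    }

Because h(t) is not band-limited, Hd is the sum of aliased copies of the
analogue response, never exactly equal to it - which is the point above.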


Xue

-Original Message- 
From: Theo Verelst

Sent: Monday, November 19, 2012 9:43 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] stuck with filter design

Remember the main rules:

Sampled signals can be powerfully processed and are nicely fixed (no
"analog" noise: the bits and words specify exact signals), but sampling
theory must be understood to respect some main limitations: the signals of
course must have no frequency components higher than half the sampling
frequency, and to get back the "original" signal (as it was before the
analog signal was sampled into a digital set of samples), the reconstruction
filter is quite complicated when high accuracy is needed.

So no matter which digital filter equivalent is used to mimic analog
filters, getting proper signal values *between* samples requires quite a
complicated computation (adding a lot of "sinc" functions). All filters
designed without taking this into consideration (and some do) are going to
sound similar in the sense that they use the fixed delay between subsequent
samples to form delay elements, and that is audible (and measurable in the
properly formed self-correlation signal of the output).

So no matter whether you use the equivalency of digital filters with analog
filter networks (which in the linear case is a well-defined part of "Network
Theory"), and the mathematical design tools for electronic amplifiers (the
Bode diagrams and suchlike integral function theory), the digital filters
are going to have properties not easily put completely in line with their
analog "equivalent".


Moreover, it is hard *at any rate* to make some sort of perfect filter, be
it analog, digital, with Fourier transforms, etc.; only the complicated
(well known, continuous, so NOT the FFT) Fourier theory can predict the
workings of analog and well-made digital filters, and say theoretically (and
100%) accurately how certain filters will behave with given signals; and of
course there have been, since the beginnings of radio, all kinds of books on
how to design certain branches of the filter design tree.

So, are there perfect orthogonal filters, for instance? Yes, but
unfortunately most of them are very complicated to get theoretically all
correct (like in theoretical physics), and all of the digital filters are
highly causal in the sense of costing time to compute and to reconstruct the
correct (emphasis on correct) analog signal. In theory most perfect filters
take close to forever to compute, so when engineering great filters the
theoretical limitations are quickly in sight, and even a lot of hard work
isn't going to create a communications receiver or a great audio
synthesizer, ever. Even though of course those jobs *can* be done!


T.Verelst


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Ghost tone

2012-12-06 Thread Wen Xue
I played it out of my laptop speaker and picked it up with my laptop mic.
Surprisingly (or maybe not, for some) the second half comes back some 5
times stronger in partial amplitudes than the first half. I have not
observed any additional harmonics. I assume that shows speaker distortion is
not an issue here.


I don't hear them as pitched sounds. The first sounds like a pulse train and
the second like a noise repeating itself. Though humans are said to hear
down to 20hz, I seem to remember that pitch perception doesn't go that deep.


For human ears the usual auditory model uses autocorrelation. With that
you'll have a 30hz entry with or without a 30hz tone.
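
A sketch of that check (illustrative): a 30hz periodicity shows up as an
autocorrelation peak at lag = samplerate/30 whether or not any energy sits
at 30hz itself, since autocorrelation depends only on the power spectrum.

    static double autocorr(const float *x, long len, long lag)
    {
        double acc = 0.0;
        for (long n = 0; n + lag < len; n++)
            acc += (double)x[n] * x[n + lag];
        return acc;
    }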


Xue

-Original Message- 
From: Didier Dambrin

Sent: Thursday, December 06, 2012 10:27 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

Mmmh, can you explain why it's there (where it's from, I mean; this would
mean that out of the same harmonics, just with a different phase
relationship, very low tones could be produced?), & how to see it?

I got another reply suggesting it's due to the ear's compressor, & that
seems more believable; it also explains why it doesn't happen with the
other, more continuous version. The gap between peaks would suggest a tone
around 30hz, which could really be it. It would also imply that the ear's
compression has a very short attack/release time, for the "compression
envelope" to pulsate fast enough to be in the audible range.




-Message d'origine- 
From: Thomas Young

Sent: Thursday, December 06, 2012 11:20 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

1) The low frequencies are audible
2) It's not speaker distortion, the low frequencies are present in the
signal

I think the spectrum of the first signal can be a bit misleading; if you are
a bit more selective about where you take the spectrum (i.e. between the
asymptotic sections), the low-frequency contribution is easier to see.

The unpleasant "pressure" effect is exactly that, sound pressure waves. The
strength will be dependent on the acoustics of your environment, it will be
particularly objectionable if your ears happen to be somewhere where a lot
of the wavefronts collide. The proximity of headphones to your ears is no
doubt exacerbating the effect, especially in the very low frequencies which
would otherwise bounce all over the place and diffuse.

Mics generally won't pick up very low frequencies - or more accurately their
sensitivity to lower frequencies is very low.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 06 December 2012 05:50
To: A discussion list for music-related DSP
Subject: [music-dsp] Ghost tone

Hi,

Here's something to listen to:
http://flstudio.image-line.com/help/publicfiles_gol/GhostTone.wav


It's divided into 2 parts, the same bunch of sine harmonics in the upper
range; the only difference is the phase alignment. (Both will appear similar
through a spectrogram.)

Disregarding the difference in sound in the upper range: 1. can anyone
confirm the very low tone is very audible in the first half?
2. (can anyone confirm it's not speaker distortion?) 3. does anyone know of
literature about the phenomenon?

While I can understand where the "ghost tone" is from, I don't understand
why it's audible. I happen to have hyperacusis & can't stand the low traffic
rumbling around here, and I was wondering why mics weren't picking it up, as
I perceive it very loud. I hadn't been able to resynthesize a tone as nasty
until now, mainly because I was trying low tones alone, and I can't hear
simple sines under 20Hz.
The question is why do we(?) hear it, why is so much "pressure" noticeable
(can anyone stand it through headphones? I find the pressure effect very
disturbing).
Strangely enough, I find the tone a lot more audible when (through
headphones) it goes to both ears, not if it's only left or right.




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 



Re: [music-dsp] Ghost tone

2012-12-06 Thread Wen Xue
It's easy to imagine a 1000hz and a 1200hz generating a 200hz, or a 1200hz
and a 1400hz generating another 200hz, for there is the common divisor. But,
say, will a 500hz and a 900hz generate a 400hz?


xue

-Original Message- 
From: Richard Dobson

Sent: Thursday, December 06, 2012 1:27 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

A "beat" is a small difference frequency, e.g. 1Hz. It will appear on
the scope as a slowly amplutide modutaed signal. High C and E are
harmonics of of middle C (specifically,  4th and 5th), the difference
between those frequencies give the frequency of the "difference tone".
"beat" and "difference" are one and the same, but in the vernacular we
only apply the word "beat" to very small differences which become heard
as tremolo. Literally any two tones presented together will generate a
difference tone. I can't say whether that arises specifically from
non-linearities in the ear, but most certainly it does rise in the ear;
and of course is intensity-dependent.

Both C-E and E-G will generate the same difference tone (middle C) as
the differences between the two frequencies (in just intonation!) the
same.  Given 500 partials 30hz apart, they will all conspire together to
generate the mother of all difference tones, reinforcing each other.

In fact, it often happens thet we hear multiple difference tones; it is
almost a recursive process. So given a large number of partials, the
"result" may well be pretty complex, as each partial is different from
each other by some amount or other.

In the case of beats, as those closely separated tones diverge, the beat
frequency rises, until that point (whose proper name I forget offhand)
where the effect degrades to an uncomfortable degree, until eventually
we are aware of two distinct pitches. That trasnisiotn point corresponds
to the traditional lower limit of human pitch perception, e.g. around 20Hz.

If a sound is very loud, it drives the ear into even more non-linear
regions, and if sustained may cause damage. There is a whole range of
frequencies which are peceived as  lower or higher according to how loud
they are (you can test this with any sinusoid and a level control), so i
am happy to presume that one way or another the ear is non-linear rather
a lot of the time!

Richard Dobson



On 06/12/2012 13:09, Didier Dambrin wrote:

But in your example, the beating (which I still don't consider a tone,
but that doesn't matter) comes from the tuning of C & E, while in the
audio file all of the partials are harmonics; there are no 2 periods
beating against each other.
Moreover, they are the same in both halves; the only difference is in the
initial phases, and that's enough to make the ghost tone completely go -
you would never make the beating you describe disappear by toying with
initial phases, as the periods (whether it's 1 big pulse or a random
cycle) are exactly the same.
Really, the phenomenon here is not beating IMHO.


-Message d'origine- From: Richard Dobson
Sent: Thursday, December 06, 2012 1:44 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

There is no reason a resultant tone (or difference tone) has to be low.
I demonstrate this with my flute students all the time in the context of
tuning. Play high C and E on two flutes, say, and the resultant tone
will be clearly heard, hovering around middle C depending on how the
interval is tuned (I lip-bend the E to show the effect especially
vividly). Needless to say, in strict equal temperament, that difference
tone is horribly sharp. A difference tone of this kind is in effect a
"beat frequency" fast enough to be heard as a pitch (a form of amplitude
modulation). An orchestra generates difference tones all the time, most
obviously from the wind and brass sections, and a standard part of the
technique of classical orchestration is to avoid loud small intervals in
the low register, as that will produce the archetypal "muddy" sound.

Setting aside any individual or non-normative differences in auditory
perception, such difference frequencies will always be present (but
might conceivably be masked by other sounds, and of course the
generating notes have to be loud to get the effect clearly).

Record producers even depend in it; music played over very small
speakers outputs little or no true low frequency content; but the ear is
remarkably good at reconstructing virtual bass from the higher partials
that do get reproduced.

On the other hand, the ear has a remarkable ability to block out static
sounds, especially where it is not expecting to experience any, such a
as VLF; the sense of the resultant tone might be easier to pick up if
the pulse wave was generated with vibrato (VLF pitch modulation).

One aspect of psycho-acoustic masking that applies here is that loud
high frequencies cannot mask low frequencies, but a loud low frequency
can easily mask higher frequencies.

The phase-shifted version of the pulse wave c

Re: [music-dsp] Ghost tone

2012-12-06 Thread Wen Xue
My hypothesis is that phase is not a perceptual dimension in itself. We only
hear phase indirectly through other directly perceivable attributes, such as
frequency, amplitude and placement. The two sounds here sound different
primarily because they have distinct amplitude envelopes. We hear
"phasiness" in the phase vocoder because the handling of phase shifts the
underlying frequency.


In fact 30ms is a typical time scale on which we measure and manipulate
phase. If we reverse every 30ms frame of a speech recording we hear much the
same speech, if artificial-sounding. If I stick to my hypothesis then it's
not the time scale that decides whether or not we hear the phase either;
it's what the phase does to what we can hear that matters.


There is even an evolutionary explanation: to survive generations one needs
to pick up those sound attributes that are most informative and least
corruptible by natural filters (floor reflection, etc.). These are loudness,
frequencies, formants, impacts, source direction, and the like, which tell
him there is a large boar behind the tree. Phase he ignores.


xue

-Original Message- 
From: Russell McClellan

Sent: Thursday, December 06, 2012 3:41 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

I'm no psychoacoustician, but I do know that the ear doesn't
completely disregard phase on the scale of a few milliseconds (in this
case your signal has a period of around 30 milliseconds).  You can't
make the argument that they should sound the same because there's only
a phase difference when you're talking about that sort of time scale.
There's "only a phase difference" between any vocal sample and the
same signal reversed in time.  Certainly most of the time we can tell
the difference.

Remember that the most temporally accurate drummers are more accurate
than 30 milliseconds.

So, I don't buy your dismissal of the missing fundamental effect on
the ground that the two signals sound subjectively different.  By all
rights, phase differences should matter at that time scale!

The missing fundamental effect is incredibly well known and well
documented, and it perfectly explains what is going on.  I don't think
you're being fair to the idea.

BTW, subjectively I hear tons of low frequency content in the second
signal as well.  It sounds very strange, kind of like white noise
through a sample and hold triggered at a frequency of around 30hz.

Thanks,
-Russell
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-10 Thread Wen Xue
The only way to guarantee precision is to use enough bits for intermediate 
results. Given your running sum formulation, the worst-case quantization 
error for any N is


0.5*Pi + 0.25*Pm*(N+2)(N-1)/N

where Pi is the precision of inputs (the summed signals) and Pm is that of 
the partial sum (notice it's the precision, not the error). In theory you 
want to keep this below 1/2 of the desired output precision.


How you do it is highly platform-dependent so there's no universal solution. 
Usually we do not like a lot of multiplications and divisions, for speed 
rather than precision reasons (again, that is platform-dependent too). 
Personally I prefer storing the sum and count separately, and doing the 
division when the result is read. But this is not going to help with the 
precision if the intermediate result (sum in this case) is given the same 
number of bits.
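
(A Python/NumPy sketch of that arrangement, with the one change that does 
help: the intermediate sum gets more bits than the inputs. The class and its 
names are hypothetical:)

import numpy as np

class RunningMix:
    # sum-and-count accumulator; assumes all recordings share one fixed length
    def __init__(self, length):
        self.total = np.zeros(length, dtype=np.float64)  # wide intermediate sum
        self.count = 0
    def add(self, x):                       # x: one recording, e.g. float32
        self.total += x.astype(np.float64)  # widen before accumulating
        self.count += 1
    def mix(self):                          # divide only when the result is read
        return self.total / max(self.count, 1)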


xue

-Original Message- 
From: Alessandro Saccoia

Sent: Monday, December 10, 2012 9:41 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Precision issues when mixing a large number 
of signals




I don't think you have been clear about what you are trying to achieve.

Are you trying to compute the sum of many signals for each time point? Or 
are you trying to compute the running sum of a single signal over many 
time points?


Hello, thanks for helping. I want to sum prerecorded signals progressively. 
Each time a new recording is added to the system, this signal is added to 
the running mix and then discarded so the original source gets lost.
At each instant it should be possible to retrieve the mix as accumulated up 
to that moment.




What are the signals? are they of nominally equal amplitude?



normalized (-1,1)

Your original formula looks like you are looking for a recursive solution 
to a normalized running sum of a single signal over many time points.


nope. I meant summing many signals, without knowing all of them beforehand, 
and needing to know all the intermediate results





I could relax this requirement, and force all the signals to
be of a given size, but I can't see how a sample by sample summation,
where there are M sums (M the forced length of the signals) could
profit from a running compensation.


It doesn't really matter whether the sum is across samples of a single 
signal or across signals; you can always use error compensation when 
computing the sum. It's just a way of increasing the precision of an 
accumulator.
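
(For reference, the compensated summation being discussed - the Kahan 
algorithm from that wikipedia entry - in a minimal Python form:)

def kahan_sum(values):
    s = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # subtract the error carried from the last step
        t = s + y
        c = (t - s) - y      # (t - s) is what actually got added; c is the loss
        s = t
    return s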




I have looked at the wikipedia entry again; yeah, that makes total sense now. 
Yesterday night it was really late!





Also, with a non linear
operation, I fear of introducing discontinuities that could sound
even worse than the white noise I expect using the simple approach.


Using floating point is a non-linear operation. Your simple approach also 
has quite some nonlinearity (accumulated error due to recursive division 
and re-rounding at each step).


I see. cheers

alessandro



Ross
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Multichannel pitch detection?

2013-01-16 Thread Wen Xue

Hi Danijel,

I don't think there's a "standard" way to do it, for people take different 
approaches to suit their detectors and the signals. But roughly there are 
three schemes:


1. Adapt the pitch detector to multi-channel inputs. In a Bayesian detector, 
for example, this is done by replacing the conditional distribution 
P(input|pitch) with the multi-channel equivalent P(input1, input2|pitch). 
But not all detectors have such a straightforward multi-channel extension.


2. Run the mono-channel detector on each channel and combine the results. 
Usually the one-channel results are taken from one step before the last, 
including a list of candidate pitches and their confidence measures, so you 
can choose the one with the best confidence across all channels.


3. Run the mono-channel detector on a down-mix of the multi-channel input. The 
mixing may involve a small delay to each channel to maximize their 
correlation. There's always a chance to mix away the intended pitch, though.
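
(A minimal Python sketch of scheme 2 above, assuming the detector can expose 
its candidate list; all names here are hypothetical:)

def combine_channels(per_channel):
    # per_channel: one list per channel of (pitch_hz, confidence) candidates,
    # taken one step before the detector's final decision
    best = (None, -1.0)
    for candidates in per_channel:
        for pitch, conf in candidates:
            if conf > best[1]:
                best = (pitch, conf)
    return best  # the candidate with the best confidence across all channels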


In any case you may wish to ignore the subwoofer.

xue

-Original Message- 
From: Danijel Domazet

Sent: Wednesday, January 16, 2013 9:53 AM
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Multichannel pitch detection?

Hi all,
We implemented a pitch detection algorithm, and it works nicely for single
channel mono input. Now we need to estimate single pitch for multi-channel
input, for 5.1 surround for example. How should this be done, are there some
"standard" ways of doing it?

Any pointers welcome...


Danijel Domazet
LittleEndian.com


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] ITU 1770 RLB filter coefficients and biquad IIR filter

2013-01-16 Thread Wen Xue
It seems that with bilinear transforms the back-and-forth can be omitted. 
Because each transform maps 1 to 1 and -1 to -1 in the z-plane, the net 
effect of the inverse and forward bilinear transforms is replacing z by some 
(az+1)/(z+a). That "a" can be found by matching frequencies on the unit 
circle.



-Original Message- 


On Wed, 16 Jan 2013 06:07:51 -0500, robert bristow-johnson wrote:


if i were to try to re-calculate the coefficients, i would first factor
out the constant gain, then factor both numerator and denominator into
discrete-time poles and zeros.  then map those poles and zeros back to
analog poles and zeros using, i suppose the inverse bilinear transform
(with warping).  then re-transform back with the bilinear transform with
the new sampling rate.

i dunno.  that's how i might approach it.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Calculating the gains for an XY-pad mixer

2013-01-18 Thread Wen Xue
Somehow I feel it's the correlated case that deserves more attention. Things 
being "uncorrelated" simply means their correlation coefficient is zero; but 
for things that are "correlated" it can be anything from -1 to 1 except zero. 
You probably don't want to handle all these cases with the same set of gains.
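
(The general case in one formula: for unit-power inputs with correlation 
coefficient rho, the mix g1*x1 + g2*x2 has power g1^2 + g2^2 + 2*rho*g1*g2. 
A Python sketch, with a hypothetical helper name, that scales any raw gain 
pair to unit output power; the rho = -1, g1 = g2 corner has zero power and 
is guarded:)

import math

def normalize_gains(g1, g2, rho=0.0):
    p = g1*g1 + g2*g2 + 2.0*rho*g1*g2   # expected output power
    s = 1.0 / math.sqrt(p) if p > 0 else 0.0
    return g1 * s, g2 * s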


-Original Message- 
From: Aengus Martin

Sent: Friday, January 18, 2013 5:27 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Calculating the gains for an XY-pad mixer

I have no particular reason for doubting the correlated case, but I
suppose I wanted confirmation that the uncorrelated case is indeed
that straightforward, and to know if for some reason there might be a
better way of doing it.


On Fri, Jan 18, 2013 at 4:22 PM, Ross Bencina
 wrote:

On 18/01/2013 4:06 PM, Alan Wolfe wrote:


What you are trying to calculate is called barycentric coordinates,



Actually I don't think so.

Barycentric coordinates apply to triangles (or simplices), not squares 
(XY).


http://en.wikipedia.org/wiki/Barycentric_coordinate_system_(mathematics)
http://en.wikipedia.org/wiki/Simplex


Aengus wrote:

Do these seem like reasonable ways to get the gains for the two cases?


They seem reasonable to me. Do you have a reason for doubting?

Ross

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, 
dsp

links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp




--

www.am-process.org
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Overlap-add settings for pitch detection?

2013-01-22 Thread Wen Xue

Hi Danijel,

the choice of window size has much to do with what your pitched source is 
like. Usually the window should be long enough to include plenty of cycles 
to fight against noise, but short enough to finish before any dramatic pitch 
change. So if the pitch is known to be stable, the window can be very long. 
If not, the ideal window size will depend on how fast the pitch changes. For 
periodic pitch modulation the "effective" frame size is expected to be no 
more than 1/10 of the modulation period; so for a voice modulated at 6Hz the 
"effective" window size should be 16ms or less. Here "effective" refers to 
the central part of the window where most of the energy is located. The actual 
window size will be slightly (1.4x~2x) longer.
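
(Worked through in Python for the 6Hz example:)

mod_rate = 6.0                           # vibrato rate, Hz
eff = 1.0 / (10.0 * mod_rate)            # effective window <= 1/10 period: ~16.7 ms
lo, hi = 1.4 * eff, 2.0 * eff            # actual window, per the 1.4x~2x rule
print(eff * 1000, lo * 1000, hi * 1000)  # ~16.7 ms -> roughly 23 to 33 ms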


The overlap rate is very often 1/2 of the window size. Higher overlaps are 
more frequently used for synthesis than for analysis. Where time resolution 
is critical you can reduce the window size locally.


It rarely matters much which window function to use. Both Hann and Hamming 
are good. But if your pitch detector does differentiations, Hann might be 
the preferable one as it differentiates naturally at both ends.


Xue

-Original Message- 
From: Danijel Domazet

Sent: Tuesday, January 22, 2013 2:23 PM
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Overlap-add settings for pitch detection?

Hi music dsp,
In order to implement accurate pitch detection we are sending input signal
through Fourier analysis stage. Are there any recommended settings with
regards to:
- Window frame size?
- Window overlap factor?
- Window type (Hamming, Hann, etc.)?


Thanks.

Danijel Domazet
LittleEndian.com



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Overlap-add settings for pitch detection?

2013-01-24 Thread Wen Xue

Hi Danijel,

Hamming and Hann windows have the same frequency selectivity: both have the 
main lobe at 4 bins wide. For most "smooth" windows the frequency resolution 
is roughly reciprocal to the "effective" window size. So if you find one 
window has a better resolution than another at same size, it's very likely 
the result of its being bulky (compare Hamming with Blackman, for example). 
Kaiser and Gaussian windows allow you to tune the balance between the bulk in 
time and the bulk in frequency. But we like Hamming and Hann, because they 
are simple, and a 4-bin main lobe seems a good compromise. To further slim it 
down you'll need to swell up the window, pushing it towards the rectangular - 
and we all know what's lurking that way.
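
(You can measure this directly - zero-pad each window's FFT and read off 
where the main lobe first bottoms out. A small NumPy sketch, with arbitrary 
sizes:)

import numpy as np

n, pad = 64, 4096
for name, w in [("hann", np.hanning(n)), ("hamming", np.hamming(n)),
                ("blackman", np.blackman(n))]:
    W = np.abs(np.fft.rfft(w, pad))
    i = int(np.argmax(np.diff(W) > 0))   # first local minimum = main-lobe edge
    print(name, 2.0 * i * n / pad)       # full main-lobe width in bins: ~4, ~4, ~6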


The differentiations are often related to estimating frequencies. Say you 
have a sinusoid: if you differentiate it, it's amplified by its frequency. 
This can be arranged properly with windowed DFTs to give fairly accurate 
frequencies for the fundamental as well as harmonics, but it prefers the 
window to be differentiable.
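
(A toy Python illustration of the differentiation idea on a bare sinusoid - 
the RMS ratio of derivative to signal recovers the angular frequency; the 
proper windowed-DFT estimator is more involved:)

import numpy as np

fs = 44100.0
x = np.sin(2 * np.pi * 440.0 * np.arange(1024) / fs)   # test sinusoid, 440 Hz
dx = np.diff(x) * fs                                   # crude differentiation
w = np.sqrt(np.mean(dx**2) / np.mean(x[:-1]**2))       # amplification = omega
print(w / (2 * np.pi))                                 # ~440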


Best,
Xue

-Original Message- 
From: Danijel Domazet

Sent: Thursday, January 24, 2013 12:54 PM
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Overlap-add settings for pitch detection?

Thanks Alexander.
Thanks Xue.

Xue, you say:
" It rarely matters much which window function to use. Both Hann and Hamming

are good. But if your pitch detector does differentiations, Hann might be
the preferable one as it differentiates naturally at both ends."

I thought window types were crucial for frequency selectivity? What did you
mean by "if pitch detector does differentiations"?

Cheers,

Danijel Domazet
LittleEndian.com


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Wen Xue
Sent: Tuesday, January 22, 2013 4:31 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Overlap-add settings for pitch detection?

Hi Danijel,

the choice of window size has much to do with what your pitched source is
like. Usually the window should be long enough to include plenty of cycles
to fight against noise, but short enough to finish before any dramatic pitch
change. So if the pitch is known to be stable, the window can be very long.
If not, the ideal window size will depend on how fast the pitch changes. For
periodic pitch modulation the "effective" frame size is expected to be no
more than 1/10 of the modulation period; so for a voice modulated at 6Hz the
"effective" window size should be 16ms or less. Here "effective" refers to
the central part of the window where most of the energy is located. The actual
window size will be slightly (1.4x~2x) longer.

The overlap rate is very often 1/2 of the window size. Higher overlaps are
more frequently used for synthesis than for analysis. Where time resolution
is critical you can reduce the window size locally.

It rarely matters much which window function to use. Both Hann and Hamming
are good. But if your pitch detector does differentiations, Hann might be
the preferable one as it differentiates naturally at both ends.

Xue

-Original Message- 
From: Danijel Domazet

Sent: Tuesday, January 22, 2013 2:23 PM
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Overlap-add settings for pitch detection?

Hi music dsp,
In order to implement accurate pitch detection we are sending input signal
through Fourier analysis stage. Are there any recommended settings with
regards to:
- Window frame size?
- Window overlap factor?
- Window type (Hamming, Hann, etc.)?


Thanks.

Danijel Domazet
LittleEndian.com



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp

links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] filter smoothly changeable from LP<->BP<->HP?

2013-02-10 Thread Wen Xue
There must be a lot of ways to do it. The simplest I can think of is by 
frequency shifting. Say h(t) is a low-pass filter; then h(t)cos(wt) changes 
smoothly from lowpass to bandpass to highpass as w changes from 0 to pi. If 
you take the Hilbert transform of h(t), say it's g(t), then 
h(t)cos(wt)+g(t)sin(wt) also meets the requirement, while preserving the 
bandwidth along the way.
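
(A quick Python/SciPy sketch of that construction; the prototype cutoff, tap 
count and the helper name are arbitrary choices:)

import numpy as np
from scipy.signal import firwin, hilbert

h = firwin(255, 0.1)             # prototype lowpass h(t), cutoff 0.1 * Nyquist
g = np.imag(hilbert(h))          # its Hilbert transform g(t)

def morph(w):
    # w in [0, pi]: 0 gives lowpass, mid-range bandpass, pi highpass;
    # the h*cos + g*sin combination shifts the passband without widening it
    n = np.arange(len(h)) - (len(h) - 1) / 2   # modulator centred on the filter
    return h * np.cos(w * n) + g * np.sin(w * n)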


Xue

-Original Message- 
From: Bram de Jong

Sent: Sunday, February 10, 2013 11:23 AM
To: A discussion list for music-related DSP
Subject: [music-dsp] filter smoothly changeable from LP<->BP<->HP?

Hello everyone,

does anyone know of a filter design that can smoothly be changed from
LP to BP to HP with a parameter? IIRC LP/AP/HP could be done simply by
perfect reconstruction LP/HP filter pairs, but never seen something
similar for BP in the middle...

The filter doesn't need to be "perfect", it's for something
musical/creative rather than a purely scientific goal...

Any help very welcome! :-)

- Bram

--
http://www.samplesumo.com
http://www.freesound.org
http://www.smartelectronix.com
http://www.musicdsp.org

office: +32 (0) 9 335 59 25
mobile: +32 (0) 484 154 730
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] filter smoothly changeable from LP<->BP<->HP?

2013-02-11 Thread Wen Xue
I think in his "serial" LP-HP topology you're meant to use your parameter to 
control the cut-off frequencies, not the gains. It makes possible to have 
the filter be either LP, BP or HP at ANY time. If you use parallel LP-BP-HP 
and tune the gains, it's very likely that at some point the filter's neither 
LP nor BP nor HP.


But it comes back to the starting point: what's it for? If it's for a 
tuneable EQ then the parallel is ok. If one wants to sweep over the 
frequency ranges the serial might be the choice.


Xue

-Original Message- 
From: robert bristow-johnson

Sent: Monday, February 11, 2013 5:52 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] filter smoothly changeable from LP<->BP<->HP?


well, if they're in series, changing the gain on any component in the
chain merely changes the gain on the whole thing.  it does not change
the shape.  so i am not completely understanding what you're suggesting.

i don't know how to get BP in the middle of an LP and HP without
introducing a sorta quadratic gain function.

suppose you have three filters, LPF, BPF, and HPF (let's say you get
them outa the cookbook) all with the same resonant frequency and Q.  now
let's say you have a control parameter, u, that is -1 for LPF, 0 for
BPF, and +1 for HPF.  you can come up with a set of 2nd-order Lagrange
polynomials ( http://en.wikipedia.org/wiki/Lagrange_polynomial ) that
will go through 1 for the filter you want and takes on 0 for the two
filters you don't want.

   for LPF it is:   gain_LP(u) = (1/2)*u*(u-1)

   for BPF it is:   gain_BP(u) = (-1)*(u+1)*(u-1)

   for HPF it is:   gain_HP(u) = (1/2)*u*(u+1)

attach those gains to the corresponding filters and add the results
(filters in parallel) of those weighted filters.  if you're using the
cookbook (or most other definitions), you will see that the denominator
coefficients are the same for all three filters, so these gain factors
apply only to the numerator coefs.  you can smoothly pass from purely
LPF, through purely BPF, up to a purely HPF as u moves from -1 through 0
up to +1.
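
(Those three gain polynomials, applied to cookbook numerators over the shared 
denominator, in a short Python sketch; the function name is hypothetical:)

import numpy as np

def morph_biquad(b_lp, b_bp, b_hp, a, u):
    # u = -1 -> pure LPF, u = 0 -> pure BPF, u = +1 -> pure HPF
    g_lp = 0.5 * u * (u - 1.0)
    g_bp = -(u + 1.0) * (u - 1.0)
    g_hp = 0.5 * u * (u + 1.0)
    b = (g_lp * np.asarray(b_lp) + g_bp * np.asarray(b_bp)
         + g_hp * np.asarray(b_hp))
    return b, a   # same-denominator mixing: only the numerator coefs change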

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] M4 Music Mood Recommendation Survey

2013-02-21 Thread Wen Xue
They have to agree upon some measurement of emotion before arguing with each 
other. Maybe a geometric average of blood pressure and heart rate? I 
remember someone making a connection between music-induced emotion with 
goosebumps on the forearm - he literally counted the bumps!



-Original Message- 
From: Ross Bencina

Sent: Thursday, February 21, 2013 11:19 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] M4 Music Mood Recommendation Survey



On 22/02/2013 9:54 AM, Richard Dobson wrote:

"Listen to each track at least once and then select which track is the
best match with the seed. If you think that none of them match, just
select an answer at random.
"

Now I am no statistician, but with only four possible answers offered
per test, and with "none of the above" excluded as an answer (which
rather begs the question...),


You mean the one about adding to the large number of studies offering
empirical evidence in support of the assumption?


"""However, despite a recent upswing of research on musical emotions
(for an extensive review, see Juslin &Sloboda 2001), the literature
presents a confusing picture with conficting views on almost every topic
in the field.1 A few examples may suffice to illustrate this point:
Becker (2001, p. 137) notes that “emotional responses to music do not
occur spontaneously, nor ‘naturally’,” yet Peretz (2001, p. 126) claims
that “this is what emotions are: spontaneous responses that are dif?cult
to disguise.” Noy (1993, p. 137) concludes that “the emotions evokedby
music are not identical with the emotions aroused by everyday,
interpersonal activity,” but Peretz (2001, p. 122) argues that “there is
as yet no theoretical or empiricalr eason for assuming such specifcity.”
Koelsch (2005,p. 412) observes that emotions to music may be induced
“quite consistently across subjects,” yet Sloboda (1996,p. 387) regards
individual differences as an “acute problem.” Scherer (2003, p. 25)
claims that “music does not induce basic emotions,” but Panksepp and
Bernatzky(2002, p. 134) consider it “remarkable that any medium could so
readily evoke all the basic emotions.” Researchers do not even agree
about whether music induces emotions: Sloboda (1992, p. 33) claims that
“there is a general consensus that music is capable of arousing deep and
signifcant emotions,” yet Konec?ni (2003, p. 332) writes that
“instrumental music cannot directly induce genuine emotions in
listeners.” """

http://www.psyk.uu.se/digitalAssets/31/31194_BBS_article.pdf


Ross
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] crossover filtering for multiband application

2013-02-28 Thread Wen Xue
Not that I pretend to know much theory -- but I think these filters don't 
add up simply because they're not designed to do so. If one wants these 
filters to add up he has to patch them in some way. But at this stage it's 
already complicated by the uncooperative design.


Some observation on phase and group delays in this context: 1+z^-1 and 
1-z^-1 add up; so do 1/(1+0.5z^-1) and 0.5z^-1/(1+0.5z^-1). They all shift 
phases but their sums don't, because they are designed to add up.
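
(Easy to verify numerically - a quick check with SciPy's freqz on the 
second pair:)

import numpy as np
from scipy.signal import freqz

w, h1 = freqz([1.0], [1.0, 0.5])         # 1/(1 + 0.5 z^-1)
_, h2 = freqz([0.0, 0.5], [1.0, 0.5])    # 0.5 z^-1/(1 + 0.5 z^-1)
print(np.max(np.abs(h1 + h2 - 1.0)))     # ~1e-16: the pair sums to exactly 1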


xue

-Original Message- 
From: Theo Verelst

Sent: Thursday, February 28, 2013 1:30 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] crossover filtering for multiband application


About the multi band filtering:

 -  *All* filtering you can do, either analog or digital, will
inevitably have phase shifting as a consequence, no matter what people
will try to tell you about correcting networks (check out the theory and
preferably do your homework: ALWAYS is ALWAYS. And "zero phase" is a
term from Control Theory, not filter theory, which REALLY means
something else than "zero phase shift" in general)

 - *All* filters, analog (all the well known filter kinds with the well
known names since early radio technology in the 1900s), and digital, be
it Finite or Infinite impulse response (of course "linear"), are far
from orthogonal, and also are mostly far from adding up to the original
signal when combined+inverted. Check it out, and beware why all the
stuff about amplifiers and audio production never much gets there where
it sounds great: it's a complicated problem where most of the people
working this way have little knowledge of even the most basic theories.

 - *Almost all* digital filters being in use and talked about here will
have very serious, measurable and audible sampling issues. Really, a
44.1kHz sampled digital signal processing system reproducing a 1 kHz
wave with only 45 samples IS GOING TO DISTORT (unless you know exactly
what you are doing and/or the system isn't causal (or has very long
delay)), and WILL HAVE very serious non-linearities, in most cases
discussed here.

 - *Even if* you make a sort of partially (please, check out the theory
and compute your filter amplitude and phase response (MATLAB/Octave
graphs for all I care), and be a bit honest) phase corrected and
somewhat amplitude and squared amplitude "adding up" filter bank in
digital, or in some cases in (more or less linear) analog form, YOU WILL
GET GROUP DELAY issues. ***ALWAYS*** ... (unless you do other work too,
and know things about the signals you're going to send through the
system, etc.)


Not that I like to be the theoretical spoilsport, but unless I observe
more theoretically and practically interesting constructions and
synthetic considerations which can work, the whole field of DSP seems to
be very inviting for generation after generation of quacksalvers, trying
to come across as relevant. This is not very useful, unless people like
their hobby-ing around, of course, no scientific problem with that, but
making things work at a serious EE level is very little served by all
the considerations I've seen here, except of course some subjects get
mentioned, which of course is fine.

So don't be fooled too much by fancy DSP language or grand filter type
names: it isn't easy to build a working multi-compression system,
ESPECIALLY if you want it to sound good, comply with certain loudness
controlling rules, and work on general signals.

It is possible to create a nice filter bank and get work done by it, to
have a digital workstation do multi-compression on it's synthesized
voices, and to put a CD or other recording through a radio type of
multi-band processor, but in most cases, there are things known about
the signals that go through such machinery, and in every case it pays to
not forget the main theories at hand, which among other things say: the
higher the sampling rate, the more apt (not "the more complicated", but
often "the more natural", as in forming a parallel with mechanical
systems) and of course the more accurate bits being used, the better the
result will sound!

Regards,

  T. Verelst

P.S. I'm aware of the many caps here, but seriously, I've observed DSP
stuff for decades, and can't help feeling there's a great need for
theory. I made well working low distortion, low order Butterworth analog
filter bands for my monitoring systems, 30 band double multi-compressed
recording path DSP from Ladspa blocks running at 192kHz/32bit, sampling
distortion averaging signals paths with mastering effects, and very
successful synthesizer multi and single compressed sounds (a.o. with a
Kurzweil), so I know stuff can be done with good sounding results, but
the directions of many half-scientific approaches usually fall through
as insufficient.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] crossover filtering for multiband application

2013-02-28 Thread Wen Xue

On 01/03/2013 00:29, robert bristow-johnson wrote:

On 2/28/13 5:44 PM, Wen Xue wrote:
Not that I pretend to know much theory -- but I think these filters 
don't add up simply because they're not designed to do so.


Linkwitz-Riley filters *do* add up to an all-pass filter and they are 
designed to do that.  they *don't* add up to 1, and you can make 
complementary filters add to one, but they don't get as sharp of 
transition slopes that the L-R filters do.


I was following T.V.'s comments which was probably not addressed toward 
L-R but filters in general.
If one wants these filters to add up he has to patch them in some 
way. But at this stage it's already complicated by the uncooperative 
design.


curious as to what is uncooperative.

It simply conveys that the conventional filters are not considerately 
conceived for your convenience to convert them to conform to the 
add-to-1 concord.
Some observation on phase and group delays in this context: 1+z^-1 
and 1-z^-1 add up; so do 1/(1+0.5z^-1) and 0.5z^-1/(1+0.5z^-1). They 
all shift phases but their sums don't, because they are designed to 
add up.


i think what L-R filters do is at a higher order than that.  for LR2, 
in the *s* domain, it's


LPF(s)   =   1 / (1 + s/Q + s^2)
and
HPF(s)   =   -s^2 / (1 + s/Q + s^2)

note that there is no "s" term in the numerator of either, which would 
make either LPF or HPF act more like a first-order than a second-order 
filter.  if there is an s term in either, the slope of the "skirt" in 
the stopband would be -6 dB/oct for the LPF and +6 dB/oct for the 
HPF.  but without the middle s term, it's -12 dB/oct and +12 dB/oct as 
a 2nd-order filter ought to be.


so now you add them up:

LPF(s) + HPF(s)  =  (1 - s^2) / (1 + s/Q + s^2)

now, when you set the Q to 1/2, do you see what happens?
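
(working it out, for the reader following along: at Q = 1/2 the sum is
(1 - s^2)/(1 + 2s + s^2) = (1-s)(1+s)/(1+s)^2 = (1-s)/(1+s), a first-order
all-pass.)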

so, for a 2-band L-R, you have no choice about Q, but for this 3-band 
"Duelund 3-way crossover", nearly any Q can be used and it adds up to 
the same all-pass filter, but the choice of Q will affect how much 
gain you mix the middle band in and 
This is at the cost that the middle band has s^2 (so only half as sharp 
as the HP with s^4 only and LP with 1). You can do the same with LR2 -- 
just give it a middle band with s^1 in the numerator and you'll see that 
all Q's will do as long as the poles are real. With the 3-way you can 
handle imaginary poles which you can't do with real LR2.


if the Q is 1/2, that gain is 0; no middle band and we're back to a 
2-band crossover.  actually, that's not quite right either, since for 
the LR2 it's "-s^2" for the HPF. and for the Dueland, it's "+s^2", a 
hard polarity difference.  i have to think about this more.


No we're not back to the Q=1/2 LR2. We would get there with my Q-able 
LR2, but with the 3-way it's s^4, for both LP and HP are squared if you 
remember.


On 2/28/13 8:30 AM, Theo Verelst wrote:


About the multi band filtering:

 -  *All* filtering you can do, either analog or digital, will
inevitably have phase shifting as a consequence, no matter what people
will try to tell you about correcting networks (check out the theory and
preferably do your homework: ALWAYS is ALWAYS.


wire?  (or does that not count as a filter?)


And "zero phase" is a
term from Control Theory, not filter theory, which REALLY means
something else than "zero phase shift" in general)


there *are* linear-phase digital filters which can be legitimately 
thought of as "zero-phase" with a delay.  in the case of quadrature 
processing, the Hilbert Transform filter that we design must have 
delay to be causal and that adds a linear term to the phase of the 
HTF.  so now you add the same linear term to the phase shift of a wire 
(a "zero-phase filter") and what you have left is a delay.  so, to do 
all your quadrature magic, you have to do it to identically delayed 
HTF and wire.



 - *All* filters, analog (all the well known filter kinds with the well
known names since early radio technology in the 1900s), and digital, be
it Finite or Infinite impulse response (of course "linear"), are far
from orthogonal, and also are mostly far from adding up to the original
signal when combined+inverted.


if you have strictly complementary filters, which means the filter 
impulse responses add up to a single (kronecker) delta (the impulse 
response is a delayed impulse):


 h[n]  =   delta[n-D]

then exactly the original signal is recovered, except that it is 
delayed by D samples.



Check it out, and beware why all the
stuff about amplifiers and audio production never much gets there where
it sounds great: it's a complicated problem where most of the people
working this way have little knowledge of even the most basic theories.


well, for analog, it's hard to get linear phase and to get it 

Re: [music-dsp] crossover filtering for multiband application

2013-03-01 Thread Wen Xue



On 01/03/2013 20:18, Theo Verelst wrote:

...
>>  -  *All* filtering you can do, either analog or digital, will
>> inevitably have phase shifting as a consequence, no matter what people
>> will try to tell you about correcting networks (check out the 
theory and

>> preferably do your homework: ALWAYS is ALWAYS.
>
>wire?  (or does that not count as a filter?)

:) Sure. There are things like wavelets with some suitable windowing 
(various kinds, complicated, in most cases theory prefers infinite 
sums/integrals) which can even do some sort of "orthogonal" filtering, 
probably also adding up (within some given accuracy limits) to "pass 
through" when done right. 


Those used in DWT are perfect reconstruction filters that "pass through" 
as accurately as the machine precision allows. The theories have infinite 
sums but the implementations are quite simple and efficient.


It is always a question, when you change a little thing (like 
compression of one or more bands), whether the result, which then 
no longer adds up to a "wire" or simple N-sample delay, is 
something sensible. 


That is exactly what puzzles many beginners, esp. those who get hands-on 
before reading a real textbook.


Also, many FIR filters sound not very natural because they don't 
implement, to reasonable accuracy, the equivalent of actual poles and 
zeros, causing in that interpretation signal distortion. 


I think it's all up to the design. You can design IIRs with actual 
zeroes and poles that sound even worse.


And they may well be abused in the signal path for other purposes than 
the main filter function. Of course all kinds of compromises and 
accuracy fixes may be possible and useful.


Also, like with some per-sample batch or averages-based FFT-based 
filtering, very soon when not simply "reverse FFT-ing the cepstrums", 
the "straight" delay idea is left very far behind. Meaning that the 
actual filter convolution does a whole lot of stuff besides for 
instance an intended frequency/amplitude equalization.


Again it's up to the design. You can do perfect convolution with FFT, 
and you can do other things as well. There are many more things one can 
do with DFT that one can't with analog. Is that a blessing or a curse?


Also, I meant by distortion, that the power sequence generated by for 
instance an "analog equivalent" z-domain function in the digital domain 
(with the same form as the jW or "s" transfer function in the Fourier 
domain), when reconstructed into an analog signal (after convolution 
with an input signal, say a test impulse) by either a close-to-perfect 
reconstruction, a "normal" DA converter oversampling filter, or just 
simple anti-aliasing, isn't going to be a perfect match in relevant 
cases.


That is not very fair. There are no true equivalences between analog and 
digital filters. Analog transfer functions are rational and the digital 
ones are triangular. But I don't say the digital filter's distorting the 
signal just because I can't get the same result as from the analog. 
They're expected to behave differently.


Also, some filters would require the equivalent of "upsampling", which 
theoretically is an infinitely long filter, and to be done accurately 
requires in practice a pretty long (sinc-like) filter. Of course 
simple solutions can work a little magic, but distortion in the sense 
of harmonic, inter modulation and transient distortion is inevitable 
and quite audible, just like "imperfect reconstruction" in all normal 
DA (not AD) convertors.


That's what the gap between 20k and 22.05k is for - it allows you to 
upsample with a much shorter filter. If one forgets to leave that gap he 
can't blame DSP for it.


I have the strong impression that in machines like the Kurzweil 
there's a compelling logic starting from the sample preparation, and 
ending with an overall machinery acting on the output signal which 
when the proper DSP and effects are used can make for pretty perfect 
output signals, but I have the impression it's complicated and works 
well only when all the steps are done right. Interesting though.


Also the DSP machinery in audio equipment can take into consideration 
that the human-perceived or electronically measured audio power 
resulting from using all kinds of DSP is kept limited, and made 
human-friendly, and that there is a well working warning mechanism 
when the limits of blasting audio waves into a listening space are 
reached. Many modernistic approaches appear to search for the 
opposite, unfortunately.


Agree to some limit. DSP's so flexible that, forgetting the analog 
origin, one easily gets lost in it. Yet it's like a well-tempered piano 
to the ever harmonious lyre. It'd charm beasts and forests to play the 
piano like Orpheus, but Schoenberg may say otherwise someday.


xue


Theo V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book 
reviews, dsp links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Efficiency of clear/copy/offset buffers

2013-03-09 Thread Wen Xue
I think one can trust the compiler to handle a/3.14 as a multiplication. If 
it doesn't it'd probably be worse to write a*(1/3.14), for this would be a 
division AND a multiplication.



-Original Message- 
From: Nigel Redmon

Sent: Saturday, March 09, 2013 5:15 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Efficiency of clear/copy/offset buffers

On Mar 8, 2013, at 2:53 PM, ChordWizard Software  
wrote:
But some are quite new - I never realised that multiplication ops were 
more efficient than divisions.


Worthy of some background...

When multiplying, you can do all the necessary multiplications in parallel 
(think of performing a long multiply by hand—1234 x 5678 for instance. It's 
easy to imagine how you could speed this up by having a few friends help 
you, where you manage the first digit, 4 x 5678, another handles 3(0) x 
5678, etc., at the same time.) but when you divide, you need to finish one 
digit before you know what the remainder is and you can move to the next 
digit. There's no way to look ahead—you need the result of the first step 
before doing the second. So, processors optimize multiplication and addition 
with parallel circuits, but division is iterated in a microcode loop (or 
done entirely in software). The 56K DSPs, for instance have a single-cycle 
multiply, but for division, "DIV" is a single division iteration—you need to 
do it for every digit you need to generate. It's just the nature of the 
operation.


Compilers may help you optimize constants, but it's always best to keep 
track of things yourself so you know what you're getting. So, yes, multiply 
by the sample period instead of dividing by the sample rate, etc.
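
(The pattern in a couple of lines of Python - hoist the division out of the 
per-sample path; the values are just illustrative:)

fs = 44100.0
inv_fs = 1.0 / fs              # divide once, outside the loop
phase_inc = 440.0 * inv_fs     # per sample: a multiply instead of a divide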


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Songify?

2013-03-19 Thread Wen Xue
In the old days the text to songs had to be verse, as opposed to prose, in 
that it needed to comply with such things as measure and rhythm, which 
would automatically make some sense when adapted to music with the same 
metres. This still governs many songs today, although there's a tendency 
for texts to go more and more prosaic.


Aligning verse to music is straightforward: verse comes in alternate 
stressed and unstressed syllables, music comes in alternate down and up 
beats. One aligns a stressed syllable to a downbeat and an unstressed 
one to an upbeat. On a wider scale the verse also comes in couplets, as 
music in pairs of phrases. By stretching some stressed syllables over 
multiple beats one can align couplets to phrase pairs seamlessly.


To achieve similar effect by signal processing you'll need to segment 
speech into (the equivalent of) couplets and stressed/unstressed 
syllables, and music into phrase pairs and down/up beats. The closer the 
match between the two structures the better sense it makes, presumably.
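
(As a toy Python sketch of that alignment step - the data layout and helper 
name are made up for illustration:)

def align(syllables, beats):
    # syllables: list of (text, stressed) pairs; beats: list of (time, is_downbeat)
    out, b = [], 0
    for text, stressed in syllables:
        while b < len(beats) and beats[b][1] != stressed:
            b += 1                       # skip beats of the wrong kind
        if b == len(beats):
            break
        out.append((text, beats[b][0]))  # stressed -> downbeat, unstressed -> upbeat
        b += 1
    return out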


xue



On 19/03/13 10:42, Danijel Domazet wrote:

Hi music dsp,
Does anyone know how Songify mobile app works? The one that "turns speech
into music automatically". The app takes two inputs, user speech, and
predefined underlying music (probably pre-analyzed too). The speech is
processed and mixed into the music. It is obvious that pitch-shifter with
heavy auto-tuning does the job, but where and what to pitch-shift in order
for all this to make sense? What would be the steps to achieve something
similar?

Thanks,
Danijel Domazet
LittleEndian.com


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Strange problem in fixed point overlap-add processing

2013-03-24 Thread Wen Xue

Does anyone have an idea exactly how discontinuous it should be to be heard?

When I did my DFT modification with OLA it was usually 50% overlap of 1024 (or 
above) point windows, Hann or Hann/Hamming or the like. I never heard a click 
or pop. I gathered that as long as it was properly windowed the OLA would 
not incur discontinuities that were not already there in the middle of a 
frame. Is that right or wrong?
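
(For the stationary part at least, that can be checked directly: with a 
periodic Hann at 50% overlap the shifted windows sum to an exact constant, 
so plain OLA adds no discontinuity of its own. A quick NumPy check:)

import numpy as np

n = 1024
w = np.sin(np.pi * np.arange(n) / n) ** 2    # periodic Hann
s = np.zeros(4 * n)
for start in range(0, len(s) - n + 1, n // 2):
    s[start:start + n] += w                  # 50% overlap-add of bare windows
print(s[n:3*n].min(), s[n:3*n].max())        # both 1.0 away from the edges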



-Original Message- 
From: Theo Verelst

Sent: Sunday, March 24, 2013 4:28 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Strange problem in fixed point 
overlap-add processing





Hello all,

I am experiencing  a quite frustrating problem these last few days
and I was wondering if anyone has any ideas on how to tackle this.

I am developing a speech processing algorithm on a TI C5515.


First I thought you might be working on one of those great superfast TI
8-core dsp boards, but then I read "speech coding" so I suppose it is
more phone or usb stick internet phone stuff...


Interesting subject, of course there are a number of generalities when
working with speech coding (as far as I know it) and FFT based filters:

Robert already indicated the buffer length and processing pipeline
filling issue: when you start out, it will take a while (in almost every
practical case) for your FFT computation to get a result, which then
suddenly appears, hence causing an abrupt change in signal (click).

Of course when you talk about a "real value changing the gain of a set
of FFT transform outputs", you multiply both the REAL and the IMAGINARY
part of those FFT transform output tuples? Otherwise you'd change
the phase of those frequency measurements.

Finally, if you use averaging of either the FFT frequency/phase
information or the frame of inverse FFT values based on one set of bins,
you have to decide whether that averaging should be done with
sample-by-sample shifting and averaging, or purely frame based, because
in case you multiply one frame of either FFT transform values, or the
back-transformed (to time samples) frame, and the next frame gets a
different amplification factor, you'll hear a discontinuity, just like
when you'd suddenly change volumes with no smoothing!

Theo V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] note onset detection

2013-08-05 Thread Wen Xue
literally "novelty" is something going on now that wasn't before - see now 
perfectly that matches the idea of note onsets. ideally a complete onset 
detector shall test that 1) a note exists at t1 and 2) it hadn't existed at 
t0 to signal a positive between t0 and t1.


for some reason most of today's onset detectors don't work that way, but 
merely look for signs suggestive of onsets with "novelty functions". they 
can still be very useful, but it's good to remember they haven't truly 
finished the job yet.
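
(a typical novelty function of that kind, spectral flux, as a short numpy 
sketch - the parameters and name are arbitrary:)

import numpy as np

def spectral_flux(x, n=1024, hop=512):
    # sum of positive magnitude increases between consecutive frames;
    # its peaks *suggest* onsets - they still need confirming
    w = np.hanning(n)
    prev, flux = None, []
    for start in range(0, len(x) - n + 1, hop):
        mag = np.abs(np.fft.rfft(w * x[start:start + n]))
        if prev is not None:
            flux.append(np.maximum(mag - prev, 0.0).sum())
        prev = mag
    return np.array(flux)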


say you push a humming voice through a pitch estimator: a change in the 
pitch estimate is merely *suggesting* an onset. to confirm it you still need 
to see the note (is and wasn't) there.




-Original Message- 
From: robert bristow-johnson

Sent: Monday, August 05, 2013 9:01 PM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] note onset detection


it seems to need something critical to be defined:  hNoveltyFunc(X, f_s)

i wonder how hNoveltyFunc is defined?

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] PSOLA pitch shifting - resample or not?

2013-10-19 Thread Wen Xue

Maybe a beginner's question here:

when pitch-synchronized OLA is used to modify speech pitch, do we resample 
the original signal or not?


In the traditional view that pitch_shifting = time_scaling + resampling, the 
overlapped parts should be resampled, so that the wave shape of each period 
is preserved. However from what I read there seems to be a strong opinion 
that by discarding the resampling step one can modify the pitch while 
preserving formants. Is that true? Which is the proper way to change the 
pitch?


Cheers,
Xue 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] PSOLA pitch shifting - resample or not?

2013-10-21 Thread Wen Xue

Many thanks Rob.

I'm somewhat puzzled by the grain size being possibly smaller than 2N (i.e. 
2M when M < N) in the time domain. Maybe I'm slow to see the truth but right 
now it just doesn't feel right to me.


Is there some well-accepted rationale for how we granulate a piece of speech 
for PSOLA? The 2N rule seems very plausible, for it (combined with a Hann 
window of 2N) does give an exact block sampling at rate N. It's not the only 
option to that effect though.


Cheers,
Xue

-Original Message- 
From: Robert Bielik

Sent: Monday, October 21, 2013 3:56 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] PSOLA pitch shifting - resample or not?

Hi again Xue,

Robert Bielik skrev 2013-10-19 16:14:
No. The formant is preserved just by NOT resampling the original signal. 
The pitch of the signal is only dependent on the periodicity
of each wave "granule", which is pretty much a windowed snapshot of the 
original signal with length 2*N where N is the original periodicity.


Further to the point, the windowed granule size should be 2*min(N,M) where N 
is original periodicity and M is target periodicity.


Regards,
/Rob
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] PSOLA pitch shifting - resample or not?

2013-10-22 Thread Wen Xue
One issue I find with 2N is that if you downshift by more than one octave 
you get gaps between the grains. In such a case I'm thinking you may use 
something like 3N or 4N or 5N so that the output grains also have ample 
coverage on the time axis. For example if you choose the smallest kN larger 
than 2M, you'll safeguard at least a 50% overlap rate in the output.


Xue

-Original Message- 
From: robert bristow-johnson

Sent: Tuesday, October 22, 2013 12:13 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] PSOLA pitch shifting - resample or not?


hey, thanks for picking this up, Rob.  i am still a little bleary-eyed
from the AES convention that ended yesterday.

On 10/21/13 3:56 AM, Robert Bielik wrote:

Hi again Xue,

Robert Bielik skrev 2013-10-19 16:14:
No. The formant is preserved just by NOT resampling the original signal. 
The pitch of the signal is only dependent on the periodicity
of each wave "granule", which is pretty much a windowed snapshot of the 
original signal with length 2*N where N is the original periodicity.


Further to the point, the windowed granule size should be 2*min(N,M) where 
N is original periodicity and M is target periodicity.




this is interesting, but i am not so sure i agree with it.  i've always
been going under the assumption that the grain size is 2N, twice the
length of the input period (and overlapping complementary windows so
that at a shift of 0 cents, there is perfect reconstruction of the
original).  but i always thought that if upshift, there would be more
than 2 overlapping grains.  for a maximum of 1 octave up, i've used a
maximum of 4 overlapping grains.

but i am *very* interested to find out if/that my previous M.O. is wrong.

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] PSOLA pitch shifting - resample or not?

2013-10-23 Thread Wen Xue
Now here is what I understand of the theory behind PSOLA with 2N-sized 
window:


Say the period is N, and we break the signal into Hann-windowed grains of 
size 2N and overlap N. Obviously all these grains are identical apart from a 
shift of kN. Let these grains be h(t+kN), where h(t) is 0 outside (-N,N). 
Then the original signal is a convolution of a pulse train with h(t). PSOLA 
takes this pulse train as the glottal wave and h(t) as the vocal tract 
response. The rest are fairly straightforward. One notable point is that 
wherever you put the Hann windows, they are centred at the glottal pulses by 
definition. Different alignments of the pulses will produce different h(t)'s 
though.
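
(A minimal NumPy sketch of the procedure just described - grains of size 2N 
taken every N samples and overlap-added every M samples; a real PSOLA would 
also centre the grains on the glottal pulses, and the helper is hypothetical:)

import numpy as np

def psola(x, N, M):
    # N: source period in samples, M: target period; no resampling involved
    w = np.sin(np.pi * np.arange(2 * N) / (2 * N)) ** 2   # Hann grains, size 2N
    n_grains = (len(x) - 2 * N) // N
    y = np.zeros(n_grains * M + 2 * N)
    for k in range(n_grains):
        y[k * M : k * M + 2 * N] += w * x[k * N : k * N + 2 * N]
    return y   # M = N reproduces x (apart from the fade at both ends)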


In this scenario LP-PSOLA is but another way to get h(t). In this case h(t) 
is a Hann-windowed grain of size 2N convolved with the LP filter. If the 
LP-residue behaves like noise, then the grains associated with different 
alignments would look much like each other. Not very sure how much that 
helps, but LP-PSOLA does solve the gap issue with large down-shifting.


Xue


-Original Message- 
From: Ross Bencina

Sent: Wednesday, October 23, 2013 2:19 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] PSOLA pitch shifting - resample or not?

Hi Guys,

It seems to me that the missing link here is recognising the theory
behind this approach:

The idea is to isolate each vocal tract filtered glottal pulse in its
own grain (i.e. glottal pulse convolved with the impulse response of the
vocal tract). Thus changing the grain rate is more or less equivalent to
changing the glottal pulse rate leaving the vocal tract IR remains
unchanged (except you're also convolving with a window).

If the IR length is longer than the fundamental period you won't be able
to isolate the pulses exactly. But if the IR is shorter than the period
then you would expect lowering the frequency to add gaps. Similarly,
raising the frequency would increase overlap of each filtered glottal pulse.

What I'd like to know is what's the best way of centering the windows on
the pulses? and is it better to use asymmetrical windows?

Ross.


On 23/10/2013 2:05 AM, Robert Bielik wrote:

Wen Xue skrev 2013-10-22 16:53:

One issue I find with 2N is that if you downshift by more than one
octave you get gaps between the grains.


Exactly. This is the point :) Otherwise you won't get the impression
that you've downshifted the pitch that much.


In such case I'm thinking you may use something like 3N or 4N or 5N so
that the output grains also have ample coverage on the time axis. For
example if you choose the smallest kN larger than 2M, you'll safeguard
at least 50% overlap rate in the output.


Problem is that if you have more than 2N size of grain, you'll introduce
the original pitch in the resulting spectrum (with higher amplitude the
larger the grain gets), and I don't think that is what you want...

/Rob



Xue

-Original Message- From: robert bristow-johnson
Sent: Tuesday, October 22, 2013 12:13 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] PSOLA pitch shifting - resample or not?


hey, thanks for picking this up, Rob.  i am still a little bleary-eyed
from the AES convention that ended yesterday.

On 10/21/13 3:56 AM, Robert Bielik wrote:

Hi again Xue,

Robert Bielik wrote 2013-10-19 16:14:

No. The formant is preserved just by NOT resampling the original
signal. The pitch of the signal is only dependent on the periodicity
of each wave "granule", which is pretty much a windowed snapshot of
the original signal with length 2*N where N is the original
periodicity.


Further to the point, the windowed granule size should be 2*min(N,M)
where N is original periodicity and M is target periodicity.



this is interesting, but i am not so sure i agree with it.  i've always
been going under the assumption that the grain size is 2N, twice the
length of the input period (and overlapping complementary windows so
that at a shift of 0 cents, there is perfect reconstruction of the
original).  but i always thought that if you upshift, there would be more
than 2 overlapping grains.  for a maximum of 1 octave up, i've used a
maximum of 4 overlapping grains.

but i am *very* interested to find out if/that my previous M.O. is wrong.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] PSOLA pitch shifting - resample or not?

2013-10-26 Thread Wen Xue
One paradigm of asymmetric windows is to convolve a symmetric one (like the 
Hann) with a filter. So what they actually do is 1) inverse-filter the 
sound; 2) PSOLA it with a Hann window; 3) filter the outcome. There seem to 
be plenty of papers discussing the use of the linear-predictive filter in 1) and 3).


But I've been wondering 'bout this: why centre the window on an energy 
bump? And what if there's no bump?


Xue

-Original Message- 
From: robert bristow-johnson

Sent: Sunday, October 27, 2013 9:29 AM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] PSOLA pitch shifting - resample or not?

about the first, i would square the incoming audio and filter it with a
LPF but with a high cutoff frequency.  look for maximum bumps in that
smoothed squared waveform (maximum energy), record the latest bump
location, and *nudge* the window location so that the center of the
windows (assuming a symmetrical Hann-like window) eventually gets
centered around the maximum energy pulses.  in other words, 99% of the
location of the window should be 1 period later (as determined by the
pitch detector) than the previous window location.  and 1% or 2% should
be nudging it either a little earlier or a little later toward the
nearest maximum energy pulse.
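
A rough numpy sketch of this nudging scheme as described above (the function
and parameter names are illustrative, period is in samples, and the 1% or 2%
appears as alpha):

import numpy as np

def nudge_pitch_marks(x, period, fs, alpha=0.02, cutoff=1000.0):
    # smoothed energy: the squared signal through a one-pole LPF with a
    # fairly high cutoff
    a = np.exp(-2 * np.pi * cutoff / fs)
    e = np.empty(len(x))
    s = 0.0
    for n in range(len(x)):
        s = (1.0 - a) * x[n] * x[n] + a * s
        e[n] = s
    marks = [period]
    while marks[-1] + period + period // 4 < len(x):
        guess = marks[-1] + period            # 99%: one period later
        lo, hi = guess - period // 4, guess + period // 4
        bump = lo + int(np.argmax(e[lo:hi]))  # nearest energy maximum
        marks.append(int(round((1 - alpha) * guess + alpha * bump)))  # nudge
    return marks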

if you're doing an asymmetrical window (which i haven't done), perhaps
center the maximum of the window around the maximum energy pulse.  the
problem i have with the non-symmetrical window is making it sufficiently
complementary.  if you have complementary windows (upslope + downslope =
1), then if there is zero pitch shifting, what comes out is an exact
(but delayed) replica of what goes in.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] PSOLA pitch shifting - resample or not?

2013-10-29 Thread Wen Xue

On 29/10/2013 08:42, robert bristow-johnson wrote:


Rob, i think what Thilo is referring to is the subsample positioning 
of an entire grain when it is launched in the PSOLA system.  i have 
heard this done both ways (grain positioning accurate to subsample 
precision vs. accurate only to sample precision) and the difference 
was, to my ears made of clay, virtually inaudible.


Using a synthesized signal (i.e. very accurate pitch) I'm finding the 
rounding effect slightly audible at 44.1 kHz, but as added noise rather 
than as jitter. The effect is easily visible on the spectrogram. You get 
more noise if both the source and destination grains are rounded to 
integer positions, less if only one is.
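
For reference, a cheap fractional placement is to split the grain linearly 
between the two neighbouring integer offsets (a sketch, assuming the grain 
fits inside y; a proper fractional-delay filter would do better):

import numpy as np

def add_grain_frac(y, grain, pos):
    # overlap-add `grain` into `y` at fractional position `pos` by linear
    # interpolation: a delay of f samples splits the grain (1-f)/f between
    # the two neighbouring integer positions
    i = int(np.floor(pos))
    f = pos - i
    y[i : i + len(grain)] += (1.0 - f) * grain
    y[i + 1 : i + 1 + len(grain)] += f * grain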


Xue

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] family of soft clipping functions.

2013-11-02 Thread Wen Xue

But, soft-clipping is not going to change periodicity, is it?

So if you soft-clip a sine wave, be it polynomial or not, the outcome is 
periodic with the same period, so it contains only perfect harmonics. It 
cannot behave in the "folded alias" way one usually suspects.


Xue

On 02/11/2013 06:36, robert bristow-johnson wrote:


just to be clear.  the general rule is that an Nth-order polynomial 
can generate images at frequencies up to the Nth multiple of the 
frequency of the original baseband image.  it is sufficient to 
oversample by a factor of (N+1)/2 to prevent any of these generated 
images from potentially folding back into the baseband.  e.g. 
3rd-order softclipping requires upsampling by a factor of 2.  another 
e.g. 7th-order softclipping requires upsampling by a factor of 4 to 
avoid any folded aliases from contaminating the original baseband.
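
A quick numpy check of both points (level, sample rate and frequency are
arbitrary): an odd 3rd-order waveshaper on a 1 kHz sine at 48 kHz puts
energy at 1 kHz and 3 kHz only. Push f0 above fs/6 = 8 kHz and the 3rd
harmonic would fold.

import numpy as np

def softclip3(x):
    # odd 3rd-order polynomial soft clip (valid for |x| <= 1)
    return 1.5 * x - 0.5 * x ** 3

fs, f0 = 48000, 1000
t = np.arange(fs) / fs
y = softclip3(0.9 * np.sin(2 * np.pi * f0 * t))

spec = np.abs(np.fft.rfft(y))
print(np.nonzero(spec > 1e-6 * spec.max())[0])   # [1000 3000]: harmonics only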




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Frequency bounded DSP

2014-01-03 Thread Wen Xue
Why not just inverse-transform any band-limited spectrum? (or have I got the 
question wrong?)
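
That is, something like this sketch (sizes and the occupied bin range are
arbitrary); the result is exactly band-limited, though also n-periodic:

import numpy as np

n = 4096
rng = np.random.default_rng(0)
lo, hi = 20, 400                    # occupied bins, all well below Nyquist
spec = np.zeros(n // 2 + 1, dtype=complex)
spec[lo:hi] = rng.normal(size=hi - lo) * np.exp(2j * np.pi * rng.random(hi - lo))
x = np.fft.irfft(spec)              # length n, band-limited by construction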



-Original Message- 
From: Theo Verelst

Sent: Friday, January 03, 2014 8:49 AM
To: A discussion list for music-related DSP
Subject: [music-dsp] Frequency bounded DSP


Hi all,

A little theoretical experiment, with practical applicability.

Everybody has heard of the idea of "Aliasing Distortion", right?

So what can we do to prevent it? We want to prevent it because there's a
beautiful Sampling+Reconstruction theory that says we can sample *any*
signal that is frequency limited, such that it has no frequency
components above half the sampling frequency, and from those samples we
can *perfectly* reconstruct the original, actual signal.

So the next question, which I referred to a little while back, is how we
can construct such signals, and whether we can construct them at all.
Well, evidently we can: a sine wave starting at t=-inf can be Fourier
analyzed to have a very simple spectrum (a single peak at the sine
frequency, scaled by its amplitude, depending on your type of
mathematical analysis).

So if we take a sine wave signal, lower than the Nyquist Frequency,
we're good: we sample the sine wave at equal distance, and we have
complied with sampling theory, so in principle we can entrust the signal
to be properly put into a zero-distortion sine wave coming out of our
(ideal) speakers, provided we have a perfectly reconstructing Digital to
Analog Converter (which we don't yet, but we can high quality upsample,
pre-analyze and somewhat even out the errors and make something of it
anyway, practically for the moment).

Advantages: the signal is completely sample-independent, meaning the
theory guarantees the samples are unnoticeable, no matter what the exact
phase of the sine is, where the zeros and peaks fall between the
samples, and the whole theory is theoretically perfectly linear, so we
can use the waveform decomposition theory to build *any* waveform by
adding the proper number of sine waves at their proper amplitudes.

A few problems remain, one being that we'd like to be able to enforce
some sort of modulation, at least some form of amplitude modulation.
That poses the problem of the choice of t=0, the shifting of the
s-transform, transient behavior that possibly isn't frequency limited,
etc, but for the moment, for this experiment, we can make an example of
a simple amplitude modulation of one sine wave with another, which,
computable with the proper Laplace/Fourier theory, leads to a signal
which looks like a repeating wave-"envelope" (AM modulation), and a
spectrum containing the sum and difference frequencies of the sine
components being multiplied. As long as the sum and difference are lower
than Nyquist and >0, we're cool: we have a perfectly frequency-limited
signal, so the perfect reconstruction part of the Sampling Theory states
we can make a perfect analog equivalent signal out of this. Of course,
the waves in principle will have to start at t=-inf, but that leads to
deeper theoretical considerations.

Now, can we do better? Can we make, say, some form of "other" envelope
that is still frequency limited?

T.V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Frequency bounded DSP

2014-01-03 Thread Wen Xue
I have an impression that no band-limited signal can remain constant over 
any duration above zero: a band-limited signal extends to an analytic 
function of time, so if it were constant on an interval it would be 
constant everywhere. If that holds, then one just can't switch it on and 
off and still expect it to stay bounded below Nyquist.


w.x



-Original Message- 
From: Theo Verelst

Sent: Saturday, January 04, 2014 12:46 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Frequency bounded DSP

Well, the questions are good, there's a reason for trying to make a
theoretically bandwidth limited wave and envelope, and if possible also
modulations, or something that resembles modulations.

The "AM" example is random in itself, but indeed says something about
the possibility to modulate a wave, and that there's then as a result an
infinitely correct expression for a wave that can be sampled error free.

Of course, if somehow (magically or not) you are able to say "I have this
function" or "this set of samples", for which you know that "the
spectrum" is limited, there's no reason not to use that, and to multiply
it by an equally known-to-be-limited envelope.

But what I did was make sure all the normal theory holds without
problems, yet without creating a difficult equation. Like I said,
there's nothing wrong with taking *any* number of sine waves, as long as
each individual sine wave is lower than Nyquist, and adding them up with
any phase relation, to create an actually perfectly bandwidth-limited
wave. Good.

But now we switch the wave on at some point. Or we want to determine the
theoretical spectrum of a non-repeating envelope. How can we, preferably
in an elegant and/or simple enough way, get this done? Hint: a "step
function" has an infinite spectrum.

Oh, and another thing: an iFFT creates sine waves, so that's cool. But
for many tonal applications, there's only a pretty limited number of
frequencies that properly "fit" in the bins of the FFT. Surely there are
also "Equal Loudness Curve" and "limiting the mid-range reflections"
criteria to add to the "perfectly re-constructable waves" idea.

T.V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Square-summability for the ideal low pass filter

2014-01-14 Thread Wen Xue
To come back to the math, does the Fourier transform of the ideal lowpass 
filter converge at wc at all?



-Original Message- 
From: Dave Gamble

Sent: Tuesday, January 14, 2014 6:52 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Square-summability for the ideal low pass filter

Didn't mean to start a religious war. ASCII, latex, whatever, same deal.
Anything that can be read in the body of the email by anyone. Attachments
struck me as lazy.

Correct me on my maths, not my hasty advice regarding how to write emails
that are likely to be understood here. Consider my suggestion retracted.

On Monday, 13 January 2014, robert bristow-johnson wrote:


On 1/13/14 11:50 AM, Charles Z Henry wrote:


On Mon, Jan 13, 2014 at 3:24 AM, Dave Gamble wrote:

Don't post MS word files to the list. Learn latex notation and use it as
plain text. Any replies you get will be latex style, as is mine below.




oh?

I do agree that MS word files would be hard to read for some (especially
since MS likes to change those formats every so often and you can't always
count on FOSS tools to read them).  Some others will just be unwilling.

However, not all replies will be latex style.  Dave speaks only for
himself.  It's not a rule of the list.




i dunno if Douglas made the list into anything other than plain text or
not, but i am assuming it's still plain ASCII, as are the USENET newsgroups
like comp.dsp.

i post using "ASCII math" (distant cousin to ASCII art).  i assume the
reader will be reading with a mono-spaced font and i use spaces, not tabs.
so reading any math posted from me should be reasonably apparent.  reading
LaTeX and translating that in my brain is a pain in the arse.

even at the Signal Processing Stack Exchange, where LaTeX support is
provided, it's a real pain-in-arse to set up every reply using the math
pasteup, but i do it.

here, ASCII math is good enough, me thinks.  no one needs to post in LaTeX
and, if you want people to read your math readily, i would not recommend
posting in LaTeX.  but if the topic is important or interesting to me, i
*may* choose to translate the LaTeX in my head.  but i don't wanna do that.



--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Does a "neutral wire" exist

2014-04-08 Thread Wen Xue
Just copy an mp3 file from one drive to another and there you have this 
perfectly neutral wire.

Don't use itunes though.


-Original Message- 
From: Theo Verelst

Sent: Wednesday, April 09, 2014 6:44 AM
To: A discussion list for music-related DSP
Subject: [music-dsp] Does a "neutral wire" exist


Not in the electronics sense, like gold plated, electrically shielded,
with 1/3 of the speed of light and such, but in the digital sense!

I mean, a "wire" would be a digital connection, between software
modules/processes or between digital audio machines, which only
transfers information, preferably with low delay, and no effect on the
information.

Of course this is easy to implement, but think about it: how often does
a program/module/machine offer the option to record and play back, or to
simply transfer the information fed to it, with no resampling, no adding
of blanks, no slight processing where you don't anticipate it (this
happens in audio editing programs), no volume change, etc.?

I came to think of this because I'm very satisfied with the USB-I2C and
I2C-DA converter boards I mentioned here as suitable for use with DSP
(board and software) projects.

T.V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Combining ADCs/DACs to increase bit depth?

2014-04-26 Thread Wen Xue
Of course this is possible but it can be cheaper to skip the analogue part 
and do it purely digitally.



-Original Message- 
From: Joe Farrish

Sent: Saturday, April 26, 2014 12:26 AM
To: A discussion list for music-related DSP
Subject: [music-dsp] Combining ADCs/DACs to increase bit depth?


Combining ADCs/DACs to increase bit depth? I was wondering if anyone has done 
this or if it is even possible.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links

http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp 


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] 20k

2015-08-28 Thread Wen Xue

Tried fast convolution?
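
A 20,000-tap direct FIR at 44.1 kHz is roughly 880 million multiply-adds
per second, which is hopeless on that machine. Block FFT convolution
(overlap-add) cuts this by a few orders of magnitude; a rough numpy sketch
(the block size is arbitrary, and scipy.signal.fftconvolve is the
ready-made version):

import numpy as np

def fastconv_ola(x, h, block=8192):
    # overlap-add FFT convolution: each block of x is convolved with h
    # via the FFT, and the overlapping tails are summed into the output
    n = 1
    while n < block + len(h) - 1:    # FFT size: next power of two that
        n *= 2                       # holds one block's linear convolution
    H = np.fft.rfft(h, n)
    y = np.zeros(len(x) + len(h) - 1)
    for i in range(0, len(x), block):
        seg = np.fft.irfft(np.fft.rfft(x[i:i + block], n) * H, n)
        m = min(n, len(y) - i)
        y[i:i + m] += seg[:m]
    return y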


-Original Message- 
From: Gunnar Eisenberg

Sent: Saturday, August 29, 2015 4:52 AM
To: music-dsp@music.columbia.edu
Subject: [music-dsp] 20k

Dear list,

I'm trying to implement a 20,000-coefficient FIR filter, but for some reason 
it is a bit slow on my system (Pentium III, 500 MHz).


Any suggestions on how to fix my problem?

Have a nice weekend and carry on... :-)

Gunnar

---

Sent on the go...

Gunnar Eisenberg
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp 


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] 20k

2015-08-28 Thread Wen Xue
outlook express

From: b...@bobhuff.com 
Sent: Saturday, August 29, 2015 10:51 AM
To: music-dsp@music.columbia.edu 
Subject: Re: [music-dsp] 20k

What tools are you using?



From: Wen Xue 
To: music-dsp@music.columbia.edu 
Sent: Friday, August 28, 2015 7:10 PM
Subject: Re: [music-dsp] 20k


Tried fast convolution?


-Original Message- 
From: Gunnar Eisenberg
Sent: Saturday, August 29, 2015 4:52 AM
To: music-dsp@music.columbia.edu
Subject: [music-dsp] 20k

Dear list,

I'm trying to implement a 20,000-coefficient FIR filter, but for some reason 
it is a bit slow on my system (Pentium III, 500 MHz).

Any suggestions on how to fix my problem?

Have a nice weekend and carry on... :-)

Gunnar

---

Sent on the go...

Gunnar Eisenberg
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp 

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp






___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] efficient running max algorithm

2016-07-21 Thread Wen Xue

The trick is to count the total number of operations, not the operations per 
incoming sample as the window moves. 

The algorithm maintains a buffer that pushes at the back and pops at both front 
and back. Each sample is pushed onto the buffer and popped out of it exactly 
once. If R samples are popped from the front, then N-R are popped from the 
back. All N pushes are at the back.

Each comparison with an incoming sample leads to either one push or one pop at 
the back. It naturally follows that the total of N-R pops and N pushes at the 
back costs 2N-R comparisons. There are also N yes/no comparisons to determine 
whether to pop a sample from the front as the window moves on.  So yes, the 
total is O(N) regardless of w.
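
A minimal Python sketch of that buffer, as a deque of sample indices (the
names are mine):

from collections import deque

def running_max(x, w):
    dq = deque()                        # indices of a decreasing subsequence
    out = []
    for n, v in enumerate(x):
        while dq and x[dq[-1]] <= v:    # pops at the back, one comparison each
            dq.pop()
        dq.append(n)                    # every sample is pushed exactly once
        if dq[0] <= n - w:              # the yes/no front check per step
            dq.popleft()
        if n >= w - 1:
            out.append(x[dq[0]])        # the front holds the window maximum
    return out

print(running_max([3, 1, 4, 1, 5, 9, 2, 6], 3))   # [4, 4, 5, 9, 9, 9]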

-Xue


From: Ethan Fenn

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] BW limited peak computation?

2016-07-25 Thread Wen Xue
I suggest the cubic spline interpolator. It expresses the underlying function 
as piecewise cubic polynomials, so the maxima/minima can be computed by 
solving the quadratic equations that set the derivative to zero. It is also 
known to be close to ideal sinc interpolation alias-wise. 
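
A sketch with scipy's CubicSpline (the helper name is mine): differentiate
the spline and solve each piece's quadratic for its stationary points.

import numpy as np
from scipy.interpolate import CubicSpline

def spline_peak(x):
    # fit a cubic spline through the samples, then find the true peak by
    # solving the quadratic derivative of each cubic piece
    cs = CubicSpline(np.arange(len(x)), x)
    best = float(np.max(x))
    roots = cs.derivative().roots(extrapolate=False)   # stationary points
    if len(roots):
        best = max(best, float(np.max(cs(roots))))
    return best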

Xue



From: Paul Stoffregen

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp