Re: [music-dsp] Book: The Art of VA Filter Design 2.1.0

2018-11-01 Thread Bjorn Roche
This looks like an amazing resource -- I hadn't seen it before. Thanks for
sharing your knowledge!

bjorn

On Wed, Oct 31, 2018 at 6:20 AM Vadim Zavalishin <
vadim.zavalis...@native-instruments.de> wrote:

> Announcing a small update to the book
>
>
> https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.1.0.pdf
>
> New additions:
> - Generalized ladder filters
> - Elliptic filters of order 2^N
> - Steepness estimation of elliptic shelving filters
>
> Regards,
> Vadim
>
> --
> Vadim Zavalishin
> Reaktor Application Architect
> Native Instruments GmbH
> +49-30-611035-0
>
> www.native-instruments.com
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>

-- 
-
Bjorn Roche
bjornroche.com
@bjornroche
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] ZikiChombo

2018-09-06 Thread Bjorn Roche
This looks interesting, and I took a quick look. Go is the first
garbage-collected language I could see doing professional-level audio
processing, so this is pretty cool.

Just a question on your dependency graph: Why does DSP depend on Plugins?

On Thu, Sep 6, 2018 at 9:25 AM, Scott Cotton  wrote:

> Hi all,
>
> I wanted to announce a new open source sound processing and i/o project
> for the
> Go programming language, at http://zikichombo.org.
>
> We're just starting and looking for interested people to engage,
> contribute, and help guide the project to being as useful as possible.
>
> For music dsp tech, we have only some basic dsp functionality in place so
> far, but we've also got a lot on our radar and in our plans that may be of
> interest here.
>
> Best,
> Scott
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] glitch free looping

2017-02-10 Thread Bjorn Roche
On Fri, Feb 10, 2017 at 11:16 AM, Martin Hermant
 wrote:

> @bjorn :
> if I use some fading, I think I'm leaning toward crossfading to avoid
> silent parts.
> This 1/60th-of-a-second "gap" is audible, right?

I believe I assumed you were talking about looping rhythms, and now
I'm not sure that's true. The solution I described is something I
found worked well for many sources and was very simple, but it depends
a lot on what you are trying to do. As r b-j alluded, the rabbit hole
can get very deep, especially if you are talking about loop points for
notes or something like that. If the 1/60th-second dropout is likely
to matter for your application, then my solution is useless :(

bjorn
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] glitch free looping

2017-02-10 Thread Bjorn Roche
I've done this before. I can't recall for sure how I solved it, but,
off the top of my head, this is what worked:

Zero crossings for the start points and fade-outs for the end points using
a cosine or s-curve work well. There are small problems: finding exactly
the right fade-out time without user interaction may be tricky, but
I recall it being shorter than expected (perhaps 1/60th of a second?).
I don't recall ever having to program any sort of "intelligence" into
that. To ensure the zero crossing always works, you may want to
high-pass the incoming audio in case there's a strong DC offset.

I'm sure other techniques work, like a fade-in instead of a zero crossing
for the start point, but then you have to make sure you don't weaken the
initial transient, which is very important in a loop, or pick up too much
of the sound before the initial transient.
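
To make the fade-out concrete, here's a minimal sketch in C (names are
illustrative, and the 1/60th-of-a-second figure is just the rough value
mentioned above):

#include <math.h>

/* Apply an equal-gain raised-cosine fade-out to the last fadeLen samples
   of a mono loop; fadeLen might be something like sampleRate / 60. */
static void fade_out_tail(float *loop, int loopLen, int fadeLen)
{
    const float PI = 3.14159265f;
    if (fadeLen > loopLen) fadeLen = loopLen;
    if (fadeLen < 2) return;
    for (int i = 0; i < fadeLen; i++) {
        /* gain runs smoothly from 1 at the fade start to 0 at the loop end */
        float g = 0.5f * (1.0f + cosf(PI * (float)i / (float)(fadeLen - 1)));
        loop[loopLen - fadeLen + i] *= g;
    }
}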


On Fri, Feb 10, 2017 at 9:56 AM, Martin Hermant
 wrote:
> hi guys,
> we are currently developing an audio looper (soon to be released in 
> opensource!)
> I’m facing the classical textbook case problem of glitch/pop free looping
>
> Problematic:
> loops have a defined number of samples (can’t change as it is bound to tempo)
> looping point need to be glitch free (no pop due to sudden sample variation 
> between end and beginning of loop)
> need to keep simple playback code ,i.e best if the buffer is processed once 
> then read « normally » (with less possible conditional cases handling these 
> pops in audio callback)
>
> fades in / out are a simple solution but create fake transients at loop points
> on sustained sounds
>
> Solution implemented so far:
> -classic fade in fade out
> - zero pad before and after first and last zero-crossing -> still got pops if
> no fade
>
> Solution I can think about :
> fade in beginning of loop before end point (complexifies playback code, 
> different case when loop need to stop at end)
> some sort of stretching ensuring sample continuity
>
>
> anyone can point me to some useful resources on this?
> any advice or neat techniques?
>
>
> Thanks
>
> Martin
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp



-- 
-
Bjorn Roche
bjornroche.com
@bjornroche
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Recognizing Frequency Components

2017-01-26 Thread Bjorn Roche
On Thu, Jan 26, 2017 at 2:40 PM, Martin Klang  wrote:

> try putting &lt; instead of the less-than sign.
>
That's exactly what I did last time. My impression is that Blogger doesn't
have a canonical data representation.


> On 26/01/17 19:28, Bjorn Roche wrote:
>
> On Thu, Jan 26, 2017 at 2:09 PM, Alan Wolfe  wrote:
>
>> It's some HTML filtering happening somewhere between (or including) his
>> machine and yours.
>>
>>
> It's Blogger. I've fixed this before and apparently it comes back :(. I
> think Google's more or less abandoned Blogger. I'd switch to something else
> if I ever blogged anymore.
>
> bjorn
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Recognizing Frequency Components

2017-01-26 Thread Bjorn Roche
On Thu, Jan 26, 2017 at 2:09 PM, Alan Wolfe  wrote:

> It's some HTML filtering happening somewhere between (or including) his
> machine and yours.
>
>
It's Blogger. I've fixed this before and apparently it comes back :(. I
think Google's more or less abandoned Blogger. I'd switch to something else
if I ever blogged anymore.

bjorn


-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Recognizing Frequency Components

2017-01-26 Thread Bjorn Roche
I wrote a blog post a while ago about how to use the FFT to find the pitch
of an instrument. As I mention in the post, this is hardly the best way,
but I think it's suitable for many applications. For example, you could
write a perfectly serviceable guitar tuner with it.

The post links to code and includes some discussion of specific issues of
time/frequency resolution and so on.

I've been wanting to write about other methods, but... maybe when I retire
:)

http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html
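
The core of the approach fits in a few lines. Here's a sketch of just the
peak-picking step (the windowing and the FFT itself are left to a library;
names are illustrative):

/* Given the magnitude spectrum of an n-point FFT at sample rate fs,
   return the frequency of the strongest bin. The resolution is fs / n,
   which is the time/frequency trade-off discussed in the post. */
static float peak_frequency(const float *mag, int n, float fs)
{
    int peak = 1;                      /* start at bin 1, skipping DC */
    for (int i = 2; i < n / 2; i++)    /* only bins below Nyquist */
        if (mag[i] > mag[peak]) peak = i;
    return (float)peak * fs / (float)n;
}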

On Thu, Jan 26, 2017 at 12:36 PM, Evan Balster  wrote:

> Philosophy rant:  Frequency is a model.  You can use tools that build on
> that model to describe your signal in terms of frequency, but none of them
> are going to be perfect.  A pure 10hz tone is a mathematical abstraction
> which you'll not find in any digital signal or measurable phenomenon.
> But, *ooh boy!* is that abstraction useful for modeling real things.
>
> If you have an extremely clean signal and you want an extremely accurate
> measurement, my recommendation is to forgo fourier transforms (which
> introduce noise and resolution limits) and use optimization or measurement
> techniques in the time domain.  In your example, *zero crossings are the
> easiest and best solution* as Steffan suggests.
>
> Another interesting approach, which I mention for scholarly purposes,
> would be to design a digital filter with a sloping magnitude response (even
> the simplest one-pole lowpass could do) and apply it across the signal.
> You can measure the change in the signal's power (toward the end, because
> the sudden beginning of a sine wave produces noise) and find the frequency
> for which the filter's transfer function produces that attenuation.  This
> filter-based technique (and related ones) can generalize to other problems
> where zero-crossings are less useful.
>
> – Evan Balster
> creator of imitone <http://imitone.com>
>
> On Thu, Jan 26, 2017 at 9:20 AM, STEFFAN DIEDRICHSEN 
> wrote:
>
>> At that length, you can count zero-crossings. But that’s not a valid
>> answer, I’d assume.
>> But I found a nice paper on determining frequencies with FFTs using a
>> gaussian window.  Pretty accurate results.
>>
>> Best,
>>
>> Steffan
>>
>>
>> On 26.01.2017|KW4, at 15:24, Theo Verelst  wrote:
>>
>> Say the sample length is long enough for any purpose, like 10 seconds.
>>
>>
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Can anyone figure out this simple, but apparently wrong, mixing technique?

2016-12-12 Thread Bjorn Roche
On Sat, Dec 10, 2016 at 6:35 PM,  wrote:

> >>Message: 1
> >>Date: Sat, 10 Dec 2016 14:31:37 -0500
> >>From: "robert bristow-johnson" 
> >>To: music-dsp@music.columbia.edu
> >>Subject: [music-dsp] Can anyone figure out this simple, but apparently
> >>  wrong, mixing technique?
> >
> >>it's this Victor Toth article:
> >>http://www.vttoth.com/CMS/index.php/technical-notes/68
> >>and it doesn't seem to make sense to me.
> >>
> >>it doesn't matter if it's 8-bit offset binary or not, there should not
> >>be a multiplication of two signals in the definition.
> >>i cannot see what i am missing. can anyone enlighten me?
>
> Search for "automixer". The author is not mixing individual samples, he
> is using observed signal magnitudes (that have time constants associated
> with them) to determine desired signal magnitudes, and from those
> desired magnitudes he is calculating channel gains.
>
> At least I hope that's what he's doing.
>

I've seen people reference this article on StackOverflow. Regardless of
intention, it seems like it is causing some confusion. Here's a reference
that seems illuminating:

https://stackoverflow.com/questions/32019246/how-to-mix-pcm-audio-sources-java
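
For contrast, plain additive mixing (which, as I understand them, is what
the answers there generally recommend) is just a sum, with scaling or
clipping to stay in range. A minimal sketch for two float samples in
[-1, 1]:

static float mix2(float a, float b)
{
    float s = a + b;
    /* hard limit; in practice you'd scale down or soft-clip instead */
    if (s >  1.0f) s =  1.0f;
    if (s < -1.0f) s = -1.0f;
    return s;
}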

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread Bjorn Roche
On Fri, Sep 16, 2016 at 1:30 PM, Spencer Jackson 
wrote:

> On Fri, Sep 16, 2016 at 11:24 AM, gm  wrote:
> > Did you consider a reverb or an FFT time stretch algorithm?
> >
>
> I haven't looked into an FFT algorithm. I'll have to read up on that,
> but what do you mean with reverb? Would you feed the loop into a
> reverb or apply some reverberant filter before looping?
>

IIRC, the original Freeverb had a "hold" or "freeze" function that worked
well. I believe it worked by setting each individual comb filter to 100%
feedback.

Freeverb worked surprisingly well considering its simplicity. The magic
had more to do with the hand-tuning of the parameters than anything else,
so if a simple reverb is an approach you want to take, I suggest starting
by copying the original parameters, which you can find here:

https://ccrma.stanford.edu/~jos/pasp/Freeverb.html
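
To sketch the freeze idea (this is not Freeverb's actual code; names are
illustrative): when frozen, the comb's feedback is forced to 1.0 and its
input is muted, so whatever is in the delay line recirculates forever.
Note that Freeverb's combs also have a damping lowpass in the loop, which
would likewise need to be bypassed while frozen:

typedef struct {
    float *buf;      /* delay line of length len */
    int    len, pos;
    float  feedback; /* normally below 1.0 */
    int    frozen;
} Comb;

static float comb_process(Comb *c, float in)
{
    float out = c->buf[c->pos];
    float fb  = c->frozen ? 1.0f : c->feedback;
    c->buf[c->pos] = (c->frozen ? 0.0f : in) + out * fb;
    if (++c->pos >= c->len) c->pos = 0;
    return out;
}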

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Wavelet Matching

2016-09-05 Thread Bjorn Roche
One open source example is:

https://acoustid.org

I thought there was at least one more, but I can't find it, so maybe that's
it.

Roughly speaking, and off the top of my head, most of these algorithms work
in three steps:

1. A time/frequency analysis method (STFT, filter bank, etc) to find
"peaks" in various frequencies.
2. Some clever methods of robustly and uniquely tagging the relationships
between the "peaks" found in step #1.
3. A simple database of tags as keys and songs/recordings as values.

I don't know how well this would work for the speech example.
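
As a toy illustration of step 2, loosely modeled on the landmark hashing
in the Shazam patent linked below (the field widths here are made up): two
peak frequencies, as FFT bin numbers, and the time gap between them are
packed into one integer, which then keys the database in step 3.

#include <stdint.h>

static uint32_t landmark_hash(uint32_t f1Bin, uint32_t f2Bin,
                              uint32_t dtFrames)
{
    /* 12 bits per frequency bin, 8 bits of time delta -- illustrative */
    return ((f1Bin & 0xFFF) << 20) | ((f2Bin & 0xFFF) << 8) | (dtFrames & 0xFF);
}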

As for wavelets, conceivably, step 1 could employ wavelets, but I haven't
seen it. I can think of reasons why wavelets would be a poor choice for
that sort of analysis in music, but perhaps it makes more sense for speech.
Anyway, I'll let someone more expert comment on that.

If you want to read more, patents are the most readable source I've seen on
the subject. Here's the Shazam patent:
https://www.google.com/patents/US7627477



On Mon, Sep 5, 2016 at 3:23 AM, Andy Farnell 
wrote:

> Wavelets are not necessarily a part of this algorithm. The key components
> are understanding;
>
>  hashing
>  sparse arrays
>  red black tree search
>
> As a starting lead you could begin searching on:
> "MIR Plumbley Abdallah Fujihara Klapuri"
>
> cheers,
> Andy
>
> On Mon, Sep 05, 2016 at 02:18:50AM +, Michael Feldman wrote:
> > Hello All Music DSP members,
> > I am SQL developer and I am interested to learn about the music
> algorithms.  Thank you for letting me join the email list and I discovered
> the archives so I will be sifting those!
> > I am researching models and I was interested to know if there were
> previous open source algorithms similar to Shazam that could be brought to
> the public?
> > I would like to find/create open source version of audio search within a
> population audio set.  The goal is to have a find function. It will be able
> to submit a population MP3 file. And small MP3 sample file that is short.
> The delivery system will find times in population that the original sample
> occurred.
> > So for example if you submit "I have a dream" speech from start to end
> as the population. And then submit one of the times he said word "dream" so
> the output will show the time that exact pronunciation was sampled and
> other times that sound (word) was used as slightly deviated when it was
> repeated and calculate deviations.
> >
> > Thank You,
> > Michael
>
> > ___
> > dupswapdrop: music-dsp mailing list
> > music-dsp@music.columbia.edu
> > https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
--
-
Bjorn Roche
bjornroche.com
@bjornroche
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Faster Fourier transform from 2012?

2016-08-22 Thread Bjorn Roche
A guy on the linkedin DSP group posted about "System Compression" as well,
which is related to compressive sensing. I haven't looked into it at all,
but here's the link:

https://www.linkedin.com/groups/144812/144812-6155834561751191554

In case you can't access that link, he doesn't give much info about how
System Compression works, but he links variously to this stuff:

http://qualvisual.net/Rapid.php
https://arxiv.org/ftp/arxiv/papers/1311/1311.5831.pdf
http://dsp.rice.edu/cs
http://dsp.rice.edu/sites/dsp.rice.edu/files/cs/CSintro.pdf


On Mon, Aug 22, 2016 at 10:06 AM, Alan Wolfe  wrote:

> Thanks for the info, very interesting! (:
>
> On Sun, Aug 21, 2016 at 8:34 PM, Ross Bencina 
> wrote:
>
>> [Sorry about my previous truncated message, Thunderbird is buggy.]
>>
>> I wonder what the practical musical applications of sFFT are, and whether
>> any work has been published in this area since 2012?
>>
>>
>> > http://groups.csail.mit.edu/netmit/sFFT/hikp12.pdf
>>
>> Last time I looked at this paper, it seemed to me that sFFT would
>> correctly return the highest magnitude FFT bins irrespective of the
>> sparsity of the signal. That could be useful for spectral peak-picking
>> based algorithms such as SMS sinusoid/noise decomposition and related
>> pitch-tracking techniques. I'm not sure how efficient sFFT is for "dense"
>> audio vectors however.
>>
>>
>> More generally, Compressive Sensing was a hot topic a few years back.
>> There is at least one EU-funded research project looking at audio-visual
>> applications:
>> http://www.spartan-itn.eu/#2|
>>
>> And Mark Plumbley has a couple of recent co-publications:
>> http://www.surrey.ac.uk/cvssp/people/mark_plumbley/
>>
>> No doubt there is other work in the field.
>>
>> Cheers,
>>
>> Ross.
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] R: Anyone using unums?

2016-04-15 Thread Bjorn Roche
On Fri, Apr 15, 2016 at 11:09 AM, Alan Wolfe  wrote:

> They aren't full sized lookup tables but smaller tables. There are
> multiple lookups ORd together to get the final result.
>
Ah, I see. I was confused by the relationship between the ORs and the LUTs.

> I don't understand them fully yet, but I ordered his book and am going to
> start trying to understand them and make some blog posts with working
> example C code.  I'll share with the list (:
>
Looking forward. Personally, I find this interesting even if there's no
immediate application.

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] R: Anyone using unums?

2016-04-15 Thread Bjorn Roche
I can see this being applicable to:

- GPUs, especially embedded GPUs on mobile, where low precision floats are
super useful, and exact conformity to IEEE isn't (at least, I don't think
conformity is part of any of the usual specs, but it may be)
- Storage (I have an application now that could benefit from some sane
low-precision floats. We are considering IEEE half floats -- yuck!)

However, I am confused by the arithmetic. Is the author seriously proposing
that all arithmetic be done by LUTs, or am I misunderstanding something?
It seems like a joke, since he literally compares it to "Can't add, doesn't
even try". For small precisions, LUTs seem workable, but if you have to
fetch a number from a large LUT for every operation, you can't really do
that in one clock tick, since, in practice, you have to go off-die. (For
scale: a full table for a single 32-bit by 32-bit operation would need 2^64
entries.) Anyway, if LUTs made sense for, say, 32-bit floating-point math,
couldn't we also use LUTs for current IEEE floats?

Still, even if this is utter nonsense, I'm glad to see someone rethinking
floats on a fundamental level.

On Fri, Apr 15, 2016 at 10:38 AM, Ethan Fenn  wrote:

> Sorry, you don't need 2^256 bits, my brain was just getting warmed up and
> I got ahead of myself there. There are 2^256 different SORNs in this
> scenario and you need 256 bits to represent them all. But the point stands
> that if you actually want good precision (2^32 different values, for
> instance), the SORN concept quickly becomes untenable.
>
> -Ethan
>
>
>
> On Fri, Apr 15, 2016 at 9:03 AM, Ethan Fenn 
> wrote:
>
>> I really don't think there's a serious idea here. Pure snake oil and
>> conspiracy theory.
>>
>> Notice how he never really pins down one precise encoding of unums...
>> doing so would make it too easy to poke holes in the idea.
>>
>> For example, this idea of SORNs is presented, wherein one bit represents
>> the presence or absence of a particular value or interval. Which is fine if
>> you're dealing with 8 possible values. But if you want a number system that
>> represents 256 different values -- seems like a reasonable requirement to
>> me! -- you need 2^256 bits to represent a general SORN. Whoops! But of
>> course he bounces on to a different topic before the obvious problem comes
>> up.
>>
>> -Ethan
>>
>>
>>
>> On Fri, Apr 15, 2016 at 4:38 AM, Marco Lo Monaco <
>> marco.lomon...@teletu.it> wrote:
>>
>>> I read his slides. Great ideas but the best part is when he challenges
>>> Dr. Kahan with the star trek trasing/kidding. That made my day.
>>> Thanks for sharing Alan
>>>
>>>
>>>
>>> Inviato dal mio dispositivo Samsung
>>>
>>>
>>>  Messaggio originale 
>>> Da: Alan Wolfe 
>>> Data: 14/04/2016 23:30 (GMT+01:00)
>>> A: A discussion list for music-related DSP 
>>>
>>> Oggetto: [music-dsp] Anyone using unums?
>>>
>>> Apologies if this is a double post.  I believe my last email was in
>>> HTML format so was likely rejected.  I checked the list archives but
>>> they seem to have stopped updating as of last year, so posting again
>>> in plain text mode!
>>>
>>> I came across unums a couple weeks back, which seem to be a plausible
>>> replacement for floating point (pros and cons to it vs floating
>>> point).
>>>
>>> One interesting thing is that addition, subtraction,
>>> multiplication and division are all single flop operations and are on
>>> "equal footing".
>>>
>>> To get a glimpse, to do a division, you do a 1s complement type
>>> operation (flip all bits but the first 1, then add 1) and you now have
>>> the inverse that you can do a multiplication with.
>>>
>>> Another interesting thing is that you have different accuracy
>>> concerns.  You basically can have knowledge that you are either on an
>>> exact answer, or between two exact answers.  Depending on how you set
>>> it up, you could have the exact answers be integral multiples of some
>>> fraction of pi, or whatever else you want.
>>>
>>> Interesting stuff, so i was curious if anyone here on the list has
>>> heard of them, has used them for dsp, etc?
>>>
>>> Fast division and the lack of denormals seem pretty attractive.
>>>
>>> http://www.johngustafson.net/presentations/Multicore2016-JLG.pdf
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>>
>>>
>>> ___
>>> dupswapdrop: music-dsp mailing list
>>> music-dsp@music.columbia.edu
>>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>>
>>
>>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-01 Thread Bjorn Roche
On Tue, Mar 1, 2016 at 11:12 AM, Phil Burk  wrote:

> I use biquads in JSyn. The coefficients are calculated using RBJ's
> excellent biquad cookbook from the music-dsp archives.
>
> I have found that I can recalculate and update the filter coefficients on
> the fly without unpleasant artifacts. I do NOT zero out or modify the
> internal state variables. I should think that setting them to zero would
> sound pretty bad.
>
> Generally the filter settings are changed gradually using a knob or driven
> by an LFO or envelope.
>

This last point is the key -- as long as you change the parameters slowly,
you are *probably* okay simply updating them. However, if your parameters
are subject to rapid changes, you need another solution. You can't get away
with simply updating the parameters in a DAW with automation, for example.

bjorn

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-01 Thread Bjorn Roche
This is a common and well-researched problem. The two usual solutions are:

1. Cross-fade between two filter settings (this actually works reasonably
well).
2. Use a filter architecture that is guaranteed to be stable for
intermediate states (e.g., I believe lattice filters have this property,
but I'm a bit rusty).
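
Here's a minimal sketch of option 1, assuming a standard direct form 1
biquad (names are illustrative):

typedef struct {
    float b0, b1, b2, a1, a2;  /* coefficients, e.g. from the RBJ cookbook */
    float x1, x2, y1, y2;      /* direct form 1 state */
} Biquad;

static float biquad_process(Biquad *f, float x)
{
    float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
            - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1; f->x1 = x;
    f->y2 = f->y1; f->y1 = y;
    return y;
}

/* Run the old and new filters in parallel over one buffer and crossfade
   between their outputs; afterwards, keep only the new filter. */
static void crossfade_filters(Biquad *oldF, Biquad *newF,
                              const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++) {
        float t = (float)i / (float)n;   /* ramps 0 -> 1 across the buffer */
        out[i] = (1.0f - t) * biquad_process(oldF, in[i])
               + t * biquad_process(newF, in[i]);
    }
}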

bjorn


On Tue, Mar 1, 2016 at 9:56 AM, Paul Stoffregen  wrote:

> Does anyone have any suggestions or publications or references to best
> practices for what to do with the state variables of a biquad filter when
> changing the coefficients?
>
> For a bit of background, I implement a Biquad Direct Form 1 filter in this
> audio library.  It works well.
>
> https://github.com/PaulStoffregen/Audio/blob/master/filter_biquad.cpp#L94
>
> There's a function which allows the user to change the 5 coefficients.
> Lines 94 & 95 set the 4 filter state variables (which are 16 bits, packed
> into two 32 bit integers) to zero.  I did this clear-to-zero out of an
> abundance of caution, for concern (maybe paranoia) that a stable filter
> might do something unexpected or unstable if the 4 state variables are
> initialized with non-zero values.
>
> The problem is people wish to change the coefficients in real time with as
> little audible artifact as possible between the old and new filter
> response.  Clearing the state to zero usually results in a very noticeable
> click or pop sound.
>
> https://github.com/PaulStoffregen/Audio/issues/171
>
> Am I just being overly paranoid by setting all 4 state variables to zero?
> If "bad things" could happen, are there any guidelines about how to manage
> the filter state safely, but with with as graceful a transition as possible?
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>


-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] MIDI Synth Design Advice Please

2016-02-01 Thread Bjorn Roche
On Mon, Feb 1, 2016 at 10:10 AM, Scott Gravenhorst 
wrote:



>  I've also been experimenting with threads and CPU affinity as well as
> isolcpu to isolate cores.  My assumption (which could be incorrect) is that
> isolated cores will run at near bare metal efficiency because the
> interrupts from random devices and other mundane kernel tasks will be
> handled by the core or cores left for the kernel's use and that the clocks
> of the isolated core or cores can be used to generate samples with more
> time deterministic properties than would be without isolated cores.
>

I have no idea if that will work or not, but it sounds like you are
thinking about the right things. I'll just note that some operating systems
make specific information available about how long it takes them to do
things like handle an interrupt and return execution to a high-priority,
user-domain process (I'm thinking of "real-time" operating systems like QNX
and Irix). This can be useful for determining how large your buffers need
to be, even if your cores are interrupted, and even if you need to handle
unexpected situations. I don't know if Linux does this, but since you are
running on fixed hardware, you should be able to determine optimal buffer
sizes and so on through testing.

bjorn

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Anyone using Chebyshev polynomials to approximate trigonometric functions in FPGA DSP

2016-01-19 Thread Bjorn Roche
Years ago I attended a talk on Chebyshev polynomials, and someone asked if
they could be used to approximate trig functions. My memory is hazy at
best, but here's what I recall: the answer was something like "you could,
but it would be very slow, so in practice I think that would be a bad
idea". Someone else said that the CORDIC algorithm is what's used in pocket
calculators for approximating trig functions. I remember him going on to
say something like "Unfortunately, the CORDIC algorithm is mind-numbingly
boring and Chebyshev polynomials are really interesting".

Well, to each his own. Perhaps you'll enjoy CORDIC:

https://en.wikipedia.org/wiki/CORDIC
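
For the curious, here's a floating-point sketch of the CORDIC iteration
(a real implementation uses fixed point, shifts instead of multiplies, and
a precomputed arctangent table):

#include <math.h>
#include <stdio.h>

/* Rotation-mode CORDIC: computes sin and cos for |theta| <= pi/2. */
static void cordic_sincos(double theta, int iters, double *s, double *c)
{
    double x = 1.0, y = 0.0, z = theta;
    double pow2 = 1.0;  /* 2^-i */
    double gain = 1.0;
    for (int i = 0; i < iters; i++) {
        double d  = (z >= 0.0) ? 1.0 : -1.0;  /* rotate toward z == 0 */
        double xn = x - d * y * pow2;
        y = y + d * x * pow2;
        x = xn;
        z -= d * atan(pow2);  /* a table lookup in real implementations */
        gain *= sqrt(1.0 + pow2 * pow2);  /* each rotation scales slightly */
        pow2 *= 0.5;
    }
    *c = x / gain;  /* undo the accumulated CORDIC gain */
    *s = y / gain;
}

int main(void)
{
    double s, c;
    cordic_sincos(0.5, 24, &s, &c);
    printf("%f %f (vs %f %f)\n", s, c, sin(0.5), cos(0.5));
    return 0;
}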


On Tue, Jan 19, 2016 at 1:56 PM, Alan Wolfe  wrote:

> Chebyshev is indeed a decent way to approximate trig from what I've read. (
> http://www.embeddedrelated.com/showarticle/152.php)
>
> Did you know that rational quadratic Bezier curves can exactly represent
> conic sections, and thus give you exact trig values?  You essentially
> divide one quadratic Bezier curve by another, with specifically calculated
> weights.  Fairly simple and straightforward stuff.  Not sure if the
> division is a problem for you mapping it to circuitry.
> http://demofox.org/bezquadrational.html
>
> Video cards use a handful of terms of taylor series, so that might be a
> decent approach as well since it's used in high end production circuitry.
>
>
> On Tue, Jan 19, 2016 at 10:05 AM, Theo Verelst 
> wrote:
>
>> Hi all,
>>
>> Maybe a bit forward, but hey, there are PhDs here, too, so here it goes:
>> I've played a little with the latest Vivado HLx design tools for Xilinx
>> FPGAs and the cheap Zynq implementation I use (a Parallella board), and I
>> was looking for interesting examples to put in the C-to-chip compiler that I
>> can connect over the AXI bus to a Linux program running on the ARM cores in
>> the Zynq chip.
>>
>> In other words, computations and manipulations with additions, multiplies
>> and other logical operations (say of 32 bits) that compile nicely to for
>> instance the computation of y=sin(t) in such a form that the Silicon
>> Compiler can have a go at it, and produce a nice relative low-latency FPGA
>> block to connect up with other blocks to do nice (and very low latency) DSP
>> with.
>>
>> Regards,
>>
>>  Theo V.
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
>>
>>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] automation of parametric EQ .

2015-12-21 Thread Bjorn Roche
On Mon, Dec 21, 2015 at 6:46 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
> regarding Pro Tools (which i do not own and haven't worked with since 2002
> when i was at Wave Mechanics, now called SoundToys), please take a look at
> this blog:
>
>  http://www.avidblogs.com/pro-tools-11-analog-console/
>
> evidently, for a single channel strip, there is a volume slider, but no
> "built-in" EQ, like in an analog board.  you're s'pose to insert EQ III or
> something like that.
>

Some DAWs are like that, while others have EQs built in.


> now in the avid blog, words like these are written: "... which of the 20
> EQ plug-ins should I use?... You can build an SSL, or a Neve, ..., Sonnox,
> McDSP, iZotope, MetricHalo..."
>
> so then, in your session, you mix some kinda nice sound, save all of the
> sliders in PT automation and then ask "What would this sound like if I used
> iZ instead of McDSP?", can you or can you not apply that automation to
> corresponding parameters of the other plugin?  i thought that you could.
>

I've never seen anything like that. I wonder if the industry even wants
this. Right now, if I build a protools (or other DAW) session and want to
share it with you, you have to have all the plugins I used in the session.
That's another sale for the plugin company -- unless you could substitute
other plugins easily. There are, of course, workarounds, like "freezing" a
track and so on.


> if that is the case, then, IMO, someone in some standards committee at
> NAMM or AES or something should be pushing for standardization of some
> *known* common parameters.
>

I don't really see how that would be possible in the general case. How would
you map company A's 4-band parametric that also has high and low shelves to
company B's 5-band parametric that has no shelves but an air band? What if
one company offers a greater range for Q than another? Plugins are
supposed to be as unique as possible. That's the point.

this, on top of the generalization that Knud Bank Christensen did last
> decade (which sorta supersedes the Orfanidis correction to the digital
> parametric EQ), really nails the specification problem down:  whether it's
> analog or digital, if it's 2nd-order (and not some kinda FIR EQ), then
> there are 5 knobs corresponding to 5 coefficients that *fully* define the
> frequency response behavior of the EQ.  those 5 coefficient knobs can be
> mapped to 5 parameter knobs that are meaningful to the user.


 Can you send a reference to Christensen's work that you are referring to?

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] automation of parametric EQ .

2015-12-20 Thread Bjorn Roche
I don't believe any of this stuff is standardized in any meaningful way.
Some comments below:

On Sat, Dec 19, 2015 at 11:16 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

> can anyone point me where to find the technical information regarding how
> automation regarding control settings might be defined, particularly
> regarding the 3-knob parametric EQ.
>
One published non-standard is the set of plugins published by Apple for Core Audio:
https://developer.apple.com/library/mac/documentation/MusicAudio/Conceptual/CoreAudioOverview/SystemAudioUnits/SystemAudioUnits.html#//apple_ref/doc/uid/TP40003577-CH8-SW2

For the Apple plugins, the intent is to provide some basic plugins for
developers. For example, a game programmer might want to add some reverb to
a level that takes place in a cement room. To do this, they can use the
built-in reverb plugin and access the built-in parameters using certain
run-time constants. These "standards" are more for the convenience of
programmers who want to have a runtime constant to access specific
parameters -- they are not intended for other reverb manufacturers to use
the same parameters in their own plugin, or for other reverbs to act as a
drop-in replacement for Apple's reverb. It's been years since I read the
full documentation, but I think apple even discourages this.

> first, other than MIDI, with automation data stored as a MIDI files, how
> else is this information stored?  like how does Pro Tools store the data?
>  essentially, i am curious exactly how a Pro Tools session (or whatever
> they call the file) stores the automation data for a 3-knob parametric EQ.
>  like exactly how is the Q or bandwidth defined.
>

Generally speaking, a program like Pro Tools (the "host app") will open a
plugin and ask for a list of parameters. The plugin will respond with
something like "these are my parameters, and they have the following basic
properties (id, display name, min, max, default values, etc.)". Sometimes
they supply more info, like "this parameter has a unit of dB/Hz/whatever
and is measured on a log/linear scale". Theoretically, they supply enough
info for the host app to build a UI without needing the plugin to do so.
Well-built plugins do this, but most commercial plugins aren't
"well-built" in that sense.

For parametric EQs, there are no standards that I'm aware of. It's up to
the manufacturer of the plugin to decide what parameters they want and how
they want them to work. Each plugin can define its own min/max/default/etc.
for things like center frequency, gain, and Q/bandwidth. I've seen plugins
with Q defined as radio buttons, where you can pick one of five values, but
most new ones let you select it with a slider of some sort. It's possible
that in some plugin formats the plugin can tell the host "hey, this
parameter is a Q for a parametric EQ", but I'm not aware of that feature.

With that in mind, it's up to the app to store the info it gathers from the
plugin however it wants. Usually this will be as a set of initial values
followed by a list of automation changes. That data could be ported from
ProTools to another host app (and there are commercial tools that do this),
but you would need the same plugins on both systems. If you wanted to
convert from one plugin to another, you would have to understand the
specifics of each plugin. I am not aware of a program or system that
translates from one plugin to another, but there might be something that
does this.

> second, specifically, how is this data defined in a MIDI file?
>  specifically which MIDI controls (the control numbers) are, by default,
> assigned to parametric EQ?  how many can a single MIDI channel have?
>
and then how is the scaling defined?  presumably 0x40 would represent the
> middle of the knob range.  where does that go for f0?  and for Q?  i
> presume 0x40 would correspond to a dB gain of 0.
>

I don't recall any MIDI standards about this. If there are any, I doubt
anybody abides by them. That said, I'm not really up on all the areas where
this would be relevant (e.g. midi controllers might have a use for this),
so it's very possible that something exists.

> has anyone used the same MIDI file for automatic control of EQ and
> translated that MIDI file from one machine (or plugin) to another?
>
I don't believe this is possible in the general case, if at all. To the
best of my limited recollection, a lot of that information would be sent as
SYSEX-type messages, not as more portable CC-type messages.

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-20 Thread Bjorn Roche
On Sun, Sep 20, 2015 at 10:21 AM, Andrew Kelley 
wrote:

> On Fri, Sep 4, 2015 at 11:47 AM Andrew Kelley 
> wrote:
>


> A ringbuffer introduces a buffer's worth of delay. Not good for
>>> applications that require low latency. A DAW would be a better example
>>> than a reverb. No low-latency monitoring with this arrangement.
>>>
>>
>> I'm going to look carefully into this. I think you brought up a potential
>> flaw in the libsoundio API, in which case I'm going to figure out how to
>> address the problem and then update the API.
>>
>
> I think you are right that duplex streams is a missing feature from
> libsoundio's current API. Upon reexamination, it looks like it is possible
> to support duplex streams on each backend.
>

This will be a boon for libsoundio!


> I noticed that PortAudio's API allows one to open a duplex stream with
> different stream parameters for each device. Does it actually make sense to
> open an input device and an output device with...
>
>  * ...different sample rates?
>

PA certainly doesn't support this. You might have two devices open at one
time (one for input and one for output) and they might be running at
separate sample rates, but the stream itself will only have one sample rate
-- at least one device will be SR-converted if necessary.


>  * ...different latency / hardware buffer values?
>

PA probably only uses one of the two values in at least some situations
like this. In fact, on OS X (and possibly on other APIs), the latency
parameter is often ignored completely anyway (or at least it was when I
last looked at the code).


>  * ...different sample formats?
>

I don't think this is of much use to many people (anybody?). If it is, I
don't think the person who needs it would complain too much about a few
extra lines of conversion code, but maybe I'm wrong.

bjorn

-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-04 Thread Bjorn Roche
This looks like a very nice effort!

On Fri, Sep 4, 2015 at 1:58 PM, Andrew Kelley  wrote:

> On Fri, Sep 4, 2015 at 10:43 AM Ian Esten  wrote:
>
>> And an observation: libsoundio has a read and a write callback. If I was
>> writing an audio program that produced output based on the input (such as a
>> reverb, for example), do I have any guarantee that a write callback will
>> only come after a read callback, and that for every write callback there is
>> a read callback?
>>
>
> I don't think that every sound driver guarantees this. I see that
> PortAudio supports this API but I think they have to do additional
> buffering to accomplish it in a cross platform manner.
>

I wrote a lot of the PortAudio code for OS X. I don't recall exactly (it
was a long time ago), but I'm pretty sure the only reason I had to use
multiple callbacks and "link" them into one PortAudio callback was to
support some very odd case, like input from one device and output to
another (even there, I think it was fine unless you did something weird,
like SR converting only one of them or something). I recall some discussion
about my having to do this that indicated that no one else had needed to do
it before.

I've worked with a few native APIs and PortAudio, and I don't think any API
natively has problems with a single callback for read and write as long as
you are using the same device. With different devices, of course, all bets
are off. I could be mistaken -- it's been a while, and I haven't used them
all.


> If you're writing something that is reading from an input device and
> writing to an output device, I think your best bet is to use a ring buffer
> to store the input.
>

Personally, I think the single callback is extremely useful for real-time
processing -- no need for the extra latency caused by a ring buffer. With
two callbacks you could avoid the latency if both callbacks are on the same
thread and you know which is going to be called first, but I suspect
that's not guaranteed either.


BTW, on this page you say that PA supports SR conversion:
https://github.com/andrewrk/libsoundio/wiki/libsoundio-vs-PortAudio

I am pretty sure PA won't do that. It does have pretty good support for
whatever SR conversion the native platform supports, but it won't do any
conversions itself.

bjorn


-- 
Bjorn Roche
@shimmeoapp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FFTW Help In C

2015-06-18 Thread Bjorn Roche
oops, hit send before I was ready.

Calling malloc in the callback shouldn't cause a crash on most platforms.
The problem you have is a bit more subtle. If I understand what you're
doing, there are a few problems:

- userData isn't big enough. You want something like this (not in your
callback):

userData = malloc(sizeof(float) * NUM_SAMPS_IN_BUFFER);

- then in your callback, you want something like this:

float *inp = (float *) inputBuffer, *fftbuffer = (float *) userData;
for (i = 0; i < framesPerBuffer; i++) fftbuffer[i] = inp[i];

On Thu, Jun 18, 2015, Bjorn Roche  wrote:

> Calling malloc in the callback shouldn't cause a crash on most platforms.
> The problem you have is a bit more subtle and has to do with how pointers
> are interpreted. If I understand what you're doing, there are a few
> problems:
>
> - in/userData isn't big enough.
>
>
> userData=malloc(sizeof(float)*NUM_SAMPS_IN_BUFFER);
>
> float *inp = (float*) inputBuffer, *fftbuffer = (float *) userData;
> for (i = 0; i < framesPerBuffer; i++) fftbuffer[i] = inp[i];
>
> On Thu, Jun 18, 2015 at 1:15 PM, Connor Gettel 
> wrote:
>
>> Hello Everyone,
>>
>> Ross, Bjorn, Danny, Richard and Bogac, Thank you for your insightful
>> feedback and advice with my project. I haven’t had time to look over all
>> the material just yet, but i surely will over the next couple days. I’ve
>> hit a bit wall with one specific part of the code, this mainly comes down
>> to syntax I think and lack of experience with C. If anyone could tell me
>> what i’m doing wrong in this instance, please let me know if i’m on the
>> right track.
>>
>> inside the memory allocations in the main function I’ve got this line:
>>
>> in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*N);
>>
>> This is my input array, i need to fill it with data from portaudio’s
>> inputBuffer (I think).
>>
>> As far as I know the way to do this is to use the *userData Parameter of
>> the callback which is type void*.
>>
>> So now I want to make ‘in’ the userData Parameter… i need to cast inside
>> the callback from double to void*, and then to float to match the
>> inputBuffer… I’ve done this with:
>>
>> int i;
>> double *in;
>> in = malloc(sizeof(double));
>> userData=(void*)in;
>> userData=in;
>> float *inp = (float*) inputBuffer, *fftbuffer = (float *) userData;
>> for (i = 0; i < framesPerBuffer; i++) fftbuffer[i] = inp[i];
>>
>> So my thought process now is that inputBuffer should be feeding the
>> fftbuffer (which is the input array of the fft) with data… which means I
>> can now execute my plan…
>>
>> Compiling fine! Unfortunately i’m crashing. I’m pretty sure it’s because
>> i’m calling malloc in the callback. Which isn’t meant to be done
>> (obviously).
>>
>> Question are:
>> 1) Am I on the right track at all?
>> 2) Is there a malloc free way to cast from double to void* ?
>> 3) Am I right in thinking the ‘in’ is the FFT Buffer?
>>
>> Cheers,
>>
>> Connor.
>> --
>> dupswapdrop -- the music-dsp mailing list and website:
>> subscription info, FAQ, source code archive, list archive, book reviews,
>> dsp links
>> http://music.columbia.edu/cmc/music-dsp
>> http://music.columbia.edu/mailman/listinfo/music-dsp
>
>
>
>
> --
> Bjorn Roche
> @shimmeoapp
>



-- 
Bjorn Roche
@shimmeoapp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FFTW Help In C

2015-06-18 Thread Bjorn Roche
Calling malloc in the callback shouldn't cause a crash on most platforms.
The problem you have is a bit more subtle and has to do with how pointers
are interpreted. If I understand what you're doing, there are a few
problems:

- in/userData isn't big enough.


userData=malloc(sizeof(float)*NUM_SAMPS_IN_BUFFER);

float *inp = (float*) inputBuffer, *fftbuffer = (float *) userData;
for (i = 0; i < framesPerBuffer; i++) fftbuffer[i] = inp[i];

On Thu, Jun 18, 2015 at 1:15 PM, Connor Gettel  wrote:

> Hello Everyone,
>
> Ross, Bjorn, Danny, Richard and Bogac, Thank you for your insightful
> feedback and advice with my project. I haven’t had time to look over all
> the material just yet, but i surely will over the next couple days. I’ve
> hit a bit wall with one specific part of the code, this mainly comes down
> to syntax I think and lack of experience with C. If anyone could tell me
> what i’m doing wrong in this instance, please let me know if i’m on the
> right track.
>
> inside the memory allocations in the main function I’ve got this line:
>
> in = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*N);
>
> This is my input array, i need to fill it with data from portaudio’s
> inputBuffer (I think).
>
> As far as I know the way to do this is to use the *userData Parameter of
> the callback which is type void*.
>
> So now I want to make ‘in’ the userData Parameter… i need to cast inside
> the callback from double to void*, and then to float to match the
> inputBuffer… I’ve done this with:
>
> int i;
> double *in;
> in = malloc(sizeof(double));
> userData=(void*)in;
> userData=in;
> float *inp = (float*) inputBuffer, *fftbuffer = (float *) userData;
> for (i = 0; i < framesPerBuffer; i++) fftbuffer[i] = inp[i];
>
> So my thought process now is that inputBuffer should be feeding the
> fftbuffer (which is the input array of the fft) with data… which means I
> can now execute my plan…
>
> Compiling fine! Unfortunately i’m crashing. I’m pretty sure it’s because
> i’m calling malloc in the callback. Which isn’t meant to be done
> (obviously).
>
> Question are:
> 1) Am I on the right track at all?
> 2) Is there a malloc free way to cast from double to void* ?
> 3) Am I right in thinking the ‘in’ is the FFT Buffer?
>
> Cheers,
>
> Connor.
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp




-- 
Bjorn Roche
@shimmeoapp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FFTW Help in C

2015-06-12 Thread Bjorn Roche
Ross and Connor,

It's absolutely true that each callback must respond in a certain amount of
time. The maximum execution time of each callback is something you need to
consider. Ross' language is more precise, but maybe some arm-wavy examples
will help:

- If each callback does the same amount of work (e.g., each call does an
FFT on all the incoming data), then each callback likely takes about the
same amount of time. In this case, moving to another thread *probably*
won't help (although, depending on the system, it might, especially if you
are anywhere near the allotted time you have for each callback).
- If some callbacks do different amounts of work (e.g., filling a buffer
and doing one FFT every three or four callbacks), then the maximum time is
much larger than the average. In this case, passing the audio to another
thread and doing the work there will likely help, because the additional
buffering helps to "spread the work around".
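
A common way to implement the second case is a single-producer,
single-consumer ring buffer: the audio callback writes samples, and a
worker thread reads them and runs the FFT. A minimal C11 sketch, assuming
a power-of-two size and ignoring overflow handling:

#include <stdatomic.h>

#define RB_SIZE 8192  /* power of two, several FFTs' worth of audio */

typedef struct {
    float       data[RB_SIZE];
    atomic_uint writePos;  /* advanced only by the audio callback */
    atomic_uint readPos;   /* advanced only by the FFT thread */
} RingBuffer;

/* Called from the audio callback: no locks, no allocation. */
static void rb_write(RingBuffer *rb, const float *src, unsigned n)
{
    unsigned w = atomic_load_explicit(&rb->writePos, memory_order_relaxed);
    for (unsigned i = 0; i < n; i++)
        rb->data[(w + i) & (RB_SIZE - 1)] = src[i];
    atomic_store_explicit(&rb->writePos, w + n, memory_order_release);
}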

bjorn

On Fri, Jun 12, 2015 at 4:25 AM, Ross Bencina 
wrote:

> Hey Bjorn, Connor,
>
> On 12/06/2015 1:27 AM, Bjorn Roche wrote:
>
>> The important thing is to do anything that might take an unbounded
>> amount of time outside your callback. For a simple FFT, the rule of
>> thumb might bethat all setup takes place outside the callback. For
>> example, as long as you do all your malloc stuff outside the
>> callback, processing and soon can usually be done in the callback.
>>
>
> All true, but Connor may need to be more careful than that. I.e. make sure
> that the amount of time taken is guaranteed to be less than the time
> available in each callback.
>
> An FFT is not exactly a free operation. On a modern 8-core desktop
> machine it's probably trivial to perform a 2048-point FFT in the audio
> callback. But on a low-powered device, a single FFT of large enough size
> may exceed the available time in an audio callback. (Connor mentioned
> Raspberry Pi on another list).
>
> The only way to be sure that the FFT is OK to run in the callback is to:
>
> - work out the callback period
>
> - work out how long the FFT takes to compute on your device and how many
> you need to compute per-callback.
>
> - make sure time-to-execute-FFT << callback-period (I'd aim for below
> 75% of one callback period to execute the entire FFT). This is not
> something that can be easily amortized across multiple callbacks.
>
>
> The above also assumes that your audio API lets you use 100% of the
> available CPU time within each callback period. A safer default
> assumption might be 50%.
>
> Remember that that your callback period will be short (64 samples) but
> your FFT may be large, e.g. 2048 bins. In such cases you have to perform
> a large FFT in the time of a small audio buffer period.
>
> If the goal is to display the results I'd just shovel the audio data
> into a buffer and FFT it in a different thread. That way if the FFT
> thread falls behind it can drop frames without upsetting the audio
> callback.
>
> The best discussion I've seen about doing FFTs synchronously is in this
> paper:
>
> "Implementing Real-Time Partitioned Convolution Algorithms on
> Conventional Operating Systems"
> Eric Battenberg, Rimas Avizienis
> DAFx2011
>
> Google says it's available here:
>
> http://cnmat.berkeley.edu/system/files/attachments/main.pdf
>
> If anyone has other references for real-time FFT scheduling I'd be
> interested to read them.
>
> Cheers,
>
> Ross.
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FFTW Help in C

2015-06-11 Thread Bjorn Roche
The important thing is to do anything that might take an unbounded amount
of time outside your callback. For a simple FFT, the rule of thumb might be
that all setup takes place outside the callback. For example, as long as
you do all your malloc stuff outside the callback, processing and so on can
usually be done in the callback.

For some sample code (not using FFTW, but it's similar):

http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html
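
To sketch the shape this takes with PortAudio (assuming a mono float32
input stream; FFT_SIZE, State, and the hand-off to an analysis thread are
illustrative): everything is allocated before the stream starts, and the
callback only copies samples.

#include <portaudio.h>
#include <string.h>

#define FFT_SIZE 1024

typedef struct {
    float fftInput[FFT_SIZE];  /* preallocated: no malloc in the callback */
    int   filled;              /* samples collected so far */
} State;

static int callback(const void *input, void *output,
                    unsigned long frames,
                    const PaStreamCallbackTimeInfo *timeInfo,
                    PaStreamCallbackFlags statusFlags, void *userData)
{
    State *st = (State *)userData;
    const float *in = (const float *)input;
    unsigned long n = frames;
    (void)output; (void)timeInfo; (void)statusFlags;

    if (st->filled + (int)n > FFT_SIZE)
        n = (unsigned long)(FFT_SIZE - st->filled);
    memcpy(st->fftInput + st->filled, in, n * sizeof(float));
    st->filled += (int)n;
    /* when st->filled == FFT_SIZE, signal another thread to run the FFT */
    return paContinue;
}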


On Thu, Jun 11, 2015 at 10:20 AM, Connor Gettel  wrote:

> Hello Everyone,
>
> My name’s Connor and I’m new to this mailing list. I was hoping somebody
> might be able to help me out with some FFT code.
>
> I want to do a spectral analysis of the mic input of my sound card. So far
> in my program i’ve got my main function initialising portaudio,
> inputParameters, outputParameters etc, and a callback function above
> passing audio through. It all runs smoothly.
>
> What I don’t understand at all is how to structure the FFT code in and
> around the callback as i’m fairly new to C. I understand all the steps of
> the FFT mostly in terms of memory allocation, setting up a plan, and
> executing the plan, but I’m still really unclear as how to structure these
> pieces of code into the program. What exactly can and can’t go inside the
> callback? I know it’s a tricky place because of timing etc…
>
> Could anybody please explain to me how i could achieve a real to complex 1
> dimensional DFT on my audio input using a callback?
>
> I cannot even begin to explain how grateful I would be if somebody could
> walk me through this process.
>
> I have attached my callback function code so far with the FFT code
> unincorporated at the very bottom below the main function (should anyone
> wish to have a look)
>
> I hope this is all clear enough, if more information is required please
> let me know.
>
> Thanks very much in advance!
>
> All the best,
>
> Connor.
>
>
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>



-- 
Bjorn Roche
@shimmeoapp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Glitch/Alias free modulated delay

2015-03-20 Thread Bjorn Roche



-- 
Bjorn Roche
@shimmeoapp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Statistics on the (amplitude) FFT of "White Noise"

2014-10-31 Thread Bjorn Roche
There is a theorem that goes something like this:

If you have white noise expressed in one orthonormal basis, and you
transform it to another orthonormal basis, the result will still be white
noise.

The phrasing of that is obviously imprecise, but the point is this: since
the time and Fourier domains are both orthonormal bases of band-limited
functions, you can conclude that your FFT of white noise will also be
distributed like white noise. This allows us to define white noise in
multiple ways, the way the wikipedia article does.

However, white noise created in the time domain can be created using any
probability density function (PDF). For example, Gaussian white noise uses
the normal distribution and uniform white noise uses the uniform
distribution, but they both produce white noise as long as certain
conditions are met (e.g., the samples are independent). The PDF, however,
is generally not preserved by the transform: the Gaussian case is special,
because a vector of i.i.d. Gaussian samples is rotation invariant, but for
any other input PDF each Fourier bin is a weighted sum of many independent
samples, so by the central limit theorem the bins come out approximately
Gaussian regardless. (The answer to your question would depend on that, and
on several other parts of the phrasing of your question that aren't clear
to me.)
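
Here's a quick Monte-Carlo sketch of that central limit effect (illustrative
code of mine, not from any textbook): it accumulates one DFT bin of uniform
white noise over many trials and estimates the excess kurtosis, which should
land near 0 (Gaussian) rather than the -1.2 of a uniform:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const int N = 1024, TRIALS = 20000, k = 37;  /* arbitrary bin index */
    double m2 = 0, m4 = 0;
    for (int t = 0; t < TRIALS; t++) {
        double re = 0;
        for (int n = 0; n < N; n++) {
            double x = 2.0 * rand() / RAND_MAX - 1.0;  /* uniform PDF */
            re += x * cos(2.0 * M_PI * k * n / N);     /* Re of bin k */
        }
        m2 += re * re;
        m4 += re * re * re * re;
    }
    m2 /= TRIALS; m4 /= TRIALS;
    /* excess kurtosis ~ 0 for a Gaussian; a uniform would give -1.2 */
    printf("excess kurtosis = %f\n", m4 / (m2 * m2) - 3.0);
    return 0;
}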

bjorn


On Fri, Oct 31, 2014 at 1:06 PM, Theo Verelst  wrote:

>
>
> Hi music DSpers,
>
> Maybe running the risk of starting a Griffin-Gate,
> but one more consideration for the people interested in
> keeping the basics of digital processing a bit pure, and
> maybe to learn a thing or two for those working and/or
> hobby-ing around in the field.
>
> Just like there is some doubt cast on the Wikipedia page on
> the white noise subject ( http://en.wikipedia.org/wiki/White_noise )
> I put quotes around the concept, because if we're talking the frequency
> transform usually implied by Fast Fourier Transform, we're talking
> sampled signals, so we need to make some assumptions about
> how to satisfy the sampling theorem if we start from the
> normal Information Theory and Physics interpretation of
> continuous white noise signals. I suppose the assumption is
> that if you take random numbers, somehow limited to the maximum
> amplitude of the samples you use, each sample an uncorrelated
> random number, you have some form of digital "white noise" that
> can be related to the more general concepts.
>
> Now we take such a signal, or a sampled signal from a continuous
> (more interesting !) white noise with some form of frequency
> limitation (creates correlation in most cases) or signal assumption
> and sample and hold perfection, to make the FFT transform act on
> a contiguous set of the properly obtained white noise signal.
> Say we're only taking one length of the FFT transform, and are only
> interested in the volume of the various output "bins".
>
> Now, how probable is it that we get "all equal" frequency amounts as
> the output of the this FFT transform (without regarding phase), taking
> for instance 256 or 4096 bins, and 16 bits accuracy ?! Or, how long would
> we have to average the bin values to end up equal (and what sort
> of entropy would that entail)?
>
> T.V.
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>



-- 
-
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] #music-dsp chatroom invitation

2014-10-07 Thread Bjorn Roche
I was just on a call with someone who researches hearing and
psychoacoustics. He happened to mention gamma tone filters, which I had
never heard of. I may have misunderstood, since it was a tangent, but I
believe he said it's a commonly used model for the hearing system.

http://en.wikipedia.org/wiki/Gammatone_filter
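
For anyone curious, here's a sketch of the (unnormalized) gammatone impulse
response -- the formula and the ERB constants are the ones usually
attributed to Glasberg & Moore, quoted from memory, so double-check them
against the wikipedia page before relying on them:

#include <math.h>

/* g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t), order n = 4,
   bandwidth b tied to the ERB of the centre frequency f:
   ERB(f) = 24.7 * (4.37*f/1000 + 1). */
void gammatone_ir(float *out, int len, float fs, float f)
{
    const int   n   = 4;
    const float erb = 24.7f * (4.37f * f / 1000.0f + 1.0f);
    const float b   = 1.019f * erb;
    for (int i = 0; i < len; i++) {
        float t = i / fs;
        out[i] = powf(t, n - 1) * expf(-2.0f * M_PI * b * t)
                                * cosf(2.0f * M_PI * f * t);
    }
}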

On Tue, Oct 7, 2014 at 5:28 PM, Charles Z Henry  wrote:

> On Tue, Oct 7, 2014 at 4:11 PM, Peter S 
> wrote:
> > On 07/10/2014, Charles Z Henry  wrote:
> >> watch yer tone, Peter we're not idiots, because we didn't happen
> >> to write the great wikipedia definition of a filterbank.  Be a bit
> >> more flexible--this is fun :D
> >
> > Irregardless of Wikipedia, for me the word 'filterbank' does not imply
> > linearity How is that even meant? As - linearly spaced in frequency,
> > or as f(a)+f(b)=f(a+b)? If the first, then a logarithmically spaced
> > array of filters (like an equalizer) is not even considered a
> > 'filterbank'? And how are the cochlear filter bands not 'structurally
> > the same'? How do you even define that for a biological organ? Aren't
> > the hair cells made of the same structure? (Structurally same cells,
> > cell walls, mitochondria, etc.) How is that not 'structurally the
> > same' ?
> >
> > What if I make an array of filters where at some filters I use SVF
> > filters, and at other filters I use biquads? Is that NOT a filterbank,
> > because the filters are not 'structurally the same'?
> >
> > Kinda nonsense...
>
> Yep...
>
> A tangent on a conversation that could have been filed as 'subtle
> distinctions on model choice'.  Blind men and the elephant.  Get back
> to task--what is it you're trying to describe and for what reason?
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>



-- 
-
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Filtering out unwanted square wave (Radio: DCS/DPL signal)

2014-07-30 Thread Bjorn Roche
Thanks for all the info so far. I should have been more careful when I said DCS
is a square wave. It's probably more accurately described as an NRZ code.
Nevertheless, these suggestions are very useful.

thanks

bjorn


On Wed, Jul 30, 2014 at 2:36 PM, Bogac Topaktas  wrote:

> The most efficient way is to use adaptive noise cancellation. See:
>
>
> http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2008/rmo25_kdw24/rmo25_kdw24/index.html
>
> http://www.dsprelated.com/showmessage/5838/5.php
>
> http://www.cs.cmu.edu/~aarti/pubs/ANC.pdf
>
> It works perfectly for removing 50/60Hz hum from single coil
> pickups of electric instruments, where static notch filtering
> is not adequate (see first reference above).
>
> In the worst case, i.e., if you can't construct a perfect cancellation
> signal, you may recover spoken words with a robust speech recognizer and
> then synthesize a clean speech afterwards.
>
> On Tue, July 29, 2014 6:32 pm, Bjorn Roche wrote:
> > Hey all,
> >
> >
> > I'm dealing with a non-music but still audio-related DSP issue. I need
> > to remove a DPL/DCS signal from a recording. Roughly speaking, a DCS
> signal
> > is a low frequency (67.15Hz) square wave sent at the same time, over the
> > same carrier, as speech. Because it is a square wave, it has many strong
> > harmonics that overlap with speech. Obviously, the speech must be
> > preserved as well as possible and the goal is to reject the DCS as much
> as
> > possible because it's annoying as all get-out.
> >
> > On the surface, this seems like a problem that might be solved the same
> > way as removing 60Hz power-line noise: lots of notch filters. However,
> > power-line noise tends to be weaker and comes from a source that is much
> > closer to a sine wave.
> >
> > So, my question is: is there a better way to do this? (preferably
> > something someone has experience with)
> >
> > This link contains more info about DCS:
> > http://onfreq.com/syntorx/dcs.html It
> > mentions "Since DCS creates audio harmonics well above 300 Hz (i.e. into
> > the audible portion of the band from 300 to 3000 Hz), radios must have
> > good filters to remove the unwanted DCS noise." Ha! I've asked this also
> on
> > stack exchange here:
> >
> http://dsp.stackexchange.com/questions/17462/filtering-out-unwanted-squar
> > e-wave-radio-dcs-dpl-signal
> >
> > TIA!
> >
> >
> > bjorn
> >
> > --
> > -
> > Bjorn Roche
> > bjornroche.com <http://blog.bjornroche.com> @xonamiaudio
> > --
> > dupswapdrop -- the music-dsp mailing list and website: subscription info,
> > FAQ, source code archive, list archive, book reviews, dsp links
> > http://music.columbia.edu/cmc/music-dsp
> > http://music.columbia.edu/mailman/listinfo/music-dsp
> >
> >
>
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>
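
For the archive, here's a rough sketch of the adaptive-canceller idea from
those links, specialized to a periodic interferer: LMS weights on quadrature
references at the fundamental and its odd harmonics (the square-wave
idealization; real DCS is NRZ, so the harmonic structure differs). All
parameters are illustrative and would need tuning:

#include <math.h>

#define NH 8  /* number of harmonics to cancel */

void cancel_dcs(float *x, int len, float fs)
{
    const float f0 = 67.15f;  /* DCS fundamental   */
    const float mu = 1e-4f;   /* LMS step size     */
    float wc[NH] = {0}, ws[NH] = {0};
    for (int n = 0; n < len; n++) {
        float yhat = 0, c[NH], s[NH];
        for (int h = 0; h < NH; h++) {
            /* odd harmonics 1, 3, 5, ...; for long files accumulate
               phase instead of computing it from n directly */
            float w = 2.0f * M_PI * f0 * (2 * h + 1) * n / fs;
            c[h] = cosf(w); s[h] = sinf(w);
            yhat += wc[h] * c[h] + ws[h] * s[h];
        }
        float e = x[n] - yhat;            /* error = cleaned signal */
        for (int h = 0; h < NH; h++) {    /* LMS weight update      */
            wc[h] += mu * e * c[h];
            ws[h] += mu * e * s[h];
        }
        x[n] = e;
    }
}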



-- 
-
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Filtering out unwanted square wave (Radio: DCS/DPL signal)

2014-07-29 Thread Bjorn Roche
Hey all,

I'm dealing with a non-music but still audio-related DSP issue. I need
to remove
a DPL/DCS signal from a recording. Roughly speaking, a DCS signal is a low
frequency (67.15Hz) square wave sent at the same time, over the same
carrier, as speech. Because it is a square wave, it has many strong
harmonics that overlap with speech. Obviously, the speech must be preserved
as well as possible and the goal is to reject the DCS as much as possible
because it's annoying as all get-out.

On the surface, this seems like a problem that might be solved the same way
as removing 60Hz power-line noise: lots of notch filters. However,
power-line noise tends to be weaker and comes from a source that is much
closer to a sine wave.

So, my question is: is there a better way to do this? (preferably something
someone has experience with)

This link contains more info about DCS: http://onfreq.com/syntorx/dcs.html It
mentions "Since DCS creates audio harmonics well above 300 Hz (i.e. into
the audible portion of the band from 300 to 3000 Hz), radios must have good
filters to remove the unwanted DCS noise." Ha!
I've asked this also on stack exchange here:
http://dsp.stackexchange.com/questions/17462/filtering-out-unwanted-square-wave-radio-dcs-dpl-signal

TIA!

bjorn

-- 
-----
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] DFT by Simultaneous Equations

2014-07-08 Thread Bjorn Roche
I have to agree with Charles. That book is useful because it's available
free online, but there are other resources.

At the risk of taking you away from the theory, you might find this blog
post I wrote a while ago useful. It's on frequency detection, but many of
the same concepts apply and it comes with source code:
http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html

Note the blog post won't answer your question about how many bins there are
(the post is a bit sloppy about that), but working through the code might
help. For what it's worth: a real-to-complex N-point DFT gives N/2 + 1
complex bins, with conjugate symmetry accounting for the rest, so the N real
inputs still map to N real degrees of freedom.


On Tue, Jul 8, 2014 at 11:05 AM, Charles Z Henry  wrote:

> On Tue, Jul 8, 2014 at 9:47 AM, Filipe Pereira
>  wrote:
>
> > Am I misreading something or is there a mistake in this explanation?
> Should
> > it be 2 groups of N equations - N for the real and N for the imaginary
> part
> > of the frequency domain?
>
> It would be a dis-service to your education to answer such a question.
> The textbook is not especially friendly.  Research some other sources.
> I like the Wickerhauser books on Fourier and Wavelets.
>
> The DFT is perhaps harder in some respects than the continuous Fourier
> transform.  Understanding the math with integral calculus, up to
> Hilbert Spaces and operators (of which the Fourier transform is one
> kind), will take you much further.
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>



-- 
-
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a weird but salient, LTI-relevant question

2014-05-07 Thread Bjorn Roche
I've never heard this phenomenon myself, but I am familiar with it. It is a
psychoacoustic phenomenon, and I've heard it referred to as "choo-chooing",
though when I just googled for that I got nothing related, so maybe that's
just a colloquial term amongst the engineers I know. I've never come across
it in any of textbooks on the subject, though I'm sure there are papers
written about it.

Some early hardware dithers found it cheaper to store a table of numbers in
ROM than to calculate random numbers with a PRNG [1]. Because memory was
so expensive at the time, the length of the loop was chosen to be just long
enough to avoid the "Choo-Choo"ing phenomenon. If I remember correctly from
what the designer of one of those dithers told me, the length of repeat
that causes choo-chooing is very nearly the same for everyone, so it was
pretty easy to choose the appropriate loop length. It would be cool to have
a WebAudio demonstration and/or test of this, as Chinmay Pendharkar
suggested.

bjorn

[1] You may have noticed early dithers made vague and strange marketing
claims. Like, "this is not a real dither, but a signal", or "not a dither,
but a bitmapping/bit-reduction scheme" or that they were somehow different
from or better than dither, even though the effect is exactly the same (or,
if anything, arguably worse, because no engineer with access to a halfway
decent PRNG would use a LUT). How the marketing departments of these
companies managed to turn the liability of not having a PRNG into an asset
is just proof that the marketing departments of these companies are a bunch
of mindless jerks who'll be the first against the wall when the revolution
comes.
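
For anyone who wants to experiment with the repeat length, here's a sketch
of a LUT-based TPDF dither with a looping noise table (illustrative only,
not any particular product's design):

#include <stdlib.h>
#include <math.h>

#define TABLE_LEN 8192  /* shorten this to hear the repeat */

static float table[TABLE_LEN];
static int   pos = 0;

void dither_init(void)
{
    for (int i = 0; i < TABLE_LEN; i++) {
        /* TPDF: sum of two uniforms in [-0.5, 0.5), in LSB units */
        float a = (float)rand() / RAND_MAX - 0.5f;
        float b = (float)rand() / RAND_MAX - 0.5f;
        table[i] = a + b;
    }
}

/* Quantize a [-1,1) float sample to 16 bits with the looped dither. */
short dither_quantize(float x)
{
    float d = table[pos];
    pos = (pos + 1) % TABLE_LEN;
    long q = lrintf(x * 32767.0f + d);
    if (q >  32767) q =  32767;
    if (q < -32768) q = -32768;
    return (short)q;
}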



On Wed, May 7, 2014 at 10:10 PM, Eden Sherry  wrote:

> Should standardize the sampling rate as well. With an infinite sampling
> rate and your method, you'd have something like a pure broadband "tone",
> right?
>
> > On May 7, 2014, at 5:39 PM, Sampo Syreeni  wrote:
> >
> > This is going to sound pretty weird, I'm sure, but could as many people
> on-list perform the following experiment on themselves and their close
> ones, as possible? Then report back (privately, so as not to ruin the
> surprise for everybody else?)
> >
> > Take a long (at least 30 seconds and possibly more) sequence of truly
> random (AWGN) noise, either from a very long period PRNG or from a primary
> randomness source. Then starting with very long periods of over 10 seconds,
> loop the noise, curtailing the period of repetition. Dropping it, say,
> 200ms at a time at first, and in the end perhaps something like 10ms at a
> time. When does your ear, perceptually speaking, start to say that the
> noise repeats? Precisely?
> >
> > I'd be interested in hearing what people on-list have to say about this
> one. Especially the ones who are curious enough to find the precise limit
> in milliseconds, and even subject their loved ones to the test.
> >
> > Because, I mean, at least for me this was a total mindfuck, and if you
> analyze it e.g. via the usual LTI theory of human hearing, the results do
> not make any sense at all. I think, but I'm not too sure. Whence the
> question. ;)
> > --
> > Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> > +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> > --
> > dupswapdrop -- the music-dsp mailing list and website:
> > subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> > http://music.columbia.edu/cmc/music-dsp
> > http://music.columbia.edu/mailman/listinfo/music-dsp
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>



-- 
-
Bjorn Roche
bjornroche.com <http://blog.bjornroche.com>
@xonamiaudio
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] DSP/Plug-in Developers in the Big Apple

2014-01-03 Thread Bjorn Roche
Go through the list of demos from previous presentations here and you'll find
lots of companies in NYC:

http://www.meetup.com/music-techster/

This is also a fun group. You never really know who's going to show up:

http://monthlymusichackathon.org/


Other meetups worth checking out:

http://www.meetup.com/digitalmusicny/
http://www.meetup.com/NYC-Spotify-Tech-Group/
http://www.meetup.com/musicstartups/

bjorn



On Jan 3, 2014, at 4:37 AM, Richard Dobson  
wrote:

> Might be relevant in this context:
> 
> Michael Gogins hosts a Csound users group in his NY apartment usually on the 
> first Thursday of the month. See e.g.:
> 
> http://michaelgogins.tumblr.com/CsoundVST
> 
> As Csound is now available for iOS and Android as well as for desktop 
> platforms, it can be used for general audio dsp design, and of course has a 
> huge and ever expanding repertoire of opcodes, etc.
> 
> Richard Dobson
> 
> On 03/01/2014 01:40, Douglas Repetto wrote:
>> Well, the music-dsp list lives in New York City! And all of the
>> universities in the city have active computer music programs. Plus there
>> are lots of freelance developers and several audio-oriented startups. I'd
>> recommend checking out the various music hack days, hacker spaces, dorkbot,
>> experimental music venues, etc. All sorts of interesting audio fun
>> happening in NYC!
>> 
>> 
>> best,
>> douglas
>> 
>> 
>> 
>> On Thu, Jan 2, 2014 at 2:01 PM, Price Smith wrote:
>> 
>>> Hey guys. My name is Price. I recently graduated from Berklee and am making
>>> the move to New York City this January. I'm a novice plug-in/DSP developer
>>> and would love to connect with others in the field who are also in the
>>> city. I've looked a good bit and have only found that the company Sample
>>> Logic is hq'd there. Does anyone happen to know of any other person(s) or
>>> company/companies stationed in New York, NY? Thanks a million and happy
>>> holidays!
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] HTML

2013-08-08 Thread Bjorn Roche

On Aug 8, 2013, at 1:55 PM, Ian Esten wrote:

> I would be OK with forcing the list to be html free, as long as I got
> a notification that my emails were refused because they were in html.
> I have sent messages to the list several times and been surprised that
> I didn't get a response!

Couldn't agree more.

bjorn

-----
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com
@xonamiaudio

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] basic trouble with signed and unsigned types

2013-05-01 Thread Bjorn Roche
Not sure I completely understand what you are after, but I think most audio 
folks don't use types like int, long and short, but rather fixed-width types 
like int32_t from stdint.h. There are various guarantees made about the 
sizes of those types that you can look up.

Also, I assume you've given C++ templates and operator overloading 
consideration.
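
For the shift part of the question, the usual dodge is to do the logical
shift in the unsigned type of known width -- a sketch for 32 bits (assumes
two's complement, which the exact-width stdint.h types guarantee):

#include <stdint.h>

/* Logical (zero-filling) right shift of a signed value: shift in the
   unsigned type, then convert back. The conversion back is technically
   implementation-defined for values > INT32_MAX, but does what you'd
   expect on every two's-complement machine I know of. */
static inline int32_t lsr32(int32_t x, unsigned n)
{
    return (int32_t)((uint32_t)x >> n);
}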

bjorn


On May 1, 2013, at 4:35 PM, Sampo Syreeni wrote:

> For the longest time I took out a compiler and started cranking out an old 
> idea. In that vein, I'm using libsndfile and its (highly reasonable) 
> processing model: you just keep everything to zero padded ints (preferably 
> signed) and go from there.
> 
> The trouble is that my code is of the kind that also requires lots of bit 
> twiddling. My current problem comes from trying to make the code more or less 
> adaptive to any bit width, while I also have to do stuff like computed shifts.
> 
> So, how do you go about systematically and portably implementing what you 
> would expect from your logical operations, using standard C operations, 
> without knowing the basic width of your types? (Logical, not arithmetic) 
> right shifts of signed quantities, efficient parity, and computed shifts with 
> negative offsets are proving particularly nasty at the moment. (It has to do 
> with dithering at arbitrary word length which also has to be reasonably 
> efficient if any set in silicon.)
> -- 
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com
@xonamiaudio

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Overlap-add settings for pitch detection?

2013-01-22 Thread Bjorn Roche
Pitch detection using FFT is probably not the best way to do it, but if you are 
going to, here's a tutorial to get you started with windowing and frame size. 
For overlap, the more often you do it the faster you'll be able to respond (to 
a point). My tutorial does not do any overlapping.

http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html

There are some other tips in there as well as a simple C implementation of a 
guitar tuner.
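
As a starting point for the windowing/overlap questions, here's a sketch (my
names, nothing from the tutorial): Hann window, 50% overlap, and a hop size
you can shrink for faster response:

#include <math.h>

#define FRAME 2048
#define HOP   (FRAME / 2)  /* 50% overlap: smaller hop = faster updates */

void analyze_frames(const float *sig, int len,
                    void (*analyze)(const float *frame, int n))
{
    static float win[FRAME], buf[FRAME];
    for (int i = 0; i < FRAME; i++)  /* Hann window */
        win[i] = 0.5f - 0.5f * cosf(2.0f * M_PI * i / (FRAME - 1));
    for (int start = 0; start + FRAME <= len; start += HOP) {
        for (int i = 0; i < FRAME; i++)
            buf[i] = sig[start + i] * win[i];
        analyze(buf, FRAME);  /* e.g. FFT + peak picking */
    }
}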

bjorn

On Jan 22, 2013, at 9:23 AM, Danijel Domazet wrote:

> Hi music dsp,
> In order to implement accuarate pitch detection we are sending input signal
> through Fourier analysis stage. Are there any recommended settings with
> regards to:
> - Window frame size? 
> - Window overlap factor?
> - Window type (Hamming, Hann, etc.)? 
> 
> 
> Thanks. 
> 
> Danijel Domazet
> LittleEndian.com
> 
> 
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
Check us out at NAMM booth #1002
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com
@xonamiaudio






--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-10 Thread Bjorn Roche

On Dec 10, 2012, at 12:35 PM, robert bristow-johnson wrote:

> On 12/10/12 11:18 AM, Bjorn Roche wrote:
>> On Dec 10, 2012, at 4:41 AM, Alessandro Saccoia wrote:
>> 
>>>> I don't think you have been clear about what you are trying to achieve.
>>>> 
>>>> Are you trying to compute the sum of many signals for each time point? Or 
>>>> are you trying to compute the running sum of a single signal over many 
>>>> time points?
>>> Hello, thanks for helping. I want to sum prerecorded signals progressively. 
>>> Each time a new recording is added to the system, this signal is added to 
>>> the running mix and then discarded so the original source gets lost.
>>> At each instant it should be possible to retrieve the mix run till that 
>>> moment.
>>> 
>> I see. I think you'll want to go with my first suggestion:
>> 
>> Y = (1/N) * (X_1 + X_2 + ... + X_N)
>> 
>> 
>> But only do the division when you "retrieve". In other words, store NY:
>> 
>> NY = X_1 + X_2 + ... + X_N
> 
> 
> just a quick note, Bjorn.  consider viewing your ASCII math with a 
> fixed-width font.  we cannot all guess at what proportional-width font you 
> might have been using, but if everyone uses a fixed-width font (for ASCII 
> math or ASCII art), characters line up and you can read the symbols.

Oops.

>  i use a combination of TeX-like constructs (like X_0, X_1, or y^2, etc) and 
> positioning (like i don't like to use the LaTeX construct for a summation or 
> integral for ASCII math).  also i use uppercase for either transformed (freq 
> domain) signals or important constants (like N).  so i might express your 
> first equation like:
> 
>                 N
>    y  =  1/N  SUM{ x_i }
>               i=0
> 
> or
> 
>                    N
>    y[n]  =  1/N  SUM{ x_i[n] }
>                  i=0


Sure, that's clearly better, but I was following the conventions of the OP.

bjorn

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-10 Thread Bjorn Roche

On Dec 10, 2012, at 4:41 AM, Alessandro Saccoia wrote:

>> 
>> I don't think you have been clear about what you are trying to achieve.
>> 
>> Are you trying to compute the sum of many signals for each time point? Or 
>> are you trying to compute the running sum of a single signal over many time 
>> points?
> 
> Hello, thanks for helping. I want to sum prerecorded signals progressively. 
> Each time a new recording is added to the system, this signal is added to the 
> running mix and then discarded so the original source gets lost. 
> At each instant it should be possible to retrieve the mix run till that 
> moment.
> 

I see. I think you'll want to go with my first suggestion:

Y = (1/N) * (X_1 + X_2 + ... + X_N)


But only do the division when you "retrieve". In other words, store NY:

NY = X_1 + X_2 + ... + X_N

If it proves necessary, you can do one of Brad Smith's suggestions for adding 
floats. Unfortunately, R B-J's suggestion won't work here since you are dealing 
with floats, not ints.
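
Concretely, a minimal sketch of that store-NY approach (names illustrative;
assumes the sum buffer is allocated and zeroed elsewhere):

typedef struct {
    double *sum;  /* running sum NY, one slot per sample */
    int     len;
    int     n;    /* number of signals mixed so far      */
} Mix;

void mix_add(Mix *m, const float *x)
{
    for (int i = 0; i < m->len; i++)
        m->sum[i] += x[i];  /* NY = X_1 + ... + X_N, no division */
    m->n++;
}

void mix_retrieve(const Mix *m, float *out)
{
    for (int i = 0; i < m->len; i++)
        out[i] = (float)(m->sum[i] / m->n);  /* divide only here */
}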

On Dec 10, 2012, at 12:31 AM, Ross Bencina wrote:

> On 10/12/2012 1:47 PM, Bjorn Roche wrote:
>>>> There is something called "double double" which is a software 128
>>>> bit floating point type that maybe isn't too expensive.
>>>> 
>> "long double", I believe
> 
> No. "long double" afaik usually means extended precision, as supported in 
> hardware by the x86 FPU, and is 80 bits wide. It is not supported by some 
> compilers at all (eg MSVC) since they tend to target SSE rather than x86 FPU.
> 
> double double is something else:
> 
> http://en.wikipedia.org/wiki/Double-double_arithmetic#Double-double_arithmetic

Ah, good to know. Thanks!

bjorn

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Bjorn Roche
On Dec 9, 2012, at 8:18 PM, Alessandro Saccoia wrote:

> That is really interesting, but I can't see how to apply the Kahan's 
> algorithm to a set of signals. 
> In my original question, I was thinkin of mixing signals of arbitrary sizes.
> I could relax this requirement, and forcing all the signals to be of a given 
> size, but I can't see how a sample by sample summation, where there are M 
> sums (M the forced length of the signals) could profit from a running 
> compensation.
> Also, with a non linear operation, I fear of introducing discontinuities that 
> could sound even worse than the white noise I expect using the simple 
> approach..

Brad's suggestions are good, but it makes me wonder: if you have so many inputs 
that the numerical error of a naive approach starts to matter, what's the 
result? Are you really going to hear something after adding that many signals 
together? If they are independent of each other, then the result of so many 
signals will be noise. Otherwise, the signals must have something (significant) 
in common, or most of them must be zero most of the time. In the latter case, 
these algorithms won't make much difference (although I would still not 
recommend your original proposal).

I am a bit confused on another point: How are you getting your data: one sample 
at a time (one from each incoming channel) or one entire signal at a time? Or 
something else? If you have the whole signal stored in memory (or hard disk) 
you have the luxury of solving this however you like. If the data is coming in 
in real-time, then you know how many signals there are at any given time. If 
you want to add another signal to an existing sum, then just divide when you 
take the output.
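
For reference, should the compensation prove necessary after all, a per-slot
Kahan accumulator is only a few lines (sketch, my names):

typedef struct { float sum, c; } KahanAcc;

/* One compensation term per sample slot, carried across the M additions.
   Note: aggressive float optimizations (e.g. -ffast-math) can defeat this. */
void kahan_add(KahanAcc *acc, int len, const float *x)
{
    for (int i = 0; i < len; i++) {
        float y = x[i] - acc[i].c;        /* correct for the last error */
        float t = acc[i].sum + y;         /* low bits of y get lost     */
        acc[i].c = (t - acc[i].sum) - y;  /* recover the lost bits      */
        acc[i].sum = t;
    }
}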


On Dec 9, 2012, at 9:28 PM, robert bristow-johnson wrote:

> 
> um, a sorta dumb question is, if you know that all signals are mixed with 
> equal weight, then why not just sum the fixed-point values into a big long 
> word?  if you're doing this in C or C++, the type "long long" is, i believe, 
> 64 bits.  i cannot believe that your sum needs any more than that.  is N 
> greater than 2^32?  are your original samples more than 32 bits?

In modern C/C++, if you need a type guaranteed to hold 64 bits, include 
stdint.h (cstdint in C++) and use int64_t.

On Dec 9, 2012, at 9:39 PM, Ross Bencina wrote:

> There is something called "double double" which is a software 128 bit 
> floating point type that maybe isn't too expensive.
> 

"long double", I believe

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-09 Thread Bjorn Roche

On Dec 9, 2012, at 2:33 PM, Alessandro Saccoia wrote:

> Hi list, 
> given a large number of signals (N > 1000), I wonder what happens when adding 
> them with a running sum Y.
> 
> Y = (1/N) * X + ((N-1)/N) * Y
> 

Yes, your intuition is correct: this is not a good way to go, although how bad 
it is depends on your datatype. All I can say here is that this formula will 
result in N roundoff errors for one of your signals, N-1 for another, and so on.

You might *need* to use this formula if you don't know N in advance, but after 
processing the first sample, you will know N, (right?) so I don't see how that 
can happen.


When you do know N in advance, it would be better to:

Y = (1/N) * (X_1 + X_2 + ... + X_N)

or

Y = (1/N)*X_1 + (1/N)*X_2 + ... + (1/N)*X_N

Exactly which is better depends on your datatype (fixed vs floating point, 
etc). If you are concerned about overflow, the latter is better. For 
performance, the former is better. For precision, without thinking too 
carefully about it I would think the former is better, but, obviously, not in 
the presence of overflow.

bjorn

> Given the limited precision, intuitively something bad will happen for a 
> large N.
> Is there a better method than the trivial scale and sum to minimize the 
> effects of the loss of precision?
> If I reduce the bandwidth of the inputs signals in advance, do I have any 
> chance of minimizing this (possible) artifacts?
> Thank you
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stuck with filter design

2012-11-19 Thread Bjorn Roche

On Nov 19, 2012, at 4:52 AM, Chris Cannam wrote:

> On 18 November 2012 22:24, Bjorn Roche  wrote:
>> Great. I guess that means LADSPA does not use the usual [-1,1] range.
> 
> LADSPA doesn't enforce anything -- it's really up to the host. But the
> spec in header does say "For audio it is generally assumed that 1.0f
> is the `0dB' reference amplitude and is a `normal' signal level."


Interesting. Thanks. Shashank, maybe you can figure out why your code was 
hitting values so much higher than that. there may be some sort of scaling 
going on somewhere in code you didn't show us.

bjorn

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stuck with filter design

2012-11-18 Thread Bjorn Roche
On Nov 18, 2012, at 2:33 PM, Shashank Kumar (shanxS) wrote:

> @ Bjorn:
> 
> Yes, you are right. What I thought was scaling is actually clipping. I
> removed it and it worked.
> Here is the o/p: 
> http://trystwithdsp.wordpress.com/2012/11/19/basic-lpf-part-2/

Great. I guess that means LADSPA does not use the usual [-1,1] range. There's 
nothing really wrong with that; I used to use the 16-bit integer range in 
some of my software even for floating point, but that's an unusual choice.

> And I have mentioned you and linked that to your blog. If you want me
> to link it to some other page let me know. I'll be more than happy to
> do it. Thanks :)

Glad to help.

> I have one more question:
> Why so many people use analog prototypes to get a digital filter ? Why
> not just put a few constraints on location of poles/zeros on Z plane
> and get done with it ?


This is a really great question. One answer is that analog filter design is a 
highly developed art, and therefore serves as an excellent starting point. 
Anecdote: I know one guy who's a DSP genius. He sent me some designs and told 
me he did them "directly in the digital domain". He must've forgotten that his 
starting point was a set of prototypes that came, ultimately, from analog 
filters. Analog prototypes make great digital filters if you take care with a 
few things.

All that said, some folks have started to think along the same lines as you. 
After all 1. there may be unique digital solutions (and I'm not just talking 
about FIR filters), and 2. you should be able to learn digital filter design 
without also having to learn analog filter design. To that end, here is one 
interesting paper: 
http://www.elec.qmul.ac.uk/people/josh/documents/Reiss-2011-TASLP-ParametricEqualisers.pdf

As a side note, there are, indeed, problems associated with filters designed 
with an analog prototype. For example, let's say you design a bell filter in 
the analog domain, and map it into the digital domain with a sample rate of, 
say, 44100 Hz. Let's further assume that the analog prototype's gain at the 
Nyquist frequency (22050 Hz) is 1 dB, so it still boosts 20 kHz by roughly 
1 dB. When you use the bilinear transform, though, the resulting digital 
filter will have a gain at the Nyquist frequency of 0 dB, because the 
transform squeezes the entire analog frequency axis into the range below 
Nyquist. This is usually not a serious problem, and often not a problem at 
all. However, if you are designing a parametric EQ, the difference will be 
noticeable at extreme settings. You could say this is one reason to do audio 
production at sample rates > 44100 Hz, but there are arguments both ways 
there as well.

bjorn


-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stuck with filter design

2012-11-18 Thread Bjorn Roche

On Nov 18, 2012, at 1:23 AM, robert bristow-johnson wrote:

> On 11/18/12 12:00 AM, Ross Bencina wrote:

>> You might find these resources helpful:
>> 
>> A. RBJ's EQ cookbook has equations for the filter you want:
>> 
>> http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
>> 

I've found from following stack overflow that many programmers can't get from 
this cookbook of instructions to actually writing the code for a filter without 
a little help, so I wrote a blog post that should help a bit:

http://blog.bjornroche.com/2012/08/basic-audio-eqs.html

I mention in the beginning that second order filters such as those described by 
RBJ are the workhorse of audio and first order filters often don't do much or 
aren't used too often. The original filter that shashank kumar brought up in 
this thread was first order.
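
To give a flavor of what the blog post walks through, here's one cookbook
case, the peaking EQ, turned into coefficient code (the formulas are straight
from the cookbook, normalized by a0; the code packaging is mine):

#include <math.h>

void peaking_coeffs(double fs, double f0, double Q, double dbGain,
                    double b[3], double a[3])
{
    double A     = pow(10.0, dbGain / 40.0);
    double w0    = 2.0 * M_PI * f0 / fs;
    double alpha = sin(w0) / (2.0 * Q);
    double a0    = 1.0 + alpha / A;       /* normalize everything by a0 */
    b[0] = (1.0 + alpha * A) / a0;
    b[1] = (-2.0 * cos(w0)) / a0;
    b[2] = (1.0 - alpha * A) / a0;
    a[0] = 1.0;
    a[1] = (-2.0 * cos(w0)) / a0;
    a[2] = (1.0 - alpha / A) / a0;
}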

bjorn

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] stuck with filter design

2012-11-17 Thread Bjorn Roche
I don't have all the answers for you, but I have some comments after taking a 
real quick look at your blog posts:


Sonically, the results I hear sound like they are coming from something 
non-linear. Looking at the code you posted, the only non-linear thing I see is:

 // scaling the output
 if (out > 17000)
 out = 17000;
 if (out < -15000)
 out = -15000;

Two points:

1. this isn't "scaling" it's clipping. scaling is linear, clipping is 
non-linear.
2. I wouldn't think this is the issue since usually floating point audio is in 
the [-1,1] range, so you should never get to -15000/17000, but maybe LADSPA is 
different. But if that's the case, what's it doing there? Something is fishy 
here, anyway.

Another 2 points:

1. In your code here:
 // convolution
 out = a0*sig[i] + a1*hz1 + a2*hz2 - b1*hp1 - b2*hp2;
that comment shouldn't read "convolution". It's really a difference equation. I 
know you got this code from elsewhere, but I thought I'd point that out.

2. To gain a bit more intuition of what's going on, take your values:
a0 = .5; a1 = .5; a2 = 0; b1 = 0; b2 = 0;
and plug them in and simplify. With all those zeros, you will see what's going 
on in another way:
out = a0*sig[i] + a1*hz1 + a2*hz2 - b1*hp1 - b2*hp2;
becomes
out = .5*sig[i] + .5*hz1 ;

Since hz1 is the last input, you have a moving average filter: each output 
represents the average of the current input and the prior input. This will 
reinforce low frequencies, which change slowly, and cancel out high 
frequencies, which change quickly. From that alone I can tell this is a 
low-pass filter. It's hard to get that kind of intuition for more complex 
filters, but in this case it works, and should help to show you that what 
you are doing with poles and zeros is probably right (I didn't go through it 
carefully, but it did look right).
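
If you want to check that intuition numerically, this tiny sketch prints the
magnitude response of that two-tap average, |H(w)| = |0.5 + 0.5*e^(-jw)| =
|cos(w/2)|, which falls from 1 at DC to 0 at Nyquist:

#include <stdio.h>
#include <math.h>

int main(void)
{
    for (double f = 0.0; f <= 0.5; f += 0.1) {  /* f in cycles/sample */
        double w = 2.0 * M_PI * f;
        printf("f=%.1f  |H|=%.3f\n", f, fabs(cos(w / 2.0)));
    }
    return 0;
}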

bjorn

 

On Nov 17, 2012, at 4:24 AM, Shashank Kumar (shanxS) wrote:

> Hey everyone!
> 
> I am a self taught Linux fanatic who is trying to teach himself Sound
> Processing.
> 



-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] OT: Sound Level Meter w/ RS-232/USB/WiFi Output

2012-08-28 Thread Bjorn Roche
I believe the newer B&K sound level meters will do this:

http://www.bksv.com/Products/handheld-instruments/sound-level-meters.aspx

bjorn

On Aug 28, 2012, at 10:58 AM, Kenneth Jacker wrote:

> [ Apologies for this only tangentially related email ... ]
> 
> Does anyone know where I can obtain a "sound meter" (output in dB) whose
> current reading can be obtained by "polling" the device via a RS-232,
> USB, or WiFi connector/port?
> 
> Thanks for any info/ideas!
> 
>  -Kenneth
> -- 
> Prof Kenneth H Jacker   k...@cs.appstate.edu
> Computer Science Dept   www.cs.appstate.edu/~khj
> Appalachian State Univ
> Boone, NC  28608  USA
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] FFT tuner app with source code

2012-07-23 Thread Bjorn Roche
Hey all,

I recently took a moment to write an FFT-based command-line tuner app. It is 
sub-optimal, but it works well enough that I can tune my guitar with it and the 
code is short and somewhat readable so it's good for demonstration.

I chose to write it using the FFT rather than something like YIN because I 
noticed a lot of folks on Stack Overflow ask about pitch detection using the 
FFT. I think they could understand this better than YIN, though if I have time 
I'd love to implement YIN as well.

It compiles and runs on OS X, and should be easy to port.

You can find the source code here: https://github.com/bejayoharen/guitartuner

And a description of how it works here: 
http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html

Corrections/patches appreciated.

bjorn

-----
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] .wav file format conversion tools

2012-06-08 Thread Bjorn Roche
libsndfile comes with a command-line utility for file format conversion. I 
think it would do the trick, but I also think it would have the problem of 
non-transparent int->float->int conversion that libsndfile has.
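
If rolling your own instead, the core of such a converter is only a few
lines with libsndfile. A minimal sketch (WAVEX container with the PCM-16
subtype; minimal error handling; it assumes mono or stereo so the 4096-item
buffer stays frame-aligned):

#include <sndfile.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s in.wav out.wav\n", argv[0]);
        return 1;
    }
    SF_INFO info = {0};
    SNDFILE *in = sf_open(argv[1], SFM_READ, &info);
    if (!in) { fprintf(stderr, "can't open %s\n", argv[1]); return 1; }
    info.format = SF_FORMAT_WAVEX | SF_FORMAT_PCM_16;  /* new subtype */
    SNDFILE *out = sf_open(argv[2], SFM_WRITE, &info);
    if (!out) { fprintf(stderr, "can't open %s\n", argv[2]); return 1; }
    float buf[4096];
    sf_count_t n;
    while ((n = sf_read_float(in, buf, 4096)) > 0)
        sf_write_float(out, buf, n);  /* libsndfile converts to PCM-16 */
    sf_close(in);
    sf_close(out);
    return 0;
}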

bjorn

On Jun 8, 2012, at 5:16 PM, Andy Farnell wrote:

> 
> You could write this in a jiffy with libsndfile or something right.
> 
> Andy
> 
> 
> On Fri, Jun 08, 2012 at 01:22:59PM -0700, Linda Seltzer wrote:
>> Does anyone know of a tool that converts a .wav file into wave extensible
>> format with the PCM subtype rather than the IEEE floating point subtype?
>> 
>> Linda Seltzer
>> lselt...@alumni.caltech.edu
>> 
>> --
>> dupswapdrop -- the music-dsp mailing list and website:
>> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
>> links
>> http://music.columbia.edu/cmc/music-dsp
>> http://music.columbia.edu/mailman/listinfo/music-dsp
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Moogle

2012-05-23 Thread Bjorn Roche
there was a bit more about it here:

http://www.html5audio.org/2012/05/new-google-doodle-uses-web-audio-api.html
http://www.html5audio.org/2012/05/googles-moog-doodle-falls-back-to-flash.html

I find it very frustrating trying to get audio working on the web now that I 
can't count on flash or java support on all screens.

bjorn

On May 23, 2012, at 1:49 AM, Nigel Redmon wrote:

> The last Google Doodle (the non-sine) made so much noise here, I'm anxious to 
> read what people have to say about the Moog tribute. I had done a search form 
> Safari's search field, so I wasn't met with the full (sort of) "Mini", but 
> just an icon in the upper left of the search window. I'm thinking...if this 
> is because I search for a lot of synth stuff, I'm going to be 
> creeped-out...but a pleasant surprise—the coolest thing since the Les Paul 
> tribute...
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-05 Thread Bjorn Roche

On Apr 5, 2012, at 4:53 AM, Ross Bencina wrote:

> Hey Bjorn,
> 
> On 5/04/2012 1:52 AM, Bjorn Roche wrote:
>> Any thoughts about modernizing the whole thing with a fresh CMS? I
>> think it would be easier to maintain, have built-in spam filters, and
>> it would be easier to have multiple people do the work. Plus it would
>> look more attractive. I don't think it would take much effort to redo
>> the whole thing in, say, drupal.
> 
> Have you ever set up a Drupal site? I have. It is not for small-time, 
> non-commercial, low-maintenance overhead projects imho.

Yes. Quite a few.

> Imho it would be a huge job to port the current site to Drupal and there is a 
> lot of ongoing maintenance required to keep security patches up to date etc 
> etc.

Yes. The biggest problem is security updates. You are right: major PITA factor. 
This can be mitigated by a hosted solution, or a multi-site install where 
someone is already monitoring the site for security updates. But, at the end of 
the day, that might not be realistic.

> Doing the theme port alone would be a lot of work.

I would not dream of porting the existing theme, but rather use a new, or 
built-in theme.

> Unless I'm completely out of touch it is really non-trivial to set up 
> something like musicdsp.org in Drupal with adequate spam filtering. The 
> standard Drupal capcha solution (Mollom) is not great -- in my experience it 
> flags a lot of false positives (spam that isn't spam).

Mollom sucks. Captchas alone catch the vast majority of spam. The rest can be 
handled with moderation.

> Anyway, this is really just a vote against Drupal for musicdsp.org, not 
> against using a CMS.
> 
> I actually think the current ad-hoc php solution is not so bad -- but Bram 
> knows more about these things than me.

Recaptcha could be added to the existing site with fairly little effort, but 
there are other advantages to a CMS: they are easier to team-manage, organize, 
and they have a number of potentially useful features like taxonomies (giving 
the ability to tag and categorize algos by language and purpose for example.)

bjorn

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-04 Thread Bjorn Roche
Hey Bram,

Any thoughts about modernizing the whole thing with a fresh CMS? I think it 
would be easier to maintain, have built-in spam filters, and it would be easier 
to have multiple people do the work. Plus it would look more attractive. I 
don't think it would take much effort to redo the whole thing in, say, drupal. 
Some of the data could be moved from its current form to CMS via a script, and 
other might have to be manually copied, which would be a bummer, but this might 
be a good time to purge old/irrelevant stuff. It's not clear to me how much 
info there is.

Just a thought. I'm not volunteering to do all the work, but I am pretty 
familiar with drupal and happy to get things started, depending on my schedule 
(right now it's a bit uncertain).

bjorn

On Apr 4, 2012, at 11:17 AM, Thomas Young wrote:

> lol wow
> 
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bram de Jong
> Sent: 04 April 2012 16:15
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] maintaining musicdsp.org
> 
> On Wed, Apr 4, 2012 at 5:11 PM, Thomas Young  
> wrote:
>> Maybe submissions should be added to a moderation queue rather than added 
>> directly (i.e. they need to be manually whitelisted). I don't think a super 
>> quick turnaround on new algorithm submissions is really important for 
>> something like musicdsp.org.
> 
> they ARE added to a queue.
> the queue now contains about 500 spam submissions.
> that's the whole (current) problem.
> 
> some kind of "report as spam" thing for the comments would be nice too as 
> there are SOME (but few) spam comments.
> 
> - bram
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] list postings

2012-02-25 Thread Bjorn Roche

On Feb 25, 2012, at 2:21 PM, douglas repetto wrote:

> 
> And btw, you should receive a message from the list software with links to 
> the list FAQs, which detail various reasons why your messages might not make 
> it through.
> 
> http://music.columbia.edu/cmc/music-dsp/musicdspFAQ.admin.html

Nothing there about rtf, just html. It also says I should have received a 
bounce. Perhaps that should be clarified.


> Reading rich text email is email client dependent. Modern clients look at the 
> headers and interpret the email accordingly.
> 
> I guess that's another argument against allowing non-plaintext -- it makes it 
> really difficult to read the list in old school email clients that have no 
> rich text parsing.


I used to use mutt and pine and I never had a problem with this. Even if they 
can't parse the formatted text, most clients send a plain text version along 
with the formatted version as alternative views, but maybe things have changed.

bjorn

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] list postings

2012-02-25 Thread Bjorn Roche
I've never had an email get through to this list, and I've never gotten a 
rejection notice, which is sad b/c once or twice I've actually had something 
constructive to say. (On two occasions, I've just emailed the original posters.) 
With this email, I am explicitly setting the format to plain, as suggested, and 
we'll see.

It's annoying because gmail filters out sent mail from received mail, so it 
doesn't show you list mail that you sent, and since I don't get a bounce for 
some reason, there's no way to know the email is missed! The best I can do is 
check the archives after its sent :-P

bjorn

On Feb 25, 2012, at 2:07 PM, douglas repetto wrote:

> 
> It may be that Apple is adding something to the header indicating rich 
> text/html even though you don't end up with offending characters in the 
> email. The list software rejects email based on the headers, not on the 
> actual content.
> 
> There's no fundamental reason why the list can't accept html mail, btw. So if 
> people really want it we can make a change. In the past it's been about spam 
> control and saving bandwidth, but those issues aren't such big concerns 
> anymore, I think. Although I personally find that reading a list like this in 
> different fonts/colors/styles can be unpleasant.
> 
> douglas
> 
> On 2/25/12 2:02 PM, Nigel Redmon wrote:
>> I've had problems in the past when html-style font tags make their
>> way into the email. For instance, this happens in Apple's Mail.app.
>> Even though it's not an html email, per se, they sometimes get
>> rejected (but not always). If I do Make Plain Text from the Format
>> menu before sedning, then they always get through—there is no visible
>> change to the email either (because they are just some default font
>> tags—I'm not really formatting the text).
>> 
>> 
>> On Feb 25, 2012, at 10:38 AM, Brad Garton wrote:
>>> Hey music-dsp-ers --
>>> 
>>> Has anyone else experienced troubles getting posts to show up on
>>> our list?  I've sent (and re-sent) several this morning and they
>>> just vanished.  I've checked with douglas about it, but was
>>> wondering if anyone else has had problems.
>>> 
>>> brad http://music.columbia.edu/~brad
>> 
>> -- dupswapdrop -- the music-dsp mailing list and website:
>> subscription info, FAQ, source code archive, list archive, book
>> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
>> http://music.columbia.edu/mailman/listinfo/music-dsp
>> 
> 
> -- 
> ... http://artbots.org
> .douglas.irving http://dorkbot.org
> .. http://music.columbia.edu/cmc/music-dsp
> ...repetto. http://music.columbia.edu/organism
> ....... http://music.columbia.edu/~douglas
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

-
Bjorn Roche
http://www.xonami.com
Audio Collaboration
http://blog.bjornroche.com




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp