[music-dsp] Errata in The Art of VA Filter Design 2.1.0

2019-01-26 Thread Giulio Moro
Hi there,
I really appreciate your VA book. I am reading version 2.1.0 and I think I
spotted an error: on page 93, the text reads:

"In this respect consider that Fig. 3.12 is trying to explicitly emulate the 
analog integration behavior, preserving the topology of the original analog 
structure, while Fig. 3.34 is concerned solely with implementing a correct 
transfer function. Since Fig. 3.34 implements a classical approach to the 
bilinear transform application for digital filter design (which ignores the 
filter topology) we’ll refer to the trapezoidal integration replacement 
technique as the topology-preserving bilinear transform (or, shortly, TPBLT)."

I *think* it should read "Since Fig. 3.12 implements ..." instead of 3.34.

Am I right? If I am not, then I guess I did not understand much about TPT :)
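(For anyone reading along without the book: the TPT idea is to replace each
analog integrator with a trapezoidal digital one and solve the resulting
zero-delay feedback equation. A minimal one-pole lowpass sketch in C - my own
illustration, not code from the book - where g = tan(pi*fc/fs) is the
prewarped cutoff:)

    // TPT ("zero-delay feedback") one-pole lowpass; s is the integrator state.
    // g = tanf(M_PI * fc / fs) is the prewarped cutoff parameter.
    float tpt_lp1(float in, float g, float *s)
    {
        float v  = (in - *s) * g / (1.0f + g); // solve the implicit feedback loop
        float lp = v + *s;                     // integrator output
        *s = lp + v;                           // trapezoidal state update
        return lp;
    }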

PS: any chance there could be a permalink to the most recent version of the
book? It seems I have to go back through the archives of music-dsp to find
the most recent link (or manually guess a URL for version 2.1.1). Also, is
the book linked from anywhere on the NI website?

Thanks,
Giulio



On Thursday, 1 November 2018, 08:26:35 GMT, Vadim Zavalishin wrote:

On 31-Oct-18 18:19, Stefan Stenzel wrote:
> Vadim,
> 
> I was more referring to the analog multimode filter based on the Moog cascade
> I did some years ago, and found it amusing to find a warning against it.

Ah, you mean the one at the beginning of Section 5.5? Well, that's an 
artifact of the older revision 1, where the ladder filter was introduced 
before the SVF (I still believe it's better didactically, unfortunately 
new material dependencies made me switch the order). The modal mixtures 
of the transistor ladder are asymmetric (HP is not symmetric to LP and 
has the resonance peak kind of "in the middle of its slope" and BP is 
not symmetric on its own). I felt that it might be confusing for a
beginner if their first encounter with resonating HP and BP is with these
special-looking filters, hence the warning. With revision 2 this
warning becomes less important, since the 2-pole LP and BP were 
discussed already before, but I still believe it's informative. After 
all, it doesn't say that these filters are bad, it says that they are 
special ;)

> 
> Anyway, excellent writeup,

Thank you! I'm glad my book is appreciated not only by newbies, but also 
by the industry experts.


> I wish I could have it printed as a proper book for more relaxed reading.

Hmmm, 500 A4 pages would be rather heavy ;)


Vadim

-- 
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] WAV player instrument recommendation

2018-06-14 Thread Giulio Moro
Pure Data?
puredata.info

On Thursday, 14 June 2018, 17:08:30 BST, Dave Carpenter wrote:

Can anyone recommend some simple software available that would allow me to
attach a MIDI keyboard controller to my Windows PC and play individual .wav
files that I provide for each note? Like C4.wav, C#4.wav, D4.wav, D#4.wav,
etc., or similar. I'm thinking of a virtual instrument, VST, or similar, but it
doesn't have to be. I need the player to instantly recognize updated .wav
files. They will be generated by my external program. I need to be able to
tweak some parameters, re-generate the .wav files, and play and hear the
results quickly without having to go through a lot of extra steps for each
revision. This is the requirement that makes it hard to find a solution.
Thanks for your ideas.  --Dave



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] denormals on ARM processors ?

2017-03-10 Thread Giulio Moro
The functions here
https://github.com/BelaPlatform/Bela/blob/master/core/math_runfast.c
allow you to toggle the "fast" mode of the ARM VFP (making it non-IEEE-754
compliant). If you are using the NEON SIMD unit (you should!) then it always
flushes to zero, which is one of the reasons why NEON is non-compliant:
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0473m/dom1359731193887.html
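(For reference, a minimal sketch of the kind of thing math_runfast.c does on
ARMv7 - setting the FZ (flush-to-zero) bit, bit 24, of the VFP FPSCR register;
do verify the bit layout against the ARM documentation for your core:)

    #include <stdint.h>

    // Enable flush-to-zero on ARMv7 VFP by setting the FZ bit (bit 24)
    // of FPSCR; denormal results are then flushed to zero.
    static inline void enable_flush_to_zero(void)
    {
        uint32_t fpscr;
        __asm__ volatile("vmrs %0, fpscr" : "=r"(fpscr));
        fpscr |= (1u << 24);
        __asm__ volatile("vmsr fpscr, %0" : : "r"(fpscr));
    }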


From: Philippe Wicker
To: music-dsp@music.columbia.edu
Sent: Friday, 10 March 2017, 9:00
Subject: Re: [music-dsp] denormals on ARM processors ?
No code but some tips here:
https://developer.arm.com/docs/dui0801/latest/advanced-simd-programming/when-to-use-flush-to-zero-mode-in-advanced-simd

On 10 Mar 2017, at 09:51, Stéphane Letz  wrote:
Hi,

Following the discussion on "IIR filter efficiency" about denormals, what is
the situation regarding denormals on ARM processors? Is there an equivalent
of the SSE _mm_setcsr stuff to force the process to switch to FTZ mode? Any
code to share to do the same on ARM?

Thanks.

Stéphane Letz
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] advice regarding USB oscilloscope

2017-03-07 Thread Giulio Moro
Perhaps only tangentially related, but Bela - of which I am one of the
developers - comes with a browser-based oscilloscope as part of its IDE. This
allows you to visualize both analog inputs and internally, digitally-generated
signals, e.g.:
https://www.youtube.com/watch?v=o9kLZ--js1k
https://www.youtube.com/watch?v=AoP7rPAMpvk
It also has an FFT mode (not shown in the videos above).
It is fully programmable in C++, Pure Data, SuperCollider and Pyo (that is,
Python!).
16-bit inputs and outputs:
* 2 I/O are AC-coupled, sigma-delta (audio), at 44.1kHz
* DC-coupled SAR ADCs. Input voltage: 0-4.096V. You can have either 8 at
  22.05kHz, 4 at 44.1kHz or 2 at 88.2kHz.
* DC-coupled string DACs. Output voltage: 0-5V. You can have either 8 at
  22.05kHz, 4 at 44.1kHz or 2 at 88.2kHz.
Again, it may not be the most suitable for your application, but it is fully
programmable. Scaling of 20V down to 4.096V can be done either through a
passive resistor voltage divider or through active circuitry (which would
require an external power supply).
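(As a worked example with hypothetical values: R1 = 39k on top and R2 = 10k to
ground gives a division ratio of 10/(39+10) ≈ 0.204, so 20V maps to about
4.08V, just inside the 4.096V input range.)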
Best,
Giulio

Giulio Moro
PhD researcher
Centre For Digital Music (C4DM)
Queen Mary, University of London



From: Remy Muller
To: music-dsp@music.columbia.edu
Sent: Tuesday, 7 March 2017, 14:59
Subject: [music-dsp] advice regarding USB oscilloscope
Hi,

I'd like to invest in a USB oscilloscope.

The main purpose is analog data acquisition and instrumentation.
Since the signals of interest are audio, bandwidth is not really an issue;
most models seem to provide 20MHz or much more, and I'm mostly interested in
analog inputs, not logic ones.

Ideally I'd like to have:

- Mac, Windows and Linux support
- 4 channels or more
- 16-bit ADC
- up to 20V
- general purpose output generator*
- a scripting API (Python preferred)

* I have been told that most oscilloscopes have either no or limited 
output, and that I'd rather use a soundcard for generating dedicated 
test audio signals, synchronizing the oscilloscope acquisition using the 
soundcard's word-clock. However, not having to deal with multiple drivers
and clock synchronization would be more than welcome.

A friend of mine recommended the PicoScope, which seems well supported and
has a strong user community, but AFAIK no official Python support.

https://www.picotech.com/oscilloscope/5000/flexible-resolution-oscilloscope

I also found out about BitScope http://www.bitscope.com which looks more
oriented toward the casual hacker/maker, seems more open-ended and has
Python support; it's much cheaper too.

What about the traditional oscilloscope companies like Tektronix or Rigol?

Does anyone have experience with any of those, or any other reference to
recommend?


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread Giulio Moro
Yup, I was asking because I found that on the M4 it is difficult to do proper
overlap-add while keeping reasonably small block sizes for FFTs of useful
length, so I was just curious whether you had to implement threads or just
went for block sizes synchronous with the FFT step.
If I remember correctly I was using 2048 samples per block and a 512-sample
step, with good results.
Best,
Giulio

 
From: Eric Brombaugh <ebrombau...@cox.net>
To: music-dsp@music.columbia.edu
Sent: Friday, 16 September 2016, 19:50
Subject: Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect
Probably shouldn't reveal too much of the detail, but it likely comes as 
no surprise that the tradeoff between time and frequency resolution is 
critical in systems that have limited CPU horsepower. The FFTs in this 
case do run in the audio processing thread which is synchronous to the 
buffer processing rate.

I'd love to have a go at doing this kind of stuff on something like a 
SHARC where 16kpt FFTs are apparently easy to do at audio rates...

Eric

On 09/16/2016 11:15 AM, Giulio Moro wrote:
> Nice that it runs on the M4F, what FFT size, overlap and audio
> processing block size are you using? Are you running the FFT in a
> separate thread?
>
> Giulio
>
>
>    
>    From: Eric Brombaugh <ebrombau...@cox.net>
>    To: music-dsp@music.columbia.edu
>    Sent: Friday, 16 September 2016, 19:12
>    Subject: Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect
>
>    I coded up the spectral freeze core of the Audio Damage "Spectre"
>    module
>    in a similar way:
>
>    http://www.audiodamage.com/hardware/product.php?pid=ADM15
>
>    It's a basic phase vocoder with forward and inverse FFTs but we added
>    some fun little tweaks to shift, stretch and randomize the spectrum. It
>    runs nicely on an STM32F405 ARM Cortex M4F processor.

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread Giulio Moro
Nice that it runs on the M4F, what FFT size, overlap and audio processing block 
size are you using? Are you running the FFT in a separate thread?
Giulio

 
From: Eric Brombaugh <ebrombau...@cox.net>
To: music-dsp@music.columbia.edu
Sent: Friday, 16 September 2016, 19:12
Subject: Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect
I coded up the spectral freeze core of the Audio Damage "Spectre" module 
in a similar way:

http://www.audiodamage.com/hardware/product.php?pid=ADM15

It's a basic phase vocoder with forward and inverse FFTs but we added 
some fun little tweaks to shift, stretch and randomize the spectrum. It 
runs nicely on an STM32F405 ARM Cortex M4F processor.

Eric

On 09/16/2016 10:59 AM, Giulio Moro wrote:
> Hi,
> I actually implemented this a few years back using an FFT algorithm, I
> can dig out the code if you need it (it was a VST written using Juce and
> fftw, but there was no threading on the FFT if I remember correctly, so
> it is flawed as it is and requires running with large block sizes).
> I doubt a simple time-domain algorithm would work without obvious
> artefacts, as the periodicity is not guaranteed and the combined period
> could be very long anyhow.
>
> The FFT implementation I made was the text-book phase vocoder but I was
> doing the forward FFT only once at the beginning of a freeze, to
> "sample" the signal, then I would keep the vector of amplitudes constant
> while updating the phase.
>
> Best,
> Giulio
>
>
>    
>    From: Spencer Jackson <ssjackso...@gmail.com>
>    To: music-dsp@music.columbia.edu
>    Sent: Friday, 16 September 2016, 18:30
>    Subject: Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect
>
>    On Fri, Sep 16, 2016 at 11:24 AM, gm <g...@voxangelica.net> wrote:
>      > Did you consider a reverb or an FFT time stretch algorithm?
>      >
>
>    I haven't looked into an FFT algorithm. I'll have to read up on that,
>    but what do you mean by reverb? Would you feed the loop into a
>    reverb or apply some reverberant filter before looping?
>
>
>    Thanks,
>    _Spencer
>    ___
>    dupswapdrop: music-dsp mailing list
>    music-dsp@music.columbia.edu
>    https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect

2016-09-16 Thread Giulio Moro
Hi,
I actually implemented this a few years back using an FFT algorithm; I can
dig out the code if you need it (it was a VST written using Juce and fftw,
but there was no threading on the FFT if I remember correctly, so it is
flawed as it is and requires running with large block sizes).
I doubt a simple time-domain algorithm would work without obvious artefacts,
as the periodicity is not guaranteed and the combined period could be very
long anyhow.

The FFT implementation I made was the textbook phase vocoder, but I was doing
the forward FFT only once at the beginning of a freeze, to "sample" the
signal; then I would keep the vector of amplitudes constant while updating
the phase.
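(A minimal sketch of that resynthesis step - my reconstruction of the approach
described above, not the actual VST code; the FFT size and hop size are
assumed values:)

    #include <math.h>

    #define N   2048  // FFT size (assumed)
    #define HOP 512   // hop size (assumed)

    // One hop of "freeze" resynthesis: the magnitudes were captured once at
    // freeze time; each bin's phase advances by its expected per-hop
    // increment. An inverse FFT of (re, im) plus overlap-add follows.
    void freeze_hop(const float mag[N/2 + 1], float phase[N/2 + 1],
                    float re[N/2 + 1], float im[N/2 + 1])
    {
        for (int k = 0; k <= N/2; k++) {
            phase[k] = fmodf(phase[k] + 2.0f * (float)M_PI * k * HOP / N,
                             2.0f * (float)M_PI);
            re[k] = mag[k] * cosf(phase[k]);
            im[k] = mag[k] * sinf(phase[k]);
        }
    }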
Best,
Giulio

 
From: Spencer Jackson
To: music-dsp@music.columbia.edu
Sent: Friday, 16 September 2016, 18:30
Subject: Re: [music-dsp] Help with "Sound Retainer"/Sostenuto Effect
On Fri, Sep 16, 2016 at 11:24 AM, gm  wrote:
> Did you consider a reverb or an FFT time stretch algorithm?
>

I haven't looked into an FFT algorithm. I'll have to read up on that,
but what do you mean by reverb? Would you feed the loop into a
reverb or apply some reverberant filter before looping?

Thanks,
_Spencer
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Will Pirkle's "Designing Software Synthesizer Plug-Ins in C++"

2016-06-14 Thread Giulio Moro
No pun intended

> On 14 Jun 2016, at 23:59, James McCartney  wrote:
> 
> I also like this book.
> This diagram on page 173, is especially good:
> 
> http://i.imgur.com/lNiYzxJ.jpg
> 
> 
> 
>> On Tue, Jun 14, 2016 at 10:29 AM, David Lowenfels 
>>  wrote:
>> Hi, I just purchased Will Pirkle's textbook "Designing Software Synthesizer
>> Plug-Ins in C++"
>> and wanted to give a huge thumbs up. It demystifies so many state-of-the-art 
>> things about virtual analog, including filters (delay-free loops!), 
>> band-limited oscillators, envelope generators, modulation matrices, etc. And 
>> also goes into heavy detail on the ins and outs of AU and VST (and his own 
>> platform RAFX).
>> 
>> As a fledgling music-dsp coder, I really wish I’d had a practical manual 
>> such as this!
>> I was frustrated in university that I could write algorithms and DSP code 
>> but didn’t know how to package it into a plugin, similarly to how I could 
>> design crazy hardware/software on paper and breadboards but didn’t know how 
>> to make a PCB or surface-mount solder to put it in a box.
>> I also look forward to perusing his other book on Digital Audio effects.
>> 
>> -David
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
> 
> 
> 
> -- 
> --- james mccartney
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Android related audio group / mailing list?

2016-05-25 Thread Giulio Moro
Also, the only way to actually know if your compiler generates NEON code is
to... read the assembly!
Some common DSP tasks are implemented in the Ne10 library
http://projectne10.github.io/Ne10/doc/ which compiles to NEON on systems that
support it.
I have seen that nova_simd is good at generating NEON code, but I never used
it stand-alone (only as part of SuperCollider).
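(If you would rather not depend on the autovectorizer at all, NEON intrinsics
make the vectorization explicit. A minimal sketch, assuming an ARMv7 toolchain
where arm_neon.h is available:)

    #include <arm_neon.h>

    // Scale a buffer by a constant gain, four floats at a time. Intrinsics
    // guarantee NEON instructions regardless of what the autovectorizer does.
    void scale_buffer(float *buf, int n, float gain)
    {
        float32x4_t g = vdupq_n_f32(gain);
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            float32x4_t v = vld1q_f32(buf + i);
            vst1q_f32(buf + i, vmulq_f32(v, g));
        }
        for (; i < n; i++)  // scalar tail
            buf[i] *= gain;
    }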
Best,
Giulio

 
From: Nuno Santos
To: music-dsp@music.columbia.edu
Sent: Wednesday, 25 May 2016, 14:15
Subject: Re: [music-dsp] Android related audio group / mailing list?
I have been testing DRC on 2 different Android devices: a Nexus 9
(48000/128 buffer size) and a Bq Aquaris M5 phone (48000/912 buffer size).

The Nexus 9 is a beast. I was able to run DRC with full polyphony (8 voices)
without any kind of glitch. The fake touches hack was essential to make this
happen; without it I would hear glitches. The Bq Aquaris M5 has a very
similar processor to the Nexus 6P, and I couldn't have more than 2 voices
running without some glitches.

All the DSP code is C++. Reverb, Delay and Chorus represent half of the
processing effort; the rest is for active voices. From my experiments I
couldn't see any effect on performance from compiling the code with NEON
enabled. For example, on my Bq phone, for a 912 buffer size at 48000, my
processing code would take the following time with the following flags:

-Os: ~5ms
-O3: ~5ms
-Ofast: ~5ms
-Ofast -mfpu=neon: ~5ms

No significant changes in performance with different flags. What kind of
flags are you guys using for your Android apps?

I don't need to worry about this on iOS though. It simply works!
Regards,
Nuno Santos
Founder / CEO / CTO
www.imaginando.pt
+351 91 621 69 62

On 25 May 2016, at 12:24, Jean-Baptiste Thiebaut  wrote:
At JUCE / ROLI we've been working with Google for over a year to optimize
audio latency, throttling, etc. for cross-platform apps. Our app is also
featured in the YouTube video from Google I/O, and it runs on some devices
with performance comparable to iOS.

Whether you are using JUCE or not, you're welcome to post on our forum 
(forum.juce.com). 

Sent from my mobile


On 25 May 2016, at 11:57, grh  wrote:

Hello!

Thanks Nuno, it was a great demo ;)

Best,
Georg


On 2016-05-25 12:46, Nuno Santos wrote:
Hi George,

I would be interested in such a community as well, especially regarding
audio performance. We have recently released DRC (one of the apps that
has been featured on Google I/O Android High Performance Audio) and we
are mostly interested in squeezing performance out of it. The performance
differences between iOS and Android are incredible. The DSP code is shared
between both and I still have glitch problems on powerful Android devices.

One option could be creating a Slack channel.

Regards,

Nuno Santos
Founder / CEO / CTO
www.imaginando.pt 
+351 91 621 69 62


On 25 May 2016, at 11:36, grh wrote:

Hello music-dsp list!

Sorry for being off topic, but does anyone know an active discussion
group / mailing list about Android audio?
(There is quite a lot of progress lately, see for example [1])
5 years ago a list was announced here [2], which does not seem to be
active anymore...

We just created a simple Android audio editor [3] and would be very much
interested in a discussion of common infrastructure like audio
plugins/effects (like SAPA from Samsung) or copy/paste between audio apps.
I think that would be important for the audio ecosystem on Android.

Thanks for any hints,
Best,
Georg

[1]: https://www.youtube.com/watch?v=F2ZDp-eNrh4
[2]:
http://music-dsp.music.columbia.narkive.com/zbYgicxy/new-android-audio-developers-mailing-list
[3]:
https://play.google.com/store/apps/details?id=com.auphonic.auphonicrecorder

-- 
auphonic - audio post production software and web service
http://auphonic.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Android related audio group / mailing list?

2016-05-25 Thread Giulio Moro
Hello,
I watched the Google/Android video the other day and I was very surprised by
the basic recommendations that were given to developers:
- use the native sampling rate (don't hardcode the sampling rate, request it
  from the device)
- use native buffering (don't hardcode it; but then maybe use double or
  quadruple buffering in the app, because your CPU is not fast enough)
- compile for Release and enable NEON
- don't log at every audio callback
- don't lock/allocate in the audio callback
- use less than 20% of the CPU time

I mean, really? Is this what was lacking so far in Android "professional"
audio? These are the basics of real-time programming, as Ross teaches us
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
and of general programming (compiler optimization, Release mode).

The useful bits of information I got are:
- simulate screen taps every second so that the CPU is always kept at full
  clock with no energy saving >> surely a workaround; as the people speaking
  have worked on Android for the past three years, I am sure they could have
  supported this from the Android API
- the current target is to achieve 10ms roundtrip >> which means that not
  even the Nexus 6P does it at the moment
- a device for measuring roundtrip latency (using a Teensy and a custom app,
  22:48) https://github.com/google/walt

I was not very impressed by the quality of the video production overall, as
it features:
- a physician introduced as "someone that cares all about numbers"
- a calibre (22:17)
- a table of latency results which does not make any sense (23:12)

Best,
Giulio
 
From: grh
To: music-dsp@music.columbia.edu
Sent: Wednesday, 25 May 2016, 11:57
Subject: Re: [music-dsp] Android related audio group / mailing list?
Hello!

Thanks Nuno, it was a great demo ;)

Best,
Georg

On 2016-05-25 12:46, Nuno Santos wrote:
> Hi George,
> 
> I would be interested in such a community as well, especially regarding
> audio performance. We have recently released DRC (one of the apps that
> has been featured on Google I/O Android High Performance Audio) and we
> are mostly interested in squeezing performance out of it. The performance
> differences between iOS and Android are incredible. The DSP code is shared
> between both and I still have glitch problems on powerful Android devices.
> 
> One option could be creating a slack channel. 
> 
> Regards,
> 
> Nuno Santos
> Founder / CEO / CTO
> www.imaginando.pt 
> +351 91 621 69 62
> 
>> On 25 May 2016, at 11:36, grh wrote:
>>
>> Hello music-dsp list!
>>
>> Sorry for being off topic, but does anyone know an active discussion
>> group / mailing list about Android audio?
>> (There is quite a lot of progress lately, see for example [1])
>> 5 years ago a list was announced here [2], which does not seem to be
>> active anymore...
>>
>> We just created a simple Android audio editor [3] and would be very much
>> interested in a discussion of common infrastructure like audio
>> plugins/effects (like SAPA from Samsung) or copy/paste between audio apps.
>> I think that would be important for the audio ecosystem on Android.
>>
>> Thanks for any hints,
>> Best,
>> Georg
>>
>> [1]: https://www.youtube.com/watch?v=F2ZDp-eNrh4
>> [2]:
>> http://music-dsp.music.columbia.narkive.com/zbYgicxy/new-android-audio-developers-mailing-list
>> [3]:
>> https://play.google.com/store/apps/details?id=com.auphonic.auphonicrecorder
>>
>> -- 
>> auphonic - audio post production software and web service
>> http://auphonic.com
>>
>> ___
>> dupswapdrop: music-dsp mailing list
>> music-dsp@music.columbia.edu
>> https://lists.columbia.edu/mailman/listinfo/music-dsp
> 
> 
> 
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
> 


-- 
auphonic - audio post production software and web service
http://auphonic.com

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] *** GMX Spamverdacht *** Bela low-latency audio platform

2016-03-22 Thread Giulio Moro
Hi,
I am one of the developers. Sure, you can use your editor of choice. The
browser-based IDE is a convenient, ready-to-go solution which acts as a
front-end for the build scripts we provide, which build the code on the
board. The scripts can, alternatively, be invoked from the command line, or
we provide instructions to set up a cross-compiling environment using
Eclipse as an IDE. The browser-based IDE is currently the only way to
visualize the on-board oscilloscope.
Best,
Giulio



 
From: Johannes Kroll
To: music-dsp@music.columbia.edu
Sent: Tuesday, 22 March 2016, 23:46
Subject: Re: [music-dsp] *** GMX Spamverdacht *** Bela low-latency audio platform
On Tue, 22 Mar 2016 20:43:28 +0100
Andrew McPherson  wrote:

> Hi all,
> 
> I'd like to announce the upcoming release of Bela (http://bela.io), an 
> embedded audio/sensor platform based on the BeagleBone Black which features 
> extremely low latency (< 1ms from action to sound). 

Sounds interesting. The website says something about "browser-based IDE
built with Node.js". Can it be programmed normally in C or C++, using
whichever editor I prefer, without the browser-based stuff?
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to handle filter state?

2016-03-01 Thread Giulio Moro
Have a look at these:

G. Stoyanov and M. Kawamata, "Variable digital filters," J. Signal Processing,
vol. 1, no. 4, pp. 275–289, 1997.

J. Laroche, "On the stability of time-varying recursive filters," Journal of
the Audio Engineering Society, vol. 55, no. 6, pp. 460–471, 2007.

V. Välimäki and T.I. Laakso, "Suppression of transients in time-varying
recursive filters for audio signals," in Proceedings of the 1998 IEEE
International Conference on Acoustics, Speech and Signal Processing, May 1998,
vol. 6, pp. 3569–3572.

A. Wishnick, "Time-varying filters for musical applications," in Proceedings
of the 17th International Conference on Digital Audio Effects (DAFx-14), 2014,
pp. 69–76.
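(A practical takeaway from that literature, sketched below - my own
illustration, not code from any of those papers: in Direct Form 1 the state
variables are just past input and output samples, so they remain meaningful
when the coefficients change and usually need not be cleared; Laroche (2007)
discusses when time-varying recursion can still misbehave.)

    // Direct Form 1 biquad. The state (x1, x2, y1, y2) holds plain signal
    // samples, so it can be kept across a coefficient update; clearing it
    // to zero is what produces the click described in the question below.
    typedef struct {
        float b0, b1, b2, a1, a2;  // coefficients, a0 normalized to 1
        float x1, x2, y1, y2;      // past inputs and outputs
    } Biquad;

    float biquad_df1(Biquad *f, float x)
    {
        float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                            - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1; f->x1 = x;
        f->y2 = f->y1; f->y1 = y;
        return y;
    }

    void biquad_set_coeffs(Biquad *f, float b0, float b1, float b2,
                           float a1, float a2)
    {
        // State deliberately not cleared; see Laroche (2007) for caveats.
        f->b0 = b0; f->b1 = b1; f->b2 = b2; f->a1 = a1; f->a2 = a2;
    }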



 
  From: Paul Stoffregen 
 To: music-dsp@music.columbia.edu 
 Sent: Tuesday, 1 March 2016, 14:56
 Subject: [music-dsp] Changing Biquad filter coefficients on-the-fly, how to 
handle filter state?
   
Does anyone have any suggestions or publications or references to best 
practices for what to do with the state variables of a biquad filter 
when changing the coefficients?

For a bit of background, I implement a Biquad Direct Form 1 filter in 
this audio library.  It works well.

https://github.com/PaulStoffregen/Audio/blob/master/filter_biquad.cpp#L94

There's a function which allows the user to change the 5 coefficients.  
Lines 94 & 95 set the 4 filter state variables (which are 16 bits, 
packed into two 32 bit integers) to zero.  I did this clear-to-zero out 
of an abundance of caution, for concern (maybe paranoia) that a stable 
filter might do something unexpected or unstable if the 4 state 
variables are initialized with non-zero values.

The problem is people wish to change the coefficients in real time with 
as little audible artifact as possible between the old and new filter 
response.  Clearing the state to zero usually results in a very 
noticeable click or pop sound.

https://github.com/PaulStoffregen/Audio/issues/171

Am I just being overly paranoid by setting all 4 state variables to
zero?  If "bad things" could happen, are there any guidelines about how
to manage the filter state safely, but with as graceful a
transition as possible?



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] automation of parametric EQ .

2015-12-21 Thread Giulio Moro
This paper on the Audio Effects Ontology extension shows a possible direction 
towards unified parameters for a given class of effects (see, e.g.: sec 4.3 
Effect 
Parameters)http://ismir2013.ismir.net/wp-content/uploads/2013/09/41_Paper.pdf

Disclaimer: I work in the same research group as the authors.

 
From: Jim Wintermyre
To: r...@audioimagination.com; music-dsp@music.columbia.edu
Sent: Tuesday, 22 December 2015, 0:02
Subject: Re: [music-dsp] automation of parametric EQ .
> so then, in your session, you mix some kinda nice sound, save all of the 
> sliders in PT automation and then ask "What would this sound like if I used 
> iZ instead of McDSP?", can you or can you not apply that automation to 
> corresponding parameters of the other plugin?  i thought that you could.

Nope, sorry.  You could maybe manually massage the automation data, but AFAIK 
there’s no automatic support for this in any DAW.

Jim

On Dec 21, 2015, at 3:46 PM, robert bristow-johnson  
wrote:

> 
> thank you to Nigel, Thomas, Bjorn, and Steffan.
> 
> essentially you're telling me there is no existing standard of control number 
> assignment or of scaling and offset of that control.
> 
> regarding MIDI 1.0 (which is what goes into MIDI files), i had noticed that 
> there were some "predefined controls", like MIDI control 7 (and 39 for the 
> lower-order bits) for Volume.  i just thought there might have evolved a 
> common practice of some of the unassigned MIDI controls having a loose 
> assignment to tone or EQ parameters.  it would seem to me to be logical that 
> 0x40 would be 0 dB (dunno what the scaling would be, maybe 1/2 dB per step) 
> and for frequency, to use the same as MIDI NoteOn (and with the control LSB, 
> you could tune it to better than 1 cent precision).  i just would have 
> thought that by now, 30+ years later, that a common practice would have 
> evolved and something would have been published (and i could not find 
> anything).
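(Purely as an illustration of the convention rbj floats above - not any
published standard - a hypothetical mapping with 0x40 as 0 dB at 1/2 dB per
step, and frequency following the NoteOn numbering, A4 = note 69 = 440 Hz:)

    #include <math.h>

    // Hypothetical CC-to-parameter mappings, following the quoted suggestion.
    double cc_to_gain_db(int cc)   { return (cc - 0x40) * 0.5; }
    double note_to_hz(double note) { return 440.0 * pow(2.0, (note - 69.0) / 12.0); }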
> 
> regarding Pro Tools (which i do not own and haven't worked with since 2002 
> when i was at Wave Mechanics, now called SoundToys), please take a look at 
> this blog:
> 
> http://www.avidblogs.com/pro-tools-11-analog-console/
> 
> evidently, for a single channel strip, there is a volume slider, but no 
> "built-in" EQ, like in an analog board.  you're s'pose to insert EQ III or 
> something like that.
> 
> now in the avid blog, words like these are written: "... which of the 20 EQ 
> plug-ins should I use?... You can build an SSL, or a Neve, ..., Sonnox, 
> McDSP, iZotope, MetricHalo..."
> 
> so then, in your session, you mix some kinda nice sound, save all of the 
> sliders in PT automation and then ask "What would this sound like if I used 
> iZ instead of McDSP?", can you or can you not apply that automation to 
> corresponding parameters of the other plugin?  i thought that you could.
> 
> *if* that's the case, then there is the "Some have the SSL sound, some have 
> the Neve sound, etc..." and i am wondering if, when comparing the different 
> sounds, they are comparing apples to oranges.  if you are EQing something, 
> got the mix to sound "right" with one particular EQ plugin with say, EQ III, 
> then decide to A/B test it against iZ or something, is it the case that the 
> iZ EQ may sound different, simply because the automation sliders mapped to 
> the EQ parameters differently?
> 
> if that is the case, then, IMO, someone in some standards committee at NAMM 
> or AES or something should be pushing for standardization of some *known* 
> common parameters.
> 
> i am thinking of putting together a discussion panel (at the next U.S. AES in 
> LA in October) called "Q vs. Q" (name stolen from Geoff Martin with his 
> permission) where some folks that have been involved with the parametric EQ 
> from the beginning can discuss how they put the tick marks on the Q knob or 
> the BW knob.  (and then they can complain about the Cookbook Q being 
> unnatural or whatever.)  i have discussed with some pretty important folks 
> about this and have heard at least 3 different definitions (all different, in 
> some sense, from the standard EE definition of Q) and i read at Rane Notes 
> and other places about "Constant Q" vs. "Variable Q" vs. "Proportional Q" vs. 
> "Perfect Q".  WTF do all these terms mean???
> 
> whatever the definition, it should eventually translate to an unambiguous EE 
> Q for an analog EQ or a corresponding digital EQ at decently low frequencies 
> (this bandwidth cramping from bilinear transform is another issue to discuss).
> 
> this, on top of the generalization that Knud Bank Christensen did last decade 
> (which sorta supersedes the Orfanidis correction to the digital parametric 
> EQ), really nails the specification problem down:  whether it's analog or 
> digital, if it's 2nd-order (and not some kinda FIR EQ), then there are 5 
> knobs corresponding to 5 coefficients that *fully* define the 

Re: [music-dsp] Instant frequency recognition

2014-07-17 Thread Giulio Moro
> I guess my point is that I'm struggling to think of an application where
> such strong prior knowledge exists, and where we'd still need to estimate
> frequencies from data.

One such application would be a CV-to-controls (MIDI, OSC, whatever)
converter. As virtually all soundcard inputs are AC-coupled, it is often
impossible to plug the CV from your ancient analog modular into your
soundcard's input. What you can do is connect the output of your analog
oscillator (which is a periodic waveform of given amplitude and shape) to
the input of the soundcard, track its frequency, and use it to generate
controls for ... whatever: other oscillators, filters ...
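(To illustrate how far such a strong prior can take you - a minimal sketch,
assuming a single clean, noise-free sinusoid and a center sample away from
zero: for x[n] = A*cos(w*n + p), three consecutive samples satisfy
x[n-1] + x[n+1] = 2*cos(w)*x[n], so the frequency can be recovered from well
under one period:)

    #include <math.h>

    // Three-point frequency estimate for a single clean sinusoid. Valid only
    // under the strong prior discussed above: one noise-free sine, x0 != 0.
    double sine_freq_hz(double xm1, double x0, double xp1, double fs)
    {
        double c = (xm1 + xp1) / (2.0 * x0);  // = cos(w), w in radians/sample
        if (c > 1.0) c = 1.0;                 // clamp numerical error
        if (c < -1.0) c = -1.0;
        return acos(c) * fs / (2.0 * M_PI);   // radians/sample -> Hz
    }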

Giulio



From: Ethan Duni ethan.d...@gmail.com
To: A discussion list for music-related DSP music-dsp@music.columbia.edu
Sent: Friday, 18 July 2014, 2:35
Subject: Re: [music-dsp] Instant frequency recognition

Yeah, that's basically the chirp decomposition I was referring to earlier.
I.e., if you can write the signal as (for example) A*cos(f(t)), then you
can take the derivative of f(t) and call the resulting function the
instantaneous frequency, in a well-defined, meaningful way. But that only
works for a certain constrained class of signals. It runs into several
fundamental problems if you try to apply it to a generic signal. And even
if you solve those, you're in all cases using data segments much longer
than any of the fundamental periods in question, in order to do the
estimation.
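(Concretely: for x(t) = A*cos(phi(t)) the instantaneous frequency is
f(t) = phi'(t)/(2*pi); e.g. for a linear chirp phi(t) = 2*pi*(f0*t + (k/2)*t^2)
this gives f(t) = f0 + k*t.)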

But that's not what the OP in this thread was suggesting. The idea was to
do frequency estimation on arbitrarily short time segments:

 Particularly, if we get a mixture of
 several static sinusoidal signals, they all will be properly restored from
 an arbitrarily short fragment of the signal.

That is not the same thing as instantaneous frequency in the chirp sense.
The idea is to estimate a *fixed* frequency from a very short time
fragment.

The thing about this approach is that it requires very strong prior
knowledge of the signal structure - to the point of saying quite a lot
about how it behaves over all time - in order to work. I.e., if you have a
signal that you know is a sine wave with given amplitude and phase, you can
work out its frequency from a very short length of time. But that's only
because you have very strong prior knowledge that relates the behavior of
the signal in any short time period to the behavior of the signal over all
time.

I guess my point is that I'm struggling to think of an application where
such strong prior knowledge exists, and where we'd still need to estimate
frequencies from data.

E


On Thu, Jul 17, 2014 at 4:19 PM, zhiguang e zhang ericzh...@gmail.com
wrote:

  This post explains the concept of instantaneous frequency well (it is
  basically used to distinguish amplitude from phase):


 http://math.stackexchange.com/questions/85388/does-the-phrase-instantaneous-frequency-make-sense

 EZ
 On Jul 17, 2014, at 6:40 PM, Ethan Duni ethan.d...@gmail.com wrote:

  Sinc interpolation would be theoretically correct, but, remember,
  that this thread is not about strictly theoretically correct
  frequency recognition, but rather about some more intuitive version
  with the concept of instant frequency.
 
  What is instant frequency? I have to say that I find this concept to be
  highly counter-intuitive on its face. How can we speak meaningfully about
  frequency on time scales shorter than one period?
 
  Maybe we could attempt exactly fitting a set of samples into a sum
  of sines of different frequencies? Each sine corresponding to 3 degrees
  of freedom.
 
  Yeah, this is called sinusoidal modeling. But I don't see how it gives you
  any handle on instantaneous frequency. If you're operating on time scales
  shorter than the periods of the frequencies in question, then the basis
  functions you're using in sinusoidal modeling do not exhibit any meaningful
  periodicity, but instead look something like low-order polynomials. The
  frequency parameters you'd estimate would be meaningless as such - they'd
  jump around all over the place from frame to frame, depending on exactly
  how the frame being analyzed lined up with the basis functions. I.e.,
  they'd just be abstract parameters specifying some low-order-polynomial-ish
  shapes, and not indicating anything meaningful about periodicity.
 
  The only way I can see to speak meaningfully about instantaneous frequency
  is if you were to decompose a signal with some kind of chirp basis - on a
  time scale much longer than any particular period seen in the chirp basis.
  Then you could turn around and say that the frequency is evolving according
  to the chirp parameter, and talk about an instantaneous frequency at any
  particular time. But note that this requires doing the analysis on a rather
  *long* time scale, so you can be confident that the chirp structure you're
  finding actually corresponds to some real signal content.
 
  E
 
 
  On