Re: [music-dsp] the time it takes to design a reverberator and related

2020-05-22 Thread Andy Farnell
I'd say it's definitely an Art, notwithstanding the hard
engineering skills needed to accomplish it.

In any task, ask yourself where the skilled human work is located.
The work in this case is _listening_. A great reverb has to
work with a wide range of instruments, to flatter the 
different transients, sustained timbres and spectral
morphologies of performance. That can mean hours of tweaking
values and careful aesthetic evaluation.

Of course there are shortcuts and powerful tools to employ.
These days we can go from impulse responses to geometric 
topologies and then to parametric models ... but there's
still work in selecting rooms to sample, and selecting what
features of that space are of interest... that's hard aesthetic
work.

Following my colleague Nikolay Georgiev was an eye-opener:
the way he researches spaces, abandoned mines, remote
cathedrals in Eastern Europe... the work that goes into this
is comparable with that of a film location director, including a
detailed knowledge of materials science and architecture.
Not to mention the travel costs and hazardous recording 
adventures. And those impulses are just the raw materials 
for the algorithms he then uses to create plugins.

Sometimes you get lucky with a mathematical insight - like the
Fibonacci reverbs I stumbled upon many years ago... but
most of it is bloody hard work, comparable with any serious
sound design... so I would not under-rate it.

A problem that we face in the technological arts is that work
is devalued by "managerial types", for whom technology is
a kind of disposable magic. Being surrounded every day
by miraculous accomplishments built on the shoulders
of giants engenders a cavalier nonchalance towards skills
which are the products of decades of study and experience.
So-called "AI" is only going to make this worse.

As Martin says, the goals are everything... why bother to
create a reverb by careful design when you can shove a few
random prime numbers into a delay lattice and 99% of people
won't notice the difference?
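As a toy illustration of that shortcut (the target lengths and sample rate here are invented for illustration, not taken from any real design): round a handful of target delay lengths up to distinct primes, so the comb repetition periods are pairwise coprime and never line up.

```python
from math import gcd

def is_prime(n):
    """Trial-division primality test, fine for small delay lengths."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_delays(targets):
    """Round each target delay (in samples) up to the nearest unused prime."""
    out = []
    for t in targets:
        p = t
        while not is_prime(p) or p in out:
            p += 1
        out.append(p)
    return out

# Delay lengths roughly 30-45 ms at 44.1 kHz (illustrative values only).
delays = prime_delays([1323, 1499, 1687, 1801])
# Distinct primes are automatically pairwise coprime, so the comb
# repetition patterns share no common period.
assert all(gcd(a, b) == 1 for a in delays for b in delays if a != b)
```

Whether the result sounds good is, of course, exactly the aesthetic question at issue; this only buys a cheap kind of statistical smoothness.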

good health to you all,
Andy

On Fri, May 22, 2020 at 10:10:51AM +0200, Martin Lind wrote:
> The amount of development time for a reverb algorithm is very much dependent
> on the goals and target for that particular algorithm. If you want to make a
> high-end classic and start from scratch then it takes far more time than
> contract work.
> 
> VSS series from TC Electronic took 8 engineers almost 10 years to finish.
> Bricasti (Casey) uses at least 4-5+ years for a single algorithm.
> Lexicon (David Griesinger) worked 3 years on the HD algorithm.
> And so on.
> 
> The above is obviously flagship products and not contract work. And none of
> them use FDN in the traditional sense.
> 
> 
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of gm
> Sent: 21. maj 2020 23:04
> To: music-dsp@music.columbia.edu
> Subject: [music-dsp] the time it takes to design a reverberator and related
> 
> 
> I need some possibly quotable real-world opinions and experiences on how
> long stuff can take to design or develop, especially taking Hofstadter's
> Law into account.
> 
> For instance reverberators: hard to estimate, and I don't recall exactly
> all the time I spent. I tried so many things on different occasions, so
> long ago; improved things, made them worse again. But my estimate is that
> it takes many months of experience (at least) and experimenting to come
> to good and really good results.
> Especially if you start with FDNs first and waste a long time on them...
> If you have experience and start from scratch, it takes days or weeks to
> refine your design.
> 
> You may however have at some point developed prototypes that you can 
> reuse and modify and do not change too much any more.
> 
> Two years ago or so I posted a kind of non-paper here on "magic numbers
> in reverb design" where I claimed to have found a "perfect" ratio for
> allpass delay stage lengths. I could never decide if it's kind of nonsense
> or not, since the method gives quite good results, but I think I used
> other numbers afterwards myself, IIRC. I am not even sure at the moment...
> 
> Does anybody recall that paper, and did anybody ever try it and remember
> the results?
> Did it speed development up for you? Did it make any sense to you at
> all (it's written in a weird way)?
> 
> Would you call a good reverb algorithm a piece of art?
> 
> Since the process can take so erratically long, and since you can go back
> and forth many times,
> what do you think a reasonable time estimate would be? How much time
> would you charge for that reverb, reasonably?
> 
> How and when do you decide it's finished and that you don't change 
> parameters any more?
> 
> How many times and for how long did you try to make "the most efficient 
> reverberator you can get away with"?
> Did you ever succeed in that quest?
> 
> Do you think there is something like a "most reasonable" reverb design?
> 
> 

Re: [music-dsp] Auto-tune sounds like vocoder

2019-01-16 Thread Andy Farnell
On Tue, Jan 15, 2019 at 08:05:11PM +0100, David Reaves wrote:

> I’m wondering about why the ever-prevalent auto-tune effect in much 
> of today's (cough!) music (cough!) seems, to my ears, to have such 
> a vocoder-y sound to it. Are the two effects related?

So, I would say yes, they're related. Weakly. As Sampo says,
the method is essentially a grain-wise Fourier reconstruction.
Upshot is it sounds like a vocoder because it is the voice
'vocoded' with a pulse stream at close to the original fundamental
(but corrected). Additionally two other things enhance the
psychoacoustic impression that it's a classic vocoder. First
is the pitch quantisation, so when you glissando there's
a stepped effect that makes the banding stand out more. 
And second, as Ben says, some mixing of the dry and wet usually
produces a chorus/flanger effect on top.
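A minimal sketch of that pitch-quantisation step (an illustration, not the Antares code; the A440 reference and 12-TET semitone grid are assumptions):

```python
import math

def snap_to_semitone(f0, ref=440.0):
    """Quantise a detected fundamental (Hz) to the nearest 12-TET semitone."""
    if f0 <= 0:
        return f0
    semis = round(12 * math.log2(f0 / ref))
    return ref * 2 ** (semis / 12)

# A smooth upward glide in 10-cent steps becomes a staircase of discrete
# pitches - the "stepped" glissando that makes the banding stand out.
glide = [220.0 * 2 ** (i / 120) for i in range(40)]
stepped = [snap_to_semitone(f) for f in glide]
# Far fewer distinct output pitches than input steps: the audible terracing.
assert len(set(round(f, 6) for f in stepped)) < len(glide)
```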

Disclaimer: I have never seen the Antares source code so
could be guessing very wrongly, but that's what my ears think.

best,
Andy





signature.asc
Description: Digital signature
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Clock drift and compensation

2018-01-23 Thread Andy Farnell
On Tue, Jan 23, 2018 at 04:17:40PM +, Benny Alexandar wrote:

> How to design a control system such that a digital baseband frame of duration
> 'T' ms is mapped to audio and adjust the drift ?

A classic asynchronous resampling problem. Look at something like
SMPTE drop-frame resampling, using div/modulo to calculate the
number of frames of m samples over which to interpolate to get
some new number of n samples.

The real problem is that you need to know the difference/drift
between the clocks. Is there some feature in your signal that helps
with this?
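One hedged sketch of that div/modulo bookkeeping (the rates and block size here are invented for illustration): carry the fractional remainder with `divmod` so that, over time, no samples are gained or lost even though each block maps to an uneven number of output samples.

```python
def block_sizes(src_rate, dst_rate, block, n_blocks):
    """Yield output-block lengths for fixed input blocks, exact on average.

    Maps blocks of `block` samples at a (drifting) source clock onto the
    local clock, carrying the remainder so the mapping is exact long-term.
    """
    acc = 0
    for _ in range(n_blocks):
        acc += block * dst_rate
        n, acc = divmod(acc, src_rate)   # whole output samples + remainder
        yield n

# 1 Hz of drift: a 48001 Hz sender played out on a 48000 Hz local clock.
sizes = list(block_sizes(48001, 48000, 512, 48001))
assert sum(sizes) == 512 * 48000      # no samples gained or lost overall
assert set(sizes) <= {511, 512}       # each block is off by at most one
```

In practice the interpolation between those m-in/n-out frames still has to be done with a proper fractional-delay resampler; this only shows the sample accounting.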



> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp




[music-dsp] Acoustics lecturer position. Southampton UK.

2017-11-23 Thread Andy Farnell
Acoustics and audio engineering lecturer (senior) F/T Southampton 
Solent University 

Apologies for x-posting

A colleague has mentioned that they are looking for someone to teach
on the audio engineering and acoustics programme at Southampton
Solent. Ideally someone with great physics, a sound knowledge of
studio and interior acoustics, and either experience in industry with
a masters level academic record, or a relevant PhD. Familiarity with
multi-channel audio, architectural acoustics, audio systems design, or
consulting in treatment would be useful. The role is mainly teaching
and some research and consultancy. 

Southampton is on the south coast of England and is a great place to 
work and live and there is a relocation incentive.

Please look for the posting in the next few days on :

https://www.solent.ac.uk

best to all,
Andy Farnell



Re: [music-dsp] intensity of Doppler effect

2017-10-17 Thread Andy Farnell
This is a quite interesting, and less than obvious, question
that I keep thinking about.

My thoughts turned to the phenomenon of the sonic boom. Where the velocity
of a source equals c, the propagation velocity of the medium, we have an
extreme limit of Doppler, in which all the acoustic energy is condensed
into a single high-amplitude solitary front. Only at the point where the
listener is co-located with the source, or passes through the trailing
Mach cone, is a single high-amplitude impulse heard.

Can Doppler be interpreted in terms of Mach?  They seem to tie together in 
relativistic wave physics.  If so, 
given this extreme interpretation of Doppler, does amplitude change for other 
cases, where source velocity is 
just less than c?  And by extension, values much less than c?

But for sound we need to remove extraneous factors. Trivially, something
sounds louder if it's moving towards you because it is "getting closer".
We need to remove geometric loss by assuming a source at a very large
(infinite) distance, such that the waves at the receiver are still Huygens
constructions but are effectively planar (analogous to sunlight through a
pinhole). Then we need to remove all real gas laws (adiabatic, viscous
losses) for the medium, making our acoustic waves perfect (Riemann) plane
waves. Now, if an object radiating 10 Watts of energy stays in the same
place and is observed at some distance D for 10 seconds, the receiver gets
all of that energy from the source incident upon it, 10 Watts of acoustic
power, and during that time it absorbs 100 Joules of energy. If we do not
have to assume the rest of the energy radiates away in other directions,
all of the energy emitted from our perfect "sound laser" must be received
by the observer.

Then we can use the pitcher and catcher analogy often used in other relativity 
thought experiments.

Also, there must be a certain energetic "channel capacity" (pitched balls
already in the air). If D > 340 m and the stationary source begins emitting
energy for one second, then ceases, the medium of the channel will contain
a one-second (10 Joule) burst of energy.

If the channel is "already full", and during the next moments either source or 
receiver move toward one 
another, then the receiver must absorb what is in the channel plus energy 
emitted by the source during that 
time (catcher has to collect more pitched balls per second).

So far we are taking the extra energy to be accounted for by an increase in
frequency, as in Planck's equations. (Evan already mentioned particle
velocity and the fact that particle k.e. = 1/2 mv^2, but care is needed
when relating this to "intensity", as in acoustics we may mistakenly
involve perceptual (psychoacoustic) factors.) What we are really interested
in is whether a measurable increase in amplitude occurs in addition to the
expected frequency change.

As I was researching, my reading landed me here:

en.wikipedia.org/wiki/Relativistic_Doppler_effect

Where I found this passage, which uses terms unfamiliar in acoustics, but I am 
sure a capable mathematician can 
connect the dots...

"Doppler effect on intensity

The Doppler effect (with arbitrary direction) also modifies the perceived
source intensity: this can be expressed concisely by the fact that source
strength divided by the cube of the frequency is a Lorentz invariant[5]
(here, "source strength" refers to spectral intensity in frequency, i.e.,
power per unit solid angle and per unit frequency, expressed in watts per
steradian per hertz; for spectral intensity in wavelength, the cube should
be replaced by a fifth power). This implies that the total radiant
intensity (summing over all frequencies) is multiplied by the fourth power
of the Doppler factor for frequency."

So, I guess the question is "are longitudinal acoustic pressure waves subject 
to relativistic Doppler 
effects?". I believe the answer is yes, and the OP usefully points out that 
most classical sound textbooks omit 
the amplitude aspect of the effect.
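As a numeric illustration only (treating the quoted relativistic passage as an analogy, not settled acoustics): the classical Doppler factor for a source approaching a stationary listener, and the fourth-power intensity scaling that passage would imply if carried over. c = 340 m/s and the velocities are illustrative.

```python
C = 340.0   # nominal speed of sound in air, m/s

def doppler_factor(v_source):
    """Classical frequency ratio heard from a source approaching at v_source."""
    return C / (C - v_source)

# Frequency ratio, and the hypothesised fourth-power intensity scaling.
for v in (3.4, 34.0, 170.0):          # 1%, 10% and 50% of C
    d = doppler_factor(v)
    print(f"v = {v:6.1f} m/s   freq x {d:.3f}   intensity x {d ** 4:.3f}")
```

At everyday source speeds (a few percent of c) the factor is close to 1, which is presumably why textbooks feel safe omitting the amplitude side; at half the speed of sound the factor is 2, and the fourth power would already be 16.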

It has some important implications for vehicular noise models, and is
likely already a component of more sophisticated models.

best,
Andy



On Thu, Oct 12, 2017 at 11:13:55AM -0400, Ethan Fenn wrote:
> >
> > Since only the speed difference between sender and receiver does matter:
> > No.
> > This article is pretty thorough on this topic:
> > https://en.wikipedia.org/wiki/Doppler_effect
> 
> 
> Well, according to the article the pitch change will not be identical in
> those two scenarios. But it will be approximately the same for speeds well
> below the speed of sound.
> 
> -Ethan
> 
> 
> 
> On Thu, Oct 12, 2017 at 11:02 AM, STEFFAN DIEDRICHSEN 
> wrote:
> 
> >
> > On 12.10.2017|KW41, at 16:31, Phil Burk  wrote:
> >
> > Do the two cases sound different?
> >
> >
> > Since only the speed difference between sender and receiver does matter:
> > No.
> >
> > This article is pretty thorough on this topic:
> > https

Re: [music-dsp] Reverb, magic numbers and random generators

2017-10-16 Thread Andy Farnell

A bit late to the thread, but if you look around the Pd archives you will
find a patch called Fiboverb that I made around 2006/7. As you surmise,
the relative co-primality of the fib(n) sequence has great properties
for diffuse reverbs.
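A small sketch of both halves of the idea, assuming nothing about Fiboverb's internals: consecutive Fibonacci numbers are always coprime, so they make delay lengths whose periods share no common factor; and the quoted message below describes the additive lagged generator x[n] = x[n-j] + x[n-k] (mod m). The lags (24, 55) here are a textbook choice, not gm's.

```python
from math import gcd

def lagged_fib(seed, j=24, k=55, m=2 ** 32, n=8):
    """Additive lagged Fibonacci generator: x[n] = x[n-j] + x[n-k] (mod m)."""
    state = list(seed)                 # needs at least k seed values
    assert len(state) >= k
    out = []
    for _ in range(n):
        x = (state[-j] + state[-k]) % m
        state.append(x)
        out.append(x)
    return out

# Fibonacci numbers as delay lengths: consecutive pairs are always coprime,
# so adjacent delay periods never share a common divisor.
fib = [1, 1]
while fib[-1] < 5000:
    fib.append(fib[-1] + fib[-2])
delays = fib[-5:]                      # the largest few generated
assert all(gcd(a, b) == 1 for a, b in zip(delays, delays[1:]))
```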

Just reading about the proposed Go spacing idea, seems very interesting.

best
Andy

On Wed, Sep 27, 2017 at 05:00:13PM +0200, gm wrote:
> 
> I have this idée fixe that a reverb bears some resemblance to some
> types of random number generators, especially the lagged Fibonacci
> generator.
> 
> Consider the simplified model reverb block
> 
> 
>  +-> [AP Diffusor AP1] -> [AP Diffusor Ap2] -> [Delay D] ->
>  |  |
>  -<--
> 
> 
> and the (lagged) fibonacci generator
> 
> xn = xn-j + xn-k (mod m)
> 
> The delay and feedback are similar to a modulus operation (wrapping),
> in that the signal is "folded", and this creates similar kinds of
> patterns if you regard the delay length as a period.
> (Convolution is called "folding" in German, btw.)
> 
> For instance, if the delay length of the allpass diffusor is set to
> 0.6 times the main delay length, you will get an impulse pattern in the
> period that is related to the pattern of the operation
> xn = xn-1 + 0.6 (mod 1), if you graph that on a tile.
> 
> And the quest in reverb design is to find relationships for the AP delays
> that result in smooth, even and quasirandom impulse responses.
> A good test is the autocorrelation function, which should ideally be
> an impulse on a uniform noise floor.
> 
> So my idea was to relate the delay time D to m and set the AP delays
> to D*(Number/m),
> where Number is one of the suggested numbers j and k for the Fibonacci
> generator.
> 
> The results however were mixed, and I can't say they were better than
> setting the times to the arbitrary values I had been using before
> (which were based on some crude assumptions about distributing the
> initial impulse as fast as possible, fine-tuning by ear, and
> rational coprime approximations for voodoo).
> The results were not too bad either, so they are different from random,
> because the numbers Number/m have certain values, and their values are
> actually somewhat similar to the values I was using.
> 
> Any ideas on that?
> Does any of this make sense?
> Suggestions?
> Improvements?
> How do you determine your diffusion delay times?
> What would be ideal AP delay time ratios for the simplified model
> reverb above?
> 
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] basic in-store speaker evaluation

2017-07-04 Thread Andy Farnell
On Tue, Jul 04, 2017 at 01:17:49PM +0300, Sampo Syreeni wrote:
> Is there an extant software out there which lets me do comparisons
> between various speaker sets? 

It's a good idea.

> Something in the vein of "just put in
> a test DVD-A, and let your Android app run"?

DVD? Maybe in 2000. Not knowing the playback capabilities at the store,
you'd be better off putting the test files online, in a Spotify or
SoundCloud channel.

> magnitude, and a bunch of synch signals, in some reasonable
> combination. So that you could at least in theory do synchronous
> detection of whatever you hear from your test DVD-A, simply by
> listening to it via your phone.

The room and listening position will be variables that need factoring
out client-side, so quite a bit of DSP on the mobile.

> Obviously you couldn't help your phone's pickup being uneven, that

You need to know the phone; the app must do a client detect and
look up a database, because there are large variances between
devices.

cheers,
Andy




[music-dsp] AES UK 2017 UP Event

2017-02-09 Thread Andy Farnell
Apologies for cross-postings A short notice about a UK AES event I am
helping out with that may be of interest to our list members.

AES Up Your Output 2017 is a two day event aimed at students of
acoustics, audio engineering, signal processing and music
technology. This year it will be held at Southampton Solent University
UK on Sat 18th and Sun 19th March, and will include a great range of
industry keynotes and practical workshops, on subjects from audio
mastering to spatial audio in live broadcast. There is a student
poster competition with prizes of plug-in packs from SSL, iZotope and
Acoustica and books from Focal Press.

Find out more at http://upyouroutput.com

all best wishes,
Andy Farnell




Re: [music-dsp] help needed to set the beginning of each generated sound in a .wav file

2016-11-01 Thread Andy Farnell
Hey Pablo,

Your score reader is whack! input is declared as a two-dimensional array
and you only index one of its dimensions. I think you meant to treat it
as an array of lines. Fix up the code under /* get values from score

Also your score.txt is inconsistent: there are extra blank lines, and
hexdump shows it's got mixed \t and \n; your score parser won't enjoy that.

cheers
Andy
 


On Tue, Nov 01, 2016 at 01:31:27PM +, Pablo Frank wrote:
> The .cpp below should read from the score.txt attached, the beginning, 
> duration, frequency and volume of 5 waves, and build a wave file.
> In the generated file each wave is built *after* the other in the order of 
> the score, instead of beginning at the time set in the 1st column of the 
> score.
> The loop containing the libsndfile function "sf_writef_float(psf, buffer,
> 64);" is evidently wrong, and I don't succeed in building the correct loop.
> The .cpp runs with libsndfile on Windows.
> If someone can correct the code it would be a blessing (I can pay for it
> through PayPal). The .cpp and the .txt are here below and also attached.
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <math.h>
> #include <sndfile.h>
> 
> ///LIBSNDFILE//
> 
> SNDFILE *soundout_open(char* name, int chans=1, float sr=44100){
>SF_INFO info;
>   info.samplerate = (int) sr;
>   info.channels = chans;
>   info.format = SF_FORMAT_WAV | SF_FORMAT_PCM_16;
>return sf_open(name, SFM_WRITE, &info);
> }
> void soundout_close(SNDFILE *psf_out){ sf_close(psf_out); }
> 
> 
> table for the oscillator
> 
> float* sinus_table(int length=1024, float phase=0.f){
>float *table = new float[length+2];
>phase *= (float)3.141592*2;
>for(int n=0; n < length+2; n++)
>table[n] = (float) cos(phase+n*2*3.141592/length);
> 
>return table;
> 
> }
> 
> /oscillator/
> 
> float osc(float *output, float amp, float freq,float *table, float *index,int 
> length=1024, int vecsize=64, float sr=44100)
> {
> 
>// increment
>float incr = freq*length/sr;
> 
>// processing loop
>for(int i=0; i < vecsize; i++){
> 
> // truncated lookup
> output[i] = amp*table[(int)(*index)];
> 
> *index += incr;
> while(*index >= length) *index -= length;
> while(*index < 0) *index += length;
> 
>}
> 
> 
>return *output;
> 
> }
> 
> 
> int main(int argc, char** argv) {
> 
> // Score variables
> FILE *fp;
> float duration[1000],frequency[1000]; //duration and frequency of notes.
> int charcount = 0,r,i,j,k,l,m; //quantity of numbers in file
> char input[60][60];
> float instart[1000], inend[1000], inamp[1000],infreq[1000];
> float amp, freq, *wave, cs, ndx=0, ndx2=0;
> 
> // Wave file variables
> SNDFILE *psf;
> float *buffer;
> 
> if(argc == 3){
> 
> //Open and read score
> 
> if ((fp = fopen(argv[1], "r")) != NULL){
> 
> while(!feof(fp)){
> fscanf(fp, "%s", input[charcount]);
> charcount++;
> }
> }
> 
> else {
> printf("Cannot read file!\n");
> }
> 
> 
> // allocate buffer & table memory
> buffer = new float[64];
> wave =   sinus_table();
> 
> // now we open the file
> if(!(psf = soundout_open(argv[2]))){
> printf("error opening output file\n");
> exit(-1);
> }
> 
> /* get values from score*/
> for(i=4,r=0,k=1,l=2,m=3;i<6;i++){//loop for the full score///charcount/4 
> + 4
>fscanf(fp, "%s", input[k-1]);
>instart[k-1] = atof(input[k-1])*689;//44100/64 = 689, get values for 
> starting time*/
>fscanf(fp, "%s", input[k]);
>inend[k] = atof(input[k])*689;//44100/64 = 689,*get values for end of 
> sound*/
>fscanf(fp, "%s", input[l]);
>inamp[l] = atof(input[l]); /*get values for amplitud*/
>fscanf(fp, "%s", input[m]);
>infreq[m] = atof(input[m]);/*get values for frequency*/
> 
> //loop for each note
> for(int i=  0; i < instart[k-1]; i++){//silence until the start time
> for(int n=0; n < 64; n++) {
> buffer[n]=0;
> }
> sf_writef_float(psf, buffer, 64);
> }
> for(int i=  0; i < inend[k]; i++){//duration of the sound
>  osc(buffer,inamp[l],infreq[m],wave,&ndx,1024,64);
>  sf_writef_float(psf, buffer, 64);
> }
> 
> j+=4,k+=4,l+=4,m+=4;
> 
> 
> }
> 
> // close file & free memory
> soundout_close(psf);
> delete[] buffer;
> delete[] wave;
> 
> return 0;
> }
> else {
> printf("usage: prog score.txt wave.wav \n");
> return 1;
> }
> }
> 
> 
> 
> beginning  duration  amplitude  frequency
> 
> 21.3440
> 
> .53.5330
> 25.7220
> 
> 32.9210
> 
> 53.51000
> 
> 7

Re: [music-dsp] Low Noise Power Supply for audio applications resources.

2016-09-15 Thread Andy Farnell
Hi Max

The usual steps are to use a three-terminal regulator
of the 78 series, smoothing capacitors, and perhaps
some inductive chokes if you are using USB or unregulated
supplies with high-frequency noise on them.

Normally you require the input supply to be at least a
couple of volts above the regulated potential you need,
and components must be chosen with power/current
requirements in mind. There are many amazing new
single-package power-conditioning ICs around these days.

You really should take this to an analogue design list,
as music-dsp isn't really the best place, and I am sure
some of the other members can advise you on the best
group to post in. Perhaps start by hanging out in ##hardware
on freenode or similar and asking where you can chat about
PSU design.

cheers
Andy

On Thu, Sep 15, 2016 at 11:43:52AM +, Max K wrote:
> Hi everyone,
> 
> 
> I'm still working on my digital guitar pedal and now I have to design a power 
> filtering stage to filter the noisy output of a 5V AC/DC power supply. I need 
> clean power for my analog components (pre-amps and DAC/ADCs) as well as the 
> ARM Cortex A9 based 1GHz digital circuitry. I want to "smoothen" the DC 
> current and get rid of all AC components (apparently the correct term for 
> this is "bypassing") and I need to decouple the digital circuit from the 
> analog circuit, so they don't interfere with each other (decoupling). In 
> addition, I have been told that a good ground is also important.
> 
> 
> I have been googling this topic and found some papers (e.g. this one 
> http://www.designers-guide.org/design/bypassing.pdf) but I'm also interested 
> if you - as fellow "audiophiles" - can point me to any literature covering 
> this topic or have some first hand advice. I tried searching the mailing list 
> archives (which is a pain really) and unfortunately the archives from before 
> August 2015 seem to be gone (404).
> 
> 
> Cheers,
> 
> Max
> 
> 
> 

> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp




Re: [music-dsp] Wavelet Matching

2016-09-05 Thread Andy Farnell
Wavelets are not necessarily a part of this algorithm. The key components
are understanding:

 hashing
 sparse arrays
 red-black tree search

As a starting lead you could begin searching on:
"MIR Plumbley Abdallah Fujihara Klapuri"
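A toy, hash-table flavoured sketch of the matching idea those components serve (the "peaks" are hand-made tuples, not real audio analysis, and the landmark scheme is generic, not any product's): hash pairs of spectral landmarks into a sparse index, then vote on time offsets to locate a query excerpt.

```python
from collections import defaultdict

def index_landmarks(peaks):
    """peaks: list of (time, feature). Hash feature pairs into a sparse index."""
    table = defaultdict(list)
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 4]:        # pair with a few successors
            table[(f1, f2, t2 - t1)].append(t1)    # hash -> anchor times
    return table

def best_offset(table, query):
    """Vote on (database time - query time); the modal offset is the match."""
    votes = defaultdict(int)
    for i, (t1, f1) in enumerate(query):
        for t2, f2 in query[i + 1 : i + 4]:
            for anchor in table.get((f1, f2, t2 - t1), ()):
                votes[anchor - t1] += 1
    return max(votes, key=votes.get) if votes else None

# Fake "song": 40 peaks with distinct features; query = excerpt from t = 10.
song = [(t, t * t) for t in range(40)]
query = [(t - 10, f) for t, f in song[10:20]]
assert best_offset(index_landmarks(song), query) == 10
```

A real system replaces the dict with disk-backed sparse storage and a balanced-tree or hash index, and derives the features from spectrogram peaks; the offset-voting step is what makes the match robust to the excerpt starting anywhere.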

cheers,
Andy

On Mon, Sep 05, 2016 at 02:18:50AM +, Michael Feldman wrote:
> Hello All Music DSP members,
> I am SQL developer and I am interested to learn about the music algorithms.  
> Thank you for letting me join the email list and I discovered the archives so 
> I will be sifting those!
> I am researching models and I was interested to know if there were previous 
> open source algorithms similar to Shazam that could be brought to the public?
> I would like to find/create open source version of audio search within a 
> population audio set.  The goal is to have a find function. It will be able 
> to submit a population MP3 file. And small MP3 sample file that is short. The 
> delivery system will find times in population that the original sample 
> occurred.
> So for example if you submit "I have a dream" speech from start to end as the 
> population. And then submit one of the times he said word "dream" so the 
> output will show the time that exact pronunciation was sampled and other 
> times that sound (word) was used as slightly deviated when it was repeated 
> and calculate deviations.
> 
> Thank You,
> Michael

> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp




Re: [music-dsp] Oversized FFT windows --- why so little literature?

2016-08-26 Thread Andy Farnell

Hi Evan,

On Fri, Aug 26, 2016 at 03:17:25PM -0500, Evan Balster wrote:
> In the days since my first post, I've had a background train of thought
> going on about this...
> 
> So, the value of a DFT bin when using a window of size N is proportional to:
> [image: Inline image 2]
> Where omega is the complex frequency of the bin and K is an arbitrary time
> offset.

It's not strict list etiquette here, but it would be helpful
to a whole lot of people, and to the posterity of archived posts,
if you could use text. Either TeX or freehand-style ASCII equations
will do fine. Otherwise a whole lot gets lost in translation,
as you can see above.
many thanks
Andy Farnell



Re: [music-dsp] Lakh MIDI Dataset v0.1 released

2016-08-18 Thread Andy Farnell
Thanks for sharing this; it's an important resource for MIR
research and app development. Since the demise of OLGA etc.,
and the corporate purges on MIDI data, it's been hard to get
good sets for experiments and development. Searching online
for "MIDI files" has become like entering a house of horrors.
Also nice to see the US NSF is still funding worthy projects.
a.

On Wed, Aug 17, 2016 at 08:50:27AM -0700, Colin Raffel wrote:
> Hi all,
> 
> I'm pleased to announce the release of the Lakh MIDI 
> Dataset v0.1 , a 
> collection of 176,581 unique MIDI files, 45,129 of which 



Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync ?

2016-08-13 Thread Andy Farnell

I do appreciate both perspectives here. On the one hand,
it is plainly overkill to talk about sub-microsecond
accuracy in music control systems. Yet on the other hand,
it's genuinely the case that many very modern, highly specified
PC sequencers still don't sound good, because their complexity
and multi-use function produces noticeable timing jitter.
This has long plagued Linux, but to be honest I hear equal
criticisms of Mac and Windows software from those types of
dance-music purists who still keep an Atari ST, Amiga or
Mac Classic for their MIDI sequencing. Maybe these guys are
crazy, but I kind of feel it's correct to respect the judgements
of people who make records day in, day out, and are sensitive to
such subtleties. Having a single-threaded read loop, interrupt
driven and attached directly to the UART, has got to be hard to
beat. It's basically a single-function computer.
A dance-music producer I know still uses an ST in 2016,
and says they will prise it out of his cold dead hands;
he also thinks it sounds pretty loose compared to a Roland
sync-24 setup with a 909 etc.

Andy


On Sat, Aug 13, 2016 at 10:06:18PM +0300, Sampo Syreeni wrote:
> On 2016-08-13, Theo Verelst wrote:
> 
> >For a class of applications where at least you would want sample
> >accurate control messages, [...]
> 
> That's not about music-dsp, but dsp simple. There's a reason why all
> synthesis architectures out there make distinction between
> modulation and audio rate events. The first are supposed to be
> humanly understandable and deliverable even in real time
> environments. The latter are part of the inner workings of your
> synthesis algorithm.
> 
> >[...] buffering for efficient pipeline and cache use dictates some
> >form of delay control, which IMHO should be such that from Midi
> >event to note coming out of the DAC, there is always a accurately
> >fixed delay.
> 
> Not that many of us perfectionistas wouldn't have been thinking
> about this problem from the start...
> 
> >So I though it might be a good idea to time stamp Midi messages
> >with an Fpga (I use a Xilinx Zynq), and built in some form of
> >timing scheduler in the FPGA to help the kernel.
> 
> That's plain overkill. All that you need is a well-synchronized
> realtime clock and a fast consensus algorithm. You can get the first
> over any extant Ethernet technology in controlled congestion state
> by using PTP ( https://en.wikipedia.org/wiki/Precision_Time_Protocol
> ). By rounding your events to the nearest microsecond or so,
> including time stamps, delaying your events a bit, and going with
> something like
> http://www.cse.buffalo.edu/~demirbas/publications/augmentedTime.pdf
> , you can approach perfection in latency and in fact attain it in
> local synchronization of the end result, quite without resorting to
> expensive hardware. Relatively cheap microcontrollers could keep up
> with that sort of thing any day of the week, without the total cost
> per node creeping past half that of a Pi.
> 
> >I'm not talking about a hardware Linux "select()" call as kernel
> >accelerator or single sample delay FPGA DSP, or upgrading to
> >dozens of FPGA pins at a hundred times Midi clock rate doing clock
> >edge-accurate timing, but an interesting beginning point for the
> >next generation of accurate DSP software and interfaces.
> 
> "Accurate DSP software and interfaces." What you're talking about is
> form beyond function. If you want to do some super-sensitive
> remotely gated high energy shit in the CERN vein, go ahead. This is
> what you need. But that doesn't have much to do with MIDI signals or
> audio, anymore. Certainly not music.
> -- 
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
> 
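The timestamp-and-delay scheme Sampo describes can be sketched in a few lines. This is a minimal illustration only: the fixed 5 ms playout delay, the event names, and the `EventScheduler` class are my own assumptions, and real code would read `now_us` from a PTP-disciplined clock.

```python
import heapq

# Sketch of timestamp-and-delay playout: stamp each event on arrival
# with a shared clock rounded to the nearest microsecond, then release
# it a fixed delay later, so delivery jitter never reaches the audio
# thread. PLAYOUT_DELAY_US is an assumed value, not a recommendation.
PLAYOUT_DELAY_US = 5000

class EventScheduler:
    def __init__(self):
        self._queue = []  # min-heap keyed on playout time

    def receive(self, event, now_us):
        stamp = round(now_us)  # quantise to the nearest microsecond
        heapq.heappush(self._queue, (stamp + PLAYOUT_DELAY_US, event))

    def due(self, now_us):
        """Pop every event whose playout time has arrived, in stamp order."""
        out = []
        while self._queue and self._queue[0][0] <= now_us:
            out.append(heapq.heappop(self._queue)[1])
        return out

sched = EventScheduler()
sched.receive("note_on", now_us=1000.4)   # jittery arrival times...
sched.receive("note_off", now_us=1002.6)
print(sched.due(now_us=7000))             # ...deterministic playout order
```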



Re: [music-dsp] Supervised DSP architectures (vs. push/pull)

2016-08-02 Thread Andy Farnell
Dreaming about novel real-time DSP architectures... bottom up? 

I find this discussion and general problem of DSP architectures
suited to parallel computation exciting.
It's something I've pondered while considering a problem in
the implementation layer of procedural audio, which is 'level
of audio detail', simply the sound-design principle that not
every sonic detail needs computing perfectly all the time, that
good enough models can be 'computationally elastic'. 

Indeed in games, as Ethan F indicates in the above post,
material is often wide rather than deep, with lots of 
contributory signals, and some papers (search SIGGRAPH) have 
been written on perceptual prioritisation in games.

Of course a good solution is also one that allows dynamic 
reconfiguration of DSP graphs, but also one that seems to need
all the trappings of operating system principles, prioritisation,
critical path/dependency solving, cache prediction, scheduling,
cost estimation, a-priori metrics, etc. 

Although I kind of abandoned that line of thought, in honesty
due to the lazy thought that raw CPU capability would overtake
my ambitions, there are indeed certain sound models that are 
really rather hard to express within traditional DSP frameworks.
An example is fragmentation, as a recursive (forking) 'particle 
system' of ever smaller pieces, each with decreasing level of 
detail. I imagine this elegantly expressed in LISP and easily 
executed on multiple processors. And I can see other 
applications, perhaps for new audio effects that are adaptive to 
the changing complexity of incoming material.

But the fear I felt when thinking about "supervision" is twofold:

1) We need reliable knowledge about DSP processes
   i) Order of growth in time and space
  ii) Anomalies, discontinuities, instabilities
 iii) Significance (perhaps perceptual model)
 
.. and that knowledge might not be so reliable and consistent.
As Ross said, some are not easily computable, and many of these 
issues in the Jack paper (Letz, Fober, Orlarey, Davis) just get
worse the more cores (and ICC paths) you add.
Again, this gets worse when the material is interactive, as in games,
and where you may want to adapt the level of audio detail 
on the fly.

2) That deep, synchronous audio systems are always 'brittle', one 
thing fails and everything fails, and at some point complexity and 
explicit rules at the supervisor level just get too much to create 
effects and instruments that are certain not to glitch during 
performance. 

It's like 'real-time' and massively concurrent don't mix well.

So I got wondering if _super_ vision is the wrong way of
looking at this for audio. Please humour a fool for a moment.
Instead of thinking like 'kernel', what can we learn from Hurd
and Minix? What can we learn from networking and massively 
concurrent asynchronous systems that have failure built in as 
assumptions?

1) DSP nodes that can advertise capability
2) Processes that can solicit work with constraints
3) Opportunistic routing through available resources
4) Time to live for low priority contributory signals
5) Soft limits, cybernetics (correction and negative feedback)

So, if you were to think like 1960's DARPA and say " I want to 
construct a DSP processor based on nodes where many could be 
taken out by 'enemy action', and still get a 'good enough' signal 
throughput and latency" - what would that look like? 

Approaching this way, what you get probably looks horribly inefficient
for small systems where the inter-process bureaucracy dominates,
but really very scalable too, and doing better and better as the 
complexity increases rather than worse.

cheers,
Andy

 

On Mon, Aug 01, 2016 at 12:16:38PM -0500, Evan Balster wrote:
> Here's my current thinking.  Based on my current and foreseeable future
> use-cases, I see just a few conditions that would play into automatic
> prioritization:
> 
>- (A) Does the DSP depend on a real-time input?
>- (B) Does the DSP factor into a real-time output?
>- (C) Does the DSP produce side-effects?  (EG. observers, sends to
>application thread)
> 
> Any chain of effects with exactly one input and one output could be grouped
> into a single task with the same priority.  Junction points whose sole
> input or sole output is such a chain could also be part of it.
> 
> This would yield a selection of DSP jobs which would be, by default,
> prioritized thus:
> 
>1. A+B+C
>2. A+B
>3. A+C
>4. B+C
>5. B
>6. C
> 
> Any DSPs which do not factor into real-time output or side-effects could
> potentially be skipped (though it's worth considering that DSPs will
> usually have state which we may want updating).
> 
> It is possible that certain use-cases may favor quick completion of
> real-time processing over latency of observer data.  In that case, the
> following scheme could be used instead:
> 
>1. A+B (and A+B+C)
>2. B+C
>3. B
>4. A+C
>5. C
> 
> (Where steps 4 and 5 may occur after the ca
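Evan's default prioritisation is easy to prototype. A minimal sketch, where the flag letters A/B/C follow his list and the job names and flag strings are invented for illustration:

```python
# Classify each DSP job by the three flags (A: depends on real-time
# input, B: feeds real-time output, C: produces side-effects) and rank
# it; jobs matching no class are candidates for skipping.
PRIORITY_ORDER = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C"},
    {"B", "C"},
    {"B"},
    {"C"},
]

def priority(job_flags):
    """Rank 0 is highest; None means no real-time output or side-effects."""
    flags = set(job_flags)
    for rank, combo in enumerate(PRIORITY_ORDER):
        if flags == combo:
            return rank
    return None

jobs = {"reverb": "AB", "meter_tap": "BC", "idle_lfo": ""}
ranked = sorted((name for name in jobs if priority(jobs[name]) is not None),
                key=lambda name: priority(jobs[name]))
print(ranked)  # ['reverb', 'meter_tap']
```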

Re: [music-dsp] idealized flat impact like sound

2016-07-28 Thread Andy Farnell
Following the comments regarding the exponentially 
modulated noise segment:

My experience is that all such actual segments will be
spectrally coloured, because of course they contain
a truncated set of random values.

The only theoretically "flat" exciter is the Dirac impulse.

But because it contains so little energy it's not that
practical for stimulating waveguides.

Better to construct a band-limited pulse from a finite 
set of sinusoids right up to the Nyquist.
A problem is this will have a finite rise time.

A practical compromise I found is to use the exponential
decay segment, as it is, without a payload, and make it
jolly short. I guess as T -> 0 the behaviour tends towards
the Dirac pulse, but where T is just a few tens of samples
it works as a very clean, reliable exciter for waveguides.
(Indeed this is what you have in a lot of analogue percussion
synthesis)

Perhaps someone can show you what the spectrum is as
a function of T; it's not "flat" but it's a good trade-off
between a theoretically perfect impulse and a practical
signal.
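A quick numerical sketch of the two exciters above; the lengths and the decay time T are illustrative values, not recommendations:

```python
import numpy as np

# (a) Band-limited pulse: equal-amplitude, zero-phase cosines up to
# just below Nyquist -- note it has a finite rise time.
# (b) Short exponential decay segment with T a few tens of samples.
n = 256
t = np.arange(n)

harmonics = np.arange(1, n // 2)  # all bins below Nyquist
blp = np.cos(2 * np.pi * np.outer(harmonics, t) / n).sum(axis=0)
blp /= len(harmonics)             # unit peak at t = 0

T = 32
exc = np.exp(-t / T)

# The exponential's magnitude spectrum as a function of T is a gentle
# low-pass tilt: not flat, but smooth and free of nulls.
mag = np.abs(np.fft.rfft(exc))
print(mag[0] / mag[-1])  # DC-to-Nyquist tilt, roughly 2*T for larger T
```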

cheers,
Andy

On Wed, Jul 27, 2016 at 07:00:02PM +0200, gm wrote:
> 
> Hi
> 
> I want to create a signal thats similar to a reverberant knocking or
> impact sound,
> basically decaying white noise, but with a more compact onset
> similar to a minimum phase signal
> and spectrally completely flat.
> 
> I am aware thats a contradiction.
> 
> Both, minimum phase impulse and fading random phase white noise are
> unsatisfactory.
> The minimum phase impulse does not sound reverberant.
> 
> The random phase noise isn't strictly flat anymore when you window
> it with an exponentially decaying envelope
> and also lacks a knocking impression.
> 
> I am also aware that a knocking impression comes from formants and
> pronounced modes
> related to shapes and material and not flat, which is another
> contradiction..
> 
> I am not sure what the signal or phase alignment is I am looking for.
> 
> Also it's not a chirp cause a chirp sounds like a chirp.
> 
> What happens in a knock/impact besides pronounced modes or formants?
> Somehow the phases are aligned it seems, similar to minimum phase
> but then its
> also random and reverberant.
> 
> 
> Any ideas?
> 
> 
> 

Re: [music-dsp] confirm 29f9d07aca460a7584879c1831b9e3298c4

2016-07-28 Thread Andy Farnell
This kind of unsubscribe abuse is unfortunately difficult
to handle. Password protected unsubscribe fails because
genuine unsubscribers almost always forget their password.
I started to look at the GNU Mailman documentation.
There are some options to discuss with both Doug and Robert. 
Meanwhile the less on-list noise and speculation the better, 
as someone is likely enjoying the attention.

cheers
Andy

On Thu, Jul 28, 2016 at 09:40:22AM +0100, gwenhwyfaer wrote:
> Perhaps it would be as well for the unsubscribe function on the list
> management page to be parked behind a password, rather than accessible
> without? At the moment, anyone can do this kind of drive-by
> unsubscription of anyone whose email address they have, without even
> going to the trouble of spoofing that email address - just type it
> into the webpage and click twice. Hiding it behind a password would at
> least make it a little bit more difficult.
> 
> -- gwenhwyfaer
> 
> 
> 
> On 28/07/2016, Stefan Stenzel  wrote:
> > Robert is the gist of this list, he can rant, spam and complain as he
> > pleases, his mails are either very informative or funny, mostly both.
> >
> > You, Bruno, have not contributed anything besides your recent oeuvre which
> > is neither related to music nor suitable to sustain the considerate way we
> > use to communicate here.
> >
> > Stefan
> >
> >
> >> On 28 Jul 2016, at 1:39 , Bruno Afonso  wrote:
> >>
> >> Could you please stop spamming the list? Much appreciated
> >>
> >> On Wed, Jul 27, 2016 at 16:09 robert bristow-johnson
> >>  wrote:
> >> sorry, i just ain't getting the hint.
> >>
> >> i'm sorta dense that way.
> >>
> >>
> >>
> >>  Original Message
> >> 
> >> Subject: confirm 29f9d07aca460a7584879c1831b9e3298c4
> >> From: music-dsp-requ...@music.columbia.edu
> >> Date: Wed, July 27, 2016 10:37 am
> >> To: r...@audioimagination.com
> >> --
> >>
> >> > Mailing list removal confirmation notice for mailing list music-dsp
> >> >
> >> > We have received a request for the removal of your email address,
> >> > "r...@audioimagination.com" from the music-dsp@music.columbia.edu
> >> > mailing list. To confirm that you want to be removed from this
> >> > mailing list, simply reply to this message, keeping the Subject:
> >> > header intact. Or visit this web page:
> >> >
> >> > https://lists.columbia.edu/mailman/confirm/music-dsp/29f9d07aca460a7584879c1831b9e3298c4
> >> >
> >> >
> >> > Or include the following line -- and only the following line -- in a
> >> > message to music-dsp-requ...@music.columbia.edu:
> >> >
> >> > confirm 29f9d07aca460a7584879c1831b9e3298c4
> >> >
> >> > Note that simply sending a `reply' to this message should work from
> >> > most mail readers, since that usually leaves the Subject: line in the
> >> > right form (additional "Re:" text in the Subject: is okay).
> >> >
> >> > If you do not wish to be removed from this list, please simply
> >> > disregard this message. If you think you are being maliciously
> >> > removed from the list, or have any other questions, send them to
> >> > music-dsp-ow...@music.columbia.edu.
> >> >
> >> >
> >>
> >>
> >> --
> >>
> >>
> >>
> >> r b-j  r...@audioimagination.com
> >>
> >>
> >>
> >> "Imagination is more important than knowledge."
> >>
> >
> >
> >
> 



Re: [music-dsp] idealized flat impact like sound

2016-07-27 Thread Andy Farnell
For impact/contact exciters you will find plenty 
of empirical studies and theoretical models in the 
literature by;

Davide Rocchesso
Bruno Giordano
Perry Cook

These are good authors to search for initial papers.

all best
Andy Farnell



On Wed, Jul 27, 2016 at 07:00:02PM +0200, gm wrote:
> 
> Hi
> 
> I want to create a signal thats similar to a reverberant knocking or
> impact sound,
> basically decaying white noise, but with a more compact onset
> similar to a minimum phase signal
> and spectrally completely flat.
> 
> I am aware thats a contradiction.
> 
> Both, minimum phase impulse and fading random phase white noise are
> unsatisfactory.
> The minimum phase impulse does not sound reverberant.
> 
> The random phase noise isn't strictly flat anymore when you window
> it with an exponentially decaying envelope
> and also lacks a knocking impression.
> 
> I am also aware that a knocking impression comes from formants and
> pronounced modes
> related to shapes and material and not flat, which is another
> contradiction..
> 
> I am not sure what the signal or phase alignment is I am looking for.
> 
> Also it's not a chirp cause a chirp sounds like a chirp.
> 
> What happens in a knock/impact besides pronounced modes or formants?
> Somehow the phases are aligned it seems, similar to minimum phase
> but then its
> also random and reverberant.
> 
> 
> Any ideas?
> 
> 
> 
> 



Re: [music-dsp] up to 11

2016-06-23 Thread Andy Farnell
On Wed, Jun 22, 2016 at 01:40:45PM -0700, Duino wrote:
> 
> This is an old problem, since the 70s, in SSB transmission.
> Specifically driven by 'hams' that want to be heard around the world.
> There have been excellent analog solutions since the late 80s.
> In order to be heard, you need to be loud in the receiving end, and in

Presumably optimised for speech. When I worked in broadcast 
audio we had a rack with "optimod" boxes at the BBC, but IIRC
they were full of clever stuff to be adaptive to programme
material. 

The whole loudness war thing really came alive for me after
seeing an AES lecture by mastering engineer Darcy Proper. 
She showed how multiband dynamic maximisers do not "add"
when chained at different stages in a process, but rather 
they lead to really counter-intuitive non-linear behaviour
that defeats and even reverses the aim of making the signal
louder and the intentions of the artist.

best,
Andy



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-12 Thread Andy Farnell
Great to follow this Ross; even with my weak powers of math
it's informative.

So, just an application note: of course the idea of "cheap" oscillators
with interesting band-limited waveforms that require no more
than a phasor and arithmetic (multiplies, integer powers, etc.) is
a goal.

I did some experiments with Bezier after being hugely inspired by
the sounds Jagannathan Sampath got with his DIN synth.
(http://dinisnoise.org/)
Jag told me that he had a cute method for matching the endpoints
of the segment (you can see in the code), and, listening, the sounds
seem to be alias-free, but we could never arrive at a proof of
that.

Now I am revisiting that territory for another reason and wondering
about the properties of easily computed polynomials again.

all best,
Andy


On Sun, Jun 12, 2016 at 02:57:41PM +1000, Ross Bencina wrote:
> On 12/06/2016 3:05 AM, Andy Farnell wrote:
> >Does it make any sense to talk about the "spectrum of a polynomial"
> >over some (periodic) interval (less than infinity)?? Or is that
> >silly talk?
> 
> For the infinite interval:
> 
> Expanding the definition of the Fourier transform, for polynomial p:
> 
> P(w) = integral -infinity to infinity p(x) [e^(-2 pi i x w) ] dx
> 
> w is a real number.
> 
> This integral diverges as a Riemann integral. The Cauchy principal
> value for polynomials of strictly odd order is zero. I don't know
> whether there's another theory of integration where this Fourier
> integral would make sense.
> 
> Looking at transform 308 in the tables here:
> https://en.wikipedia.org/wiki/Fourier_transform
> 
> It appears that if you know about distribution theory (I don't) and
> the derivatives of the Dirac delta you might be able to make sense
> of it.
> 
> But clearly a polynomial over an infinite interval is not going to
> make for a very useful signal :)
> 
> ~
> 
> For a finite (non-periodic) interval, you're essentially talking
> about a polynomial windowed with a rectangular window. Such a
> function has finite support, so the Fourier integral can be
> evaluated over a finite interval.
> 
> Consider an integral related to the Fourier integral for James'
> function with a = 2, b = 1, i.e. f(x) = (1-x^2) (the Welch window).
> 
> F(k) = integral -1 to 1 [(1-x^2)e^(ikx)] dx
> 
> Integration by parts (twice) yields:
> 
> F(k) = (-4/k^2)cos(k) + (4/k^3)sin(k)
> 
> For higher powers of a and b, you'll end up with a cascade of
> roughly ab integration by parts, and the decay of the transform is
> something like 1/k^ab.
> 
> ~
> 
> Now, how best to evaluate the Fourier integral for a repeated
> (periodic) polynomial segment?
> 
> Cheers,
> 
> Ross.
> 
> 
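Ross's closed form for the a = 2, b = 1 (Welch) case can be checked numerically. A small sketch, where the grid size is arbitrary and only the cosine part of the integral is evaluated (the imaginary part vanishes because the window is even):

```python
import numpy as np

def F_closed(k):
    # Ross's result: F(k) = (-4/k^2)cos(k) + (4/k^3)sin(k)
    return -4 * np.cos(k) / k**2 + 4 * np.sin(k) / k**3

def F_numeric(k, n=200001):
    # trapezoidal approximation of integral_{-1}^{1} (1 - x^2) e^{ikx} dx
    x = np.linspace(-1.0, 1.0, n)
    y = (1.0 - x**2) * np.cos(k * x)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

for k in (0.5, 3.0, 20.0):
    print(k, F_closed(k), F_numeric(k))  # the two columns agree
```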



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-11 Thread Andy Farnell

Hi Ross,

Thanks, a great explanation. I had not seen that the function was
to be used as a transfer for shaping a sinusoid, now the 
upper bound Robert gave makes sense.

Does it make any sense to talk about the "spectrum of a polynomial"
over some (periodic) interval (less than infinity)?? Or is that
silly talk? 
cheers,
Andy





On Sat, Jun 11, 2016 at 10:24:15PM +1000, Ross Bencina wrote:
> Hi Andy,
> 
> On 11/06/2016 9:16 PM, Andy Farnell wrote:
> >Is there something general for the spectrum of all polynomials?
> 
> I think Robert was referring to the waveshaping spectrum with a
> sinusoidal input.
> 
> If the input is a (complex) sinusoid it follows from the index laws:
> 
> (e^(iw))^2 = e^(i2w)
> 
> In excruciating detail*:
> 
> Consider the expansion of (1-z)^b (use the binomial theorem). The
> highest power of z in the expansion will be z^b.
> 
> e.g. for b = 3:
> 
> (1-z)^3 = -z^3 + 3z^2 - 3z + 1
> 
> Similarly, for f(z) = (1-z^a)^b, the highest power of z will be ab.
> (Not sure where Robert got a|b| from though).
> 
> e.g. for a = 2, b = 3:
> 
> (1-z^2)^3 = -z^6 + 3z^4 - 3z^2 + 1
> 
> 
> Now, assume that the input is sinusoidal:
> 
> Let z = e^(iw), with w being oscillator phase.
> 
> Then z^(ab) = (e^(iw))^(ab) = e^(iabw).
> 
> So, e.g. for a = 2, b = 3:
> 
> (1-(e^(iw))^2)^3
>= -(e^(iw))^6 + 3(e^(iw))^4 - 3(e^(iw))^2 + 1
>= -e^(i6w) + 3e^(i4w) - 3e^(i2w) + 1
> 
> Hence the highest harmonic will be ab above the base frequency.
> 
> 
> 
> 
> I don't know whether there is a closed-form expression for the
> spectrum of James' window functions, windowed over [-1, 1].
> 
> 
> Greetings from Down Under,
> 
> Ross.
> 
> [*] As always, I could be wrong about this.
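The "highest harmonic is ab" claim is easy to confirm empirically; a small sketch, where N, a and b are arbitrary choices:

```python
import numpy as np

# Drive f(z) = (1 - z^a)^b with one cycle of a complex sinusoid and
# see which FFT bins light up; the top non-zero bin should be a*b.
N, a, b = 64, 2, 3
w = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * w)
y = (1 - z**a) ** b

spectrum = np.abs(np.fft.fft(y)) / N
nonzero = np.nonzero(spectrum > 1e-9)[0]
print(nonzero)  # bins 0, 2, 4, 6 -- nothing above a*b = 6
```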
> 



Re: [music-dsp] a family of simple polynomial windows and waveforms

2016-06-11 Thread Andy Farnell
It is very elegant.

Robert, how did you get to that band limit calculation?
Is there something general for the spectrum of all polynomials?

cheers
Andy


On Sat, Jun 11, 2016 at 12:52:44AM -0400, robert bristow-johnson wrote:
> 
> 
> 
> 
> 
> 
> 
>  Original Message 
> 
> Subject: Re: [music-dsp] a family of simple polynomial windows and waveforms
> 
> From: "Ross Bencina" 
> 
> Date: Sat, June 11, 2016 12:08 am
> 
> To: music-dsp@music.columbia.edu
> 
> --
> 
> 
> 
> > Nice!
> 
> >
i agree. and no harmonics beyond the (|b|*a)-th harmonic.
> 
> 
> > On 11/06/2016 11:31 AM, James McCartney wrote:
> 
> >> f(x) = (1-x^a)^b
> 
> >
> 
> > Also potentially interesting for applying waveshaping to quadrature
> 
> > oscillators:
> 
> >
> 
> 
> 
>  Original Message 
> 
> Subject: [music-dsp] a family of simple polynomial windows and waveforms
> 
> From: "James McCartney" 
> 
> Date: Fri, June 10, 2016 9:31 pm
> 
> To: music-dsp@music.columbia.edu
> 
> --
> 
> 
> 
> > fun with math:
> 
> >
> 
> > You can create a family of functions, which can be used as windows, LFO
> 
> > waves or envelopes from the formula:
> 
> >
> 
> > f(x) = (1-x^a)^b
> 
> >
> 
> > evaluated from x = -1 to +1
> 
> >
> 
> > where 'a' is an even positive integer and 'b' is a positive integer.
> 
> >
> 
> > 'a' controls the flatness of the top and 'b' controls the end tapers.
> some more fun with math:
> the integral of f(x) with a=2 gives you an odd-symmetry, odd-order polynomial 
> that is as linear as it can be at x=0, splices to saturation at |x|=1, and is 
> continuous in as many derivatives as possible (i think the number is 2b) at 
> the splice, and has all derivatives continuous everywhere else (including, 
> of course, the 0th derivative).
> like the smoothest possible soft-clipping. i might have posted that tidbit 
> here before, but i can't remember.
> you can get the coefficients for the integrated f(x) using binomial expansion 
> and then a simple term-by-term anti-derivative.
> it's a polynomial i had my eye on for a while, too. might also be good for a 
> splicing function (if you offset and scale it) for constant-voltage splices 
> between well-correlated audio. if not well-correlated, there's that other 
> thing i posted a few years back that Olli N had also.
> 
> 
> 
> --
> 
> r b-j  r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
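Robert's recipe (binomial expansion, then a term-by-term anti-derivative) can be carried out mechanically. A sketch, with my own normalisation so the result maps [-1, 1] onto [-1, 1]; for b = 1 it reduces to the familiar 1.5x - 0.5x^3 soft clipper:

```python
from fractions import Fraction
from math import comb

def soft_clip_coeffs(b):
    """Odd-power coefficients of the normalised integral of (1 - x^2)^b."""
    coeffs = {}
    for k in range(b + 1):
        # binomial term (-1)^k C(b,k) x^(2k); anti-derivative x^(2k+1)/(2k+1)
        coeffs[2 * k + 1] = Fraction((-1) ** k * comb(b, k), 2 * k + 1)
    scale = sum(coeffs.values())  # value at x = 1, used to normalise
    return {p: c / scale for p, c in coeffs.items()}

print(soft_clip_coeffs(1))  # {1: Fraction(3, 2), 3: Fraction(-1, 2)}
```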





Re: [music-dsp] Creating and maintaining digital signal processing graphs

2016-05-10 Thread Andy Farnell
Old-skool method, but effective: if you build an environment by
issuing connections on the command line, you can replay them from a script.
I have successfully used the script command to capture
a command-line history and then used it to start up a project again.

Andy
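The replay idea amounts to a trivial filter over the captured log. A sketch, where the log content and port names are invented for illustration; a real session would be captured with script(1) and the extracted lines fed back to the shell:

```python
# Filter a captured shell session down to its jack_connect lines,
# giving a replayable patch script.
session_log = """\
jack_lsp
jack_connect system:capture_1 synth:in_l
jack_connect synth:out_l system:playback_1
ls projects/
"""

patch_script = "\n".join(
    line for line in session_log.splitlines()
    if line.startswith("jack_connect")
)
print(patch_script)
```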

On Tue, May 10, 2016 at 02:26:13AM +0200, Theo Verelst wrote:
> Hi all.
> 
> Just a little thought about something I regularly encounter and that is 
> interesting me.
> 
> Working with Linux and Jack/Ladspa (usually at 192kHz/32bit) I make
> moderately complicated let's call them "patch graphs" of software



Re: [music-dsp] entropy

2014-10-20 Thread Andy Farnell
On Mon, Oct 20, 2014 at 10:00:13AM -0700, Ethan Duni wrote:

> Meanwhile, I'll point out that it's been a long time since anybody on this
> thread has even attempted to say anything even tangentially related to
> music dsp. 


The first thing that came to my mind after seeing Peter's image
processing examples was " what would that sound like applied to,
say, a noisy conversation?"

Would it fare better or worse with different kinds of noise?
(rhetorical)

And, maybe more interesting... would that property itself lead to
side applications of a musically generative kind?

best,
Andy


-- 
Stop reading here
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] HTML

2013-08-09 Thread Andy Farnell

Since the problem with HTML is on the sender/input side
(some people being unable _not_ to send HTML, and text/plain
being a subset of the HTML) then it's likely a procmail
filter in mailman can make life easier for more senders
without pissing off the established readers who use text
only capability. See for example;


http://permalink.gmane.org/gmane.mail.procmail/8697
http://board.issociate.de/thread/394999/Reliable_html2plain/text_conversion_using_procmailrc.html

cheers,
Andy



Re: [music-dsp] Thesis topic on procedural-audio in video games?

2013-03-05 Thread Andy Farnell
Of the many directions I have discussed with my masters and
doctoral students it's possible to see a breakdown into several
categories with some overlap into HCI, CG, SID and DSP.

The application of a given synthesis method to a specific class
of sounding object.

Behavioural or interaction models. Sonic interaction design overlap.

Frameworks and tool-chains. 

Parallelism, concurrency and modularity.

Replication, consistency and detail in network games.

Level of audio detail, perceptual models.

Statistical methods, dictionaries and grain databases.

Orthogonally, I would offer the taxonomy given in Designing Sound
as a useful second lens

Linear and non-linear dynamics
 Collision, rolling, crushing, fragmentation, friction and slip models
Natural environments, textures
Machina, sonic aspects of human design
Biological models, animals, speech and bioacoustics
High energy events (explosion, thunder, supersonic shock)

In addition, a more MA or arts PhD may look at the relation
to traditional sound design viz meaning, emotion, metaphor
and simile, power dynamics and/or the cultural implications
of deferred or displaced artistic forms.

This only scratches the surface, but these are fairly high level
problem views.

best
Andy




On Tue, Mar 05, 2013 at 09:08:43AM +0100, Danijel Domazet wrote:
> Hi mdsp, 
> We need a masters thesis topic on procedural audio in video games. Does
> anyone have any good ideas? It would be great if we could afterwards
> continue developing this towards a commercial products.  
> 
> Any advice most welcome. 
> 
> Thanks!
> 
> Danijel Domazet
> LittleEndian.com
> 
> 


Re: [music-dsp] RE : RE : TR : Production Music Mood Annotation Survey 2013

2013-02-22 Thread Andy Farnell



Hi Mathieu,

I think, as research goes,  you've struck gold. 

It might not serve the immediate goals, or feel like it right
now, but frequently where you encounter strong opinions in a line of 
enquiry it verifies the work has value, although you may need to look hard at
what that value is.

With the opinions I've read here, I mainly agree that the survey
is a blunt instrument. No doubt you know that and wish it could
be otherwise. Though I generally discourage my students from
using such tools,  sometimes it is all we have, for practical
reasons of cost and sadly, increasingly in universities, for
reasons of administrative red tape that inhibits many "real"
experiments from being done.

But since the initial research reveals critical opinion
about the very nature of the experiment, might it be time to
pause for thought and see if you're actually sitting on a much 
more interesting question?

BTW, I will be over at QMUL this afternoon taking
a lecture with Andrew McPherson's DAFx group; perhaps this
conversation could continue in the Half Moon after 16:00 ?

all best
Andy





On Fri, Feb 22, 2013 at 05:33:34AM +, mathieu barthet wrote:
> May I...
> 
> On 20/02/2013 16:12, Richard Dobson wrote:
> .. So were I to do
> > the survey, I fear I might be guilty of some mischief.
> >
> 
> > So in fact, I need not have worried - mischief is built into the system!
> 
> > Richard Dobson
> 
> @Richard: If I understand well your points, I believe not quite. I believe 
> you should have said:
> 
> "So in fact, I need not have worried - *I believe* mischief is built into the 
> system!"
> 
> A bon entendeur, salut! (I believe that even speaking English is biased as 
> it's not my natural language so using French there makes sense to me).
> 
> @all: I may not answer all the questions on the experiments aroused by the 
> discussion for the sake of time but I thank those of you who made interesting 
> comments.
> 
> Mathieu Barthet
>  
> 


Re: [music-dsp] M4 Music Mood Recommendation Survey

2013-02-21 Thread Andy Farnell

I have noticed Ross, that I tend to seek out music that reflects
an already emerging emotion, such that the music then precipitates 
a physiological emotion. If I am in the mood to be excited by
Bizet or the Furious Five MCs, then Radiohead or Gorecki
cannot sadden me. And conversely. The longer I have
studied music the more it seems plausible; music does not
drive emotion, emotion drives music. At such times as we 
encounter cultural supposition, as in a film score, the music 
may resonate more strongly with expectation and "work its
magic on us", but the emotion is not extempore in the music,
it lives in the listener. Production music, as a choice of score
to complement activity is therefore a question of good fit.
A gifted composer or music supervisor chooses carefully,
informed by an understanding of narrative context. The disagreement
with some MIR projects, if indeed it is their mistake, is that
they presume music to be the driver and suppose a strict causality.
This makes many industry investors, mainly advertisers, very excited
when they assume a manipulative (a la Bernays/Lippmann) application.
I find many sensitive artists are repelled by this idea, not
from a sense of instrumental reason displacing the artist, but from 
an understanding that the listener is not a passive subject
amenable to a behaviourist interpretation. 

best @ all
Andy



On Fri, Feb 22, 2013 at 10:19:02AM +1100, Ross Bencina wrote:
> 
> 
> On 22/02/2013 9:54 AM, Richard Dobson wrote:
> >"Listen to each track at least once and then select which track is the
> >best match with the seed. If you think that none of them match, just
> >select an answer at random.
> >"
> >
> >Now I am no statistician, but with only four possible answers offered
> >per test, and with "none of the above" excluded as an answer (which
> >rather begs the question...),
> 
> You mean the one about adding to the large number of studies
> offering empirical evidence in support of the assumption?
> 
> 
> """However, despite a recent upswing of research on musical emotions
> (for an extensive review, see Juslin &Sloboda 2001), the literature
> presents a confusing picture with conficting views on almost every
> topic in the field.1 A few examples may suffice to illustrate this
> point: Becker (2001, p. 137) notes that “emotional responses to
> music do not occur spontaneously, nor ‘naturally’,” yet Peretz
> (2001, p. 126) claims that “this is what emotions are: spontaneous
> responses that are dif?cult to disguise.” Noy (1993, p. 137)
> concludes that “the emotions evokedby music are not identical with
> the emotions aroused by everyday, interpersonal activity,” but
> Peretz (2001, p. 122) argues that “there is as yet no theoretical or
> empiricalr eason for assuming such specifcity.” Koelsch (2005,p.
> 412) observes that emotions to music may be induced “quite
> consistently across subjects,” yet Sloboda (1996,p. 387) regards
> individual differences as an “acute problem.” Scherer (2003, p. 25)
> claims that “music does not induce basic emotions,” but Panksepp and
> Bernatzky(2002, p. 134) consider it “remarkable that any medium
> could so readily evoke all the basic emotions.” Researchers do not
> even agree about whether music induces emotions: Sloboda (1992, p.
> 33) claims that “there is a general consensus that music is capable
> of arousing deep and signifcant emotions,” yet Konec?ni (2003, p.
> 332) writes that “instrumental music cannot directly induce genuine
> emotions in listeners.” """
> 
> http://www.psyk.uu.se/digitalAssets/31/31194_BBS_article.pdf
> 
> 
> Ross

Re: [music-dsp] Precision issues when mixing a large number of signals

2012-12-11 Thread Andy Farnell
On Mon, Dec 10, 2012 at 01:39:48PM +1100, Ross Bencina wrote:

> avoid any loss of precision due to truncation... etc. There is also
> arbitrary precision arithmetic if you don't want to throw any bits
> away. 

This seemed most pertinent to Alessandro's requirement that N was unknown
and might become very large. Although I cannot imagine the exact application
for a varying and unbounded number of signal sources where you also need
to potentially know the sum divided by N at _any_ step to perfect accuracy,
variable-length fixed point seems the way to go. If you can afford the 
space of adding a bit on every step of the accumulation then accumulate and 
shift right without truncation will keep arbitrary precision and magnitude. 
At some point however I guess you need to turn that into a more modest
representation for some real hardware, and you defer some cost till that
time.
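As a sketch of that accumulate-without-truncation idea (a hypothetical illustration only: Python's arbitrary-precision integers stand in for variable-length fixed point):

```python
from fractions import Fraction

# Sketch: Python ints grow as needed, mirroring "add a bit on every step
# of the accumulation", so the sum -- and sum/N -- stays exact at any step.
def exact_running_mean(samples_q15):
    """Yield the exact mean (as a Fraction) after each Q15 sample arrives."""
    acc = 0  # arbitrary-precision accumulator, never truncated
    for n, s in enumerate(samples_q15, start=1):
        acc += s
        yield Fraction(acc, n)  # the sum divided by N, exact at any step

means = list(exact_running_mean([32767, -32768, 12345, 1]))
# means[1] == Fraction(-1, 2)
```

The deferred cost Andy mentions shows up when `means` must finally be rounded into a fixed-width format for real hardware.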



Re: [music-dsp] Please consider this Parallella supercomputer kickstarter

2012-10-27 Thread Andy Farnell

Wonderful news. 

Occam, that brings back some memories from CS class.

Now, I just hope the Parallella can deliver better than the
Raspberry Pi did. In my rather frank opinion, the Raspberry Pi
has been horribly, horribly managed and stands as a model of
an inherent tension in crowd driven projects, that the hype
machine and the supply capacity cannot be reconciled. 

One whole year down the line and I am still struggling to get 
boards for my students, or even for myself for presentations
and courseware development. Every journalist, blogger and fanboy 
on the planet has one of these things, but those of us to 
whom the Raspberry Pi was ostensibly targeted, lecturers, course
developers, school teachers, educational groups, have been effectively
frozen out. I am reduced to buying them from Maplin in the UK
for £70 because that is the only reliable supplier.

And I even heard a rumour that they had sold out manufacture to 
Sony, so now I guess they come with a free rootkit installed.

/grump

Andy

  



On Sat, Oct 27, 2012 at 11:30:52AM +0100, Richard Dobson wrote:
> I have just had an email via the Occam-Pi mailing list (home base is
> Kent University) that the Parallela ~has~ been "successfully crowd
> funded" - there was "a huge surge with 12 hours to go".
> 
> Richard Dobson
> 
> On 26/10/2012 21:11, Charles Henry wrote:
> >(because I guess I'm an NVIDIA whore?)  I'm skeptical of the Adapteva
> >claim of 50 GFLOPS per Watt on a system that only has 16 cores.  The
> >top of the line NVIDIA cards will only do 13 GFLOPS per Watt (single
> >precision).  No way are you going to get better power performance than
> >GPU's with very large numbers of cores.
> >
> >I get that it's a pipelined RISC chip, which sounds a lot like the IBM
> >Power series (I guess the modern equivalent is their BlueGene
> >products).  See the green500 rankings to get a better picture on what
> >energy efficient supercomputers are like.  For example, here
> >http://www.green500.org/lists/green201206
> >The top efficiency for a supercomputer comes in at 2 GFLOPS per Watt.
> >
> >I won't say that alternatives to the Parallela project are cheap--but
> >the processor sounds almost too good to be true.  Once you add in the
> >power requirements for the infrastructure built around that smokin
> >fast multi-core processor, the total FLOPS per watt figure will
> >decrease too.
> >
> >I for one have more computers than I care to learn how to program for 
> >already.
> >
> 


Re: [music-dsp] Please consider this Parallella supercomputer kickstarter

2012-10-26 Thread Andy Farnell

Any other comments in defence, cos I'm sitting here with my money
about to invest. :/

Andy


On Fri, Oct 26, 2012 at 12:12:23PM -0700, Eric Brombaugh wrote:
> I've been discussing this Adapteva outfit's Kickstarter with a bunch
> of other embedded processing / ASIC / DSP guys and it just doesn't
> add up. For $99 they're promising a devboard which in addition to
> their parallel processing chip also contains a Xilinx Zync SoC that
> costs more than $200 in 1000pc quantity today. Assuming a low-ball
> BOM of $300/ea, that means they're going in the hole roughly $600k
> to build the ~3000 boards that they're currently promising.
> Considering that the $750k Kickstarter has to fund not only the
> devboards but also the NRE costs for the parallel chip production,
> it's hard to see how the numbers add up.
> 
> Since it looks like they're closing in on the goal at a rate that
> will put them over the top in a few hours, it's likely that this one
> will fund. This means that if their business plan isn't viable, the
> backers are out their pledges unless the company works out a refund
> deal.
> 
> I'd love to see this succeed - it's a neat idea and would be useful
> in a lot of areas, including music DSP. Something about it seems
> hinky though...
> 
> TLDR: Approach with caution.
> 
> Eric
> 
> On 10/26/2012 08:14 AM, Mike wrote:
> >Hey, and sorry if this is spammy or otherwise inappropriate, but I just
> >heard about this kickstarter campaign to make a 16-core (and 64-core
> >coming soon) parallel floating point "supercomputer" running at 1 GHz
> >(!), seems like just the ticket for a lot of music/audio algorithms, and
> >free open source development tools.
> >
> >http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone
> >
> >
> >They are very close and it seems like great technology and you can get
> >an actual system for a fairly modest pledge.
> 
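Eric's back-of-envelope figures can be checked in a couple of lines (his numbers, not independently verified):

```python
# Eric's figures: $99 pledge per devboard, ~$300 low-ball BOM, ~3000 boards.
board_price = 99
bom_estimate = 300
boards = 3000
shortfall = (bom_estimate - board_price) * boards  # roughly the $600k hole
```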


Re: [music-dsp] need help with gnuplot and octave on my mac.

2012-10-26 Thread Andy Farnell



Hi Robert,

Not sure I can help because I am not a Mac user, but I do use Octave
and Gnuplot fairly regularly.

I wonder is there systematic change in the way Mac supports X graphics?

One thought, I stopped using environment variables with Gnuplot
and always use  a  .gnuplot config (according to man gnuplot it doesn't
need any ENV variables and prefers the .gnuplot file)
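For what it's worth, a minimal `.gnuplot` along those lines might be just (assuming an X11-capable gnuplot build):

```
# ~/.gnuplot -- read automatically at startup, so no GNUTERM env variable needed
set terminal x11
```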

best
Andy


On Fri, Oct 26, 2012 at 05:03:14AM -0700, robert bristow-johnson wrote:
> 
> say, any among you using a Mac and Octave and gnuplot?  i used to be
> able to plot with Octave, it would start up X11 and if i set the
> variable GNUTERM=x11 before starting Octave this would work.  now it
> doesn't :-(
> 
> anybody know what i'm doing wrong?  i could use some help.  thanks for any.
> 
> r b-j
> 
> 
> 
> Last login: Wed Dec 31 16:01:14 on console
> Roberts-PowerBook-G4-15:~ Robert$ GNUTERM=x11
> /Applications/Octave.app/Contents/Resources/bin/octave
> GNU Octave, version 3.2.3
> Copyright (C) 2009 John W. Eaton and others.
> This is free software; see the source code for copying conditions.
> There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or
> FITNESS FOR A PARTICULAR PURPOSE.  For details, type `warranty'.
> 
> Octave was configured for "powerpc-apple-darwin8.11.1".
> 
> Additional information about Octave is available at http://www.octave.org.
> 
> Please contribute if you find this software useful.
> For more information, visit http://www.octave.org/help-wanted.html
> 
> Report bugs to  (but first, please read
> http://www.octave.org/bugs.html to learn how to write a helpful report).
> 
> For information about changes from previous versions, type `news'.
> 
> octave-3.2.3:1> x=linspace(-1,1);
> octave-3.2.3:2> y=x.^2;
> octave-3.2.3:3> plot(x,y);
> dyld: Library not loaded: /usr/X11/lib/libfreetype.6.dylib
>   Referenced from: /usr/X11R6/lib/libfontconfig.1.dylib
>   Reason: Incompatible library version: libfontconfig.1.dylib
> requires version 13.0.0 or later, but libfreetype.6.dylib provides
> version 10.0.0
> dyld: Library not loaded: /usr/X11/lib/libfreetype.6.dylib
>   Referenced from: /usr/X11R6/lib/libfontconfig.1.dylib
>   Reason: Incompatible library version: libfontconfig.1.dylib
> requires version 13.0.0 or later, but libfreetype.6.dylib provides
> version 10.0.0
> /Applications/Gnuplot.app/Contents/Resources/bin/gnuplot: line 71:
> 1014 Trace/BPT trap  GNUTERM="${GNUTERM}"
> GNUPLOT_HOME="${GNUPLOT_HOME}" PATH="${PATH}"
> DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH}" HOME="${HOME}"
> GNUHELP="${GNUHELP}" DYLD_FRAMEWORK_PATH="${DYLD_FRAMEWORK_PATH}"
> GNUPLOT_PS_DIR="${GNUPLOT_PS_DIR}" DISPLAY="${DISPLAY}"
> GNUPLOT_DRIVER_DIR="${GNUPLOT_DRIVER_DIR}"
> "${ROOT}/bin/gnuplot-4.2.6" "$@"
> error: you must have gnuplot installed to display graphics; if you
> have gnuplot installed in a non-standard location, see the
> 'gnuplot_binary' function
> octave-3.2.3:4>
> /Applications/Gnuplot.app/Contents/Resources/bin/gnuplot: line 71:
> 1012 Trace/BPT trap  GNUTERM="${GNUTERM}"
> GNUPLOT_HOME="${GNUPLOT_HOME}" PATH="${PATH}"
> DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH}" HOME="${HOME}"
> GNUHELP="${GNUHELP}" DYLD_FRAMEWORK_PATH="${DYLD_FRAMEWORK_PATH}"
> GNUPLOT_PS_DIR="${GNUPLOT_PS_DIR}" DISPLAY="${DISPLAY}"
> GNUPLOT_DRIVER_DIR="${GNUPLOT_DRIVER_DIR}"
> 
> -- 
> 
> r b-j  r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
> 
> 


Re: [music-dsp] DAFx 2012 - Software Development for Audio and Music Researchers Tutorial

2012-10-20 Thread Andy Farnell
On Sat, Oct 20, 2012 at 03:25:29PM -0400, Michael Gogins wrote:

> Please if you write code as part of your research, use C or C++ as
> these are the standard systems programming languages for commercial,
> government, and scientific research in critical and numerical systems.


Space, in published column inches, is often a limitation for papers.
Actually I would love to see more plain C/C++ too, but often it would
be many classes and requisite functions running to many pages.
Often we get a formula and pseudocode, which is often enough
to code from. I think what the C4DM guys (Mark, plus Chris Cannam and
Simon Dixon) were aiming for was viable supplemental resources to
foster reproducibility.

Universities seem to be _appallingly_ bad at maintaining web resources.
Ironic, given the early educational impetus of the internet, that the
road is strewn with the bones of dead links to personal home directories
and defunct research groups. Rather than maintaining them, universities,
often ruled by their IT departments, vandalise this treasure by following
asinine policies for the sake of a few megabytes of disk space.


> I have problems with Matlab code not because I don't understand it but
> because I myself do not have a license. I would prefer that
> researchers using such systems use open source alternatives.

Yes, we must accept Matlab, as it is the modern Fortran and conceptually 
compact for DSP, particularly in teaching. Unfortunately the compatibility 
with Octave is not good; at the least it's a constant struggle. So I would
also make the plea:

 *** Please use OCTAVE, not MATLAB *** :)


> Of course I would love to find papers and code on the author's Web
> site. Many authors do do this and I find this immensely helpful.

IMHO it is the mark of a career professional, in it for the long game,
that they maintain a personal website with a catalogue of their work.

best
Andy



Re: [music-dsp] DAFx 2012 - Software Development for Audio and Music Researchers Tutorial

2012-10-20 Thread Andy Farnell

On Sat, Oct 20, 2012 at 04:17:29PM +0100, Victor Lazzarini wrote:
> What do you mean?

In Mark's slides. 
The case for proper publication, documentation, open access and intellectual 
honesty.
The values that most of us old beards consider to be the foundation of _real_ 
science.

Or are you asking about the eclipse of such values currently blighting academia?
If so that's a big ole can-o-worms, and too OT for me to want to open here. :)

best
Andy






Re: [music-dsp] DAFx 2012 - Software Development for Audio and Music Researchers Tutorial

2012-10-20 Thread Andy Farnell

Great to see Prof. Mark Plumbley talking some sense about the train wreck
of the present academic trajectory in those slides.



On Fri, Oct 19, 2012 at 05:52:09PM +0100, Luis Figueira wrote:
> Dear MUSIC-DSP list members, 
> 
> we'd like to let you know that the handouts, slides and other materials from 
> our Software Development for Audio and Music Researchers tutorial at this 
> year's edition of DAFx can now be found at: 
> 
> http://soundsoftware.ac.uk/videos#dafx12-slides
> 
> and
> 
> http://soundsoftware.ac.uk/handouts-guides
> 
> There is also some material from other workshops (and more to be added soon). 
> Hope this is of interest. 
> 
> Kind regards, 
> 
> Luis Figueira
> SoundSoftware.ac.uk
> 


Re: [music-dsp] ARM for DSP

2012-09-25 Thread Andy Farnell


How do you develop code for these Eric? What is your toolchain?

best
Andy

On Tue, Sep 25, 2012 at 09:43:09AM -0700, Eric Brombaugh wrote:
> The STM32F4 series parts are cheap, fast, powerful for their class.
> Not really on par with the Cortex A8 and Atom machines but great for
> embedded.
> 
> I've got a little audio signal processing project based on the
> STM32F4 going now:
> 
> http://ebrombaugh.studionebula.com/synth/stm32f4_codec/index.html
> 
> I've been writing code on this for the last few weeks and have a few
> basic audio effects running on it. I've tried some frequency domain
> processing as well - 128-sample real floating point FFT/IFFT takes
> about 120us with the CPU running at max rated clock speed.
> 
> Definitely worth checking out if you've got modest DSP to do on a budget.
> 
> Eric
> 
> On 09/25/2012 09:28 AM, Nigel Redmon wrote:
> >I haven't had time to do much with it yet, but the STM32F4DISCOVERY board is 
> >bargain, with single precision floating point and DSP features (single cycle 
> >MAC, saturated arithmetic, SIMD), and a bunch of nice goodies on the baord, 
> >$14.55 at Mouser.
> >
> 
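To put Eric's 120 µs figure in context (assuming 48 kHz audio; the sample rate is my assumption, not stated in his post):

```python
# One 128-sample block at 48 kHz represents ~2.67 ms of audio, so a 120 us
# FFT/IFFT pair costs under 5% of real time at the max rated clock.
fft_time_us = 120.0
block_us = 128 / 48000 * 1e6
cpu_fraction = fft_time_us / block_us  # about 0.045
```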


[music-dsp] Aalborg openings

2012-07-17 Thread Andy Farnell
I'm posting this to one or two lists, please excuse
me if you've seen it already

Some relevant new academic positions for comp music and DSPers

Cheers,
Andy


FW:
---

Regards,

Mark


###
Music research at Aalborg University, with support from the Obel Family
Foundation, is seeking to fill three, two-year postdoctoral positions and
two or three, three-year PhD positions.  We are a small, but rapidly
expanding, research group active in the following areas of relevance to
these positions:  generative music, music/sound design and production,
biofeedback (emotioneering), sound semantics, and multi-modality.

Successful postdoctoral applicants will be expected to pursue a course of
study within the broad field of music/sound that is related to, but not
limited to, the areas listed above.  Applicants should have obtained their
PhD no earlier than 1st August 2007 and must demonstrate in their
application their willingness and ability to work within Aalborg
University's interdisciplinary environment.

PhD applicants will be expected to propose a thesis topic within the field
of music and sound production particularly as it relates to emotion,
biofeedback, or computer games and should be able to demonstrate, through
qualifications and/or publications, competency in the following:
music/sound design and production and at least one of computer
programming, psychology, or cognitive science.  The number of PhD
positions appointed depends upon external funding acquired for each
position and applicants are encouraged, but not required, to seek a
portion of funding from other sources such as industry.  Any such
provisional funding offers should be indicated in the application.

All applicants are expected to be at ease in both practical and
theoretical milieu.

PhD:  http://www.vacancies.aau.dk/show-vacancy/?vacancy=414924

PostDoc:  http://www.vacancies.aau.dk/show-vacancy/?vacancy=414154




--
Mark Grimshaw
Obel Professor of Music
TEL: (+45) 99 40 91 00
FAX: (+45) 98 15 45 94
Aalborg Universitet
Institut for Kommunikation
Kroghstræde 6, Lokale 19
9220 Aalborg Ø
Denmark




Re: [music-dsp] programming for electric pickups

2012-07-04 Thread Andy Farnell


All I can think is that hammered strings are a gift when it 
comes to transient detection, so you should be able to
do quite nice digital processing of a dulcimer, especially
with a multi-channel 'pickup per string' setup.
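A minimal sketch of the kind of energy-based transient detection that hammered strings make easy (the frame size and jump ratio here are illustrative guesses, not tuned values):

```python
def detect_onsets(x, frame=64, ratio=4.0):
    """Return start indices of frames whose mean energy jumps by `ratio`
    over the previous frame -- a crude hammer-strike detector."""
    onsets = []
    prev = 1e-12  # floor to avoid dividing silence by zero
    for i in range(0, len(x) - frame + 1, frame):
        e = sum(s * s for s in x[i:i + frame]) / frame
        if e > ratio * prev:
            onsets.append(i)
        prev = max(e, 1e-12)
    return onsets

# Silence followed by a sudden burst: one onset, at the burst's frame
hits = detect_onsets([0.0] * 128 + [0.5] * 64)
```

With a pickup per string, running one detector per channel also tells you *which* string was struck.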
cheers,
Andy

On Thu, Jul 05, 2012 at 01:07:52AM +0430, Sajjad Abdoli wrote:
> Dear Music-dsp list,
> 
> I have a question about electric pickups. Have you ever tried to
> beaten pickup(s) on a musical instrument
> (specifically, hammered dulcimer) and connect it to a computer and
> then write a computer program based on it? Is there any paper or
> open-source program available? what facilities are needed?
> 
> Thanks in advance,
> Sajjad Abdoli


Re: [music-dsp] Noise gate for guitar amplifiers and hysteresis

2012-07-04 Thread Andy Farnell
Another take, which comes to mind from servo/control electronics,
is that the ramp and the hysteresis don't have to be treated
as separate blocks. Instead of a 'hard' hysteresis like a Schmitt
trigger, followed by a linear ramp or LPF, make the rate of change
of gain proportional to some function above the threshold
and some other function below it. IIRC the usual choice for this
sort of thing is a sigmoid function.
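A toy control-rate sketch of that idea (the rate constant and the use of a logistic sigmoid are my illustrative choices, not a reference design):

```python
import math

def gate_step(gain, level_db, threshold_db, rate=0.05):
    """One control-rate update: the gain slews toward a soft open/closed
    target set by a sigmoid of the level's distance from threshold, so
    the hysteresis-like behaviour and the ramp come from one mechanism."""
    target = 1.0 / (1.0 + math.exp(-(level_db - threshold_db)))  # 0..1
    return gain + rate * (target - gain)

g = 0.0
for _ in range(500):            # signal well above threshold: gate eases open
    g = gate_step(g, -10.0, -40.0)
```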

cheers,
Andy

On Wed, Jul 04, 2012 at 08:56:28PM +0100, Rob Belcham wrote:
> Hi Ivan,
> 
> Another flavour of skew would be to add an additional attack &
> release envelope between the target gain calculation (G1 or G2) and
> VCA. This would typically be the envelope controls that you provide
> to the user, while the RMS detector envelope parameters are usually
> fixed. As the gate opens, the VCA ramps between open & closed gain
> over the attack time & when the gate closes, the VCA gain ramps down
> from the open gain to the closed gain.
> 
> Hope this helps
> Regards
> Rob
> 
> --
> From: "robert bristow-johnson" 
> Sent: Wednesday, July 04, 2012 5:44 PM
> To: 
> Subject: Re: [music-dsp] Noise gate for guitar amplifiers and hysteresis
> 
> >On 7/4/12 11:06 AM, Ivan Cohen wrote:
> >>Hello rbj !
> >>
> >>What do you mean by "slew" ? Is it a filtering applied on the
> >>VCA attenuation ?
> >
> >specifically, *low-pass* filtering.
> >
> >>
> >>I think the answer to your question is obviously no :) I may
> >>have missed a point in the implementation of noise gates. I
> >>would be glad if you can detail a little...
> >>
> >
> >sure, but it's the 4th of July here in the states.  i was about to
> >get on my bike.  i'll get back to this tonight.  okay?
> >
> >-- 
> >
> >r b-j  r...@audioimagination.com
> >
> >"Imagination is more important than knowledge."
> >
> >
> >


Re: [music-dsp] _ Pointers for auto-classification of sounds?

2012-06-17 Thread Andy Farnell
On Thu, Jun 14, 2012 at 05:50:59PM +1000, Ross Bencina wrote:
> On 14/06/2012 5:29 PM, Andy Farnell wrote:
> >Maybe this isn't the same hazard of dimensionality that Dan
> >warns us of... I'm saying the space is definitely_warped_
> >with big areas of nothingness between apparently close
> >and similar points. Does that make sense?
> 
> Seems to me there are at least three difference spaces:
> 
> A- "environmental" timbre space (perhaps the space of physically
> realisable sounds? or the space of previously experienced stimuli)
> 
> B- "perceptual/cognitive" timbre space
> 
> C- synthesis parameter space(s)

Broadly, I'd agree exactly there Ross.

How about breaking it down a little more:

The (A) that you mention, and you note its overlap with (B), is what
I think David Beck nails with the term "acoustically viable". That
set of physical parameters, masses, tensions, radiation areas,
kinetic energies and so on, objectively quantifiable hard properties
that can be set up to produce a sound in the real world.

Between A and B though there exists a "real signal domain" for want of
a better term. Here, "real" means analogue (continuous) pressure function
of time from some observation (audition) point(s). It is as the view
or photograph is to the actual scene. Captured by a microphone and
transformed digitally, we may still call this our signal.

At the next transformative boundary, into the cognitive, it becomes
an experience, eliciting feeling, sense, association, recognition.
Any attempt to name that must be in terms of, as you say, a "cognitive
parameter space". 

Then things get interesting I think.

There are two cognitive spaces, I postulate: one is an analytical set,
and the other is a synthetic set. A declarative (what is) apprehension,
and an imperative (how to) formulation. The degree to which these
flow easily together is a measure of the skills/mastery/virtuosity
of the performer/instrumentalist/synthesist to turn (B) back into
a signal via some other route by dint of (C). (Here, "synthesis
parameter" can be the forces applied to a real musical device by motor
output, we do not have to limit it to the digital)

 
> Maybe B is a function of neural structures which are either
> genetically or experientially developed over past exposure to A.

As more of an empiricist I am inclined towards the latter.
 
> C is independent of A and B, thus it is not really surprising that
> the C->B mapping is warped. You would expect B to be more reflective
> of A (B being a kind of self-organising map of the parts of A that
> humans "care" about).

Indeed. Exactly one of the places I ended up in fact, using
a Kohonen SOM to try and make the bridge across this warped space
to an arbitrary C-space. But recall the "Other B"; let us call them
B1 and B2. Just as B1 conforms through experience to mirror A,
so does B2 converge on an approximation of C through use. 
We learn to use synthesisers, or to speak (vocal apparatus
as synthesiser, why not). The learning task for the master of
an instrument is therefore to mentally connect B1 to B2.

I had an interesting thought, quite randomly the other day
about how an intelligent animal can probably do this for
any pair of parameter spaces. Parrots are non-oscines and
they have no tracheal muscles to speak of, yet they learn
to mimic human speech using, apparently, a form of FM.

cheers,

Andy







Re: [music-dsp] _ Pointers for auto-classification of sounds?

2012-06-14 Thread Andy Farnell

> > Upshot: needle in a haystack

> i dunno about that.  at least for classifying isolated musical
> notes, you'll do better than needle in haystack by representing
> notes broken down into parameter "trajectories" (for lack of a
> better word) that include amplitude, pitch, and various
> timbre-related parameters like inharmonicity, spectral centroid,
> spread, skew, etc.  that's six envelopes to match and you can choose
> to ignore some of them.  for single notes or tone, aren't these
> parameters that might be perceptually the more apparent?  in other
> words, if you had two notes that you had trouble matching the
> waveforms, if these parameter trajectories all matched pretty well,
> wouldn't you say that something about these notes sound similar?
> 
> 
> -- 
> 
> r b-j  r...@audioimagination.com

Absolutely yes Robert. The first work on this, the paper that really
got me interested, was by Wessel and Grey. They took the approach of using
perceptual (objectively measurable yet common and sensible) axes like
attack and roughness. Timbre spaces, and then trajectories within such
spaces can prove useful for all of the tasks of (going from less to
more specific):

discovery
classification
identification/matching
recognition
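The "trajectory" idea can be sketched concretely. Here is a frame-by-frame spectral centroid, one of the envelopes Robert lists (a naive DFT keeps the sketch dependency-free; a real system would use an FFT library):

```python
import cmath

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of one frame, via a naive DFT."""
    n = len(frame)
    mags = [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]
    total = sum(mags)
    if total == 0:
        return 0.0
    return sum(k * sr / n * m for k, m in enumerate(mags)) / total

def centroid_trajectory(x, sr=8000, frame=64):
    """One centroid value per non-overlapping frame: a timbre trajectory."""
    return [spectral_centroid(x[i:i + frame], sr)
            for i in range(0, len(x) - frame + 1, frame)]
```

Matching two notes then reduces to comparing such trajectories (e.g. by distance after time alignment) rather than comparing raw waveforms.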

The haystack starts where you want to do more than get an
indicator of whether a sound is a certain kind of thing (classify),
and move towards something more specific.

Actually the haystack is a bad analogy. It's more like
mining for rare metals in a great landscape. Needles in
haystacks might be thought of as evenly distributed;
minerals, and sounds (perceptually), are clustered.

So, a whole bunch of things in signal space might 
sound the same. But then you find an area where the smallest
parameter changes everything.

Maybe this isn't the same hazard of dimensionality that Dan 
warns us of... I'm saying the space is definitely _warped_
with big areas of nothingness between apparently close
and similar points. Does that make sense? 



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-13 Thread Andy Farnell




I would second that. My research in the 1990s led to the same conclusion,
in essence the parametric space is vast while the perceptually useful space
is very small and sparsely dotted around in the param space.
Upshot: needle in a haystack

Andy


On Wed, Jun 13, 2012 at 10:13:03AM +0100, Dan Stowell wrote:
> On 11/06/2012 17:13, Charles Turner wrote:
> >Ross pretty much guessed my interest. Trying to see whether it's
> >possible to automate the exploration of the parameter space of a
> >synthesis algorithm. Imperative to something like that would be a
> >procedure to analyze large quantities of sound data. Hence timbre
> >classification. Of course, the process could be really simple:
> >eliminating the settings that produced zero output, or immediately
> >went into self-oscillation. But who knows where the line between "too
> >many to audition" and convincing timbre classification lies?
> 
> I haven't read all of this thread but let me give you a warning from
> experience: the Curse Of Dimensionality is a real killer for
> optimism in exploring large parameter spaces :(
> 
> In my PhD  I was automatically
> exploring large parameter spaces in order to make useful analogies
> between voice timbre and synth timbre. It's all too easy to fall
> into an optimism that genetic algorithms, or simulated annealing, or
> deep belief networks, or some other flavour of the month might get
> you round this one. But you can't. Either you restrict yourself to
> synthesis techniques whose output has a compact closed form (so you
> can do an exact deduction) (boring!), or to those which have a small
> number of parameters (boring!), or you limit yourself to some kind
> of random sample of what's possible. And from my experience, I
> personally don't believe that GAs get you anywhere more interesting
> than random sampling.
> 
> 
> 
> Best
> Dan
> -- 
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS
> http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
> http://www.mcld.co.uk/


Re: [music-dsp] .wav file format conversion tools

2012-06-08 Thread Andy Farnell

You could write this in a jiffy with libsndfile or something right.
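With libsndfile this really is a few lines of C: open with `SFM_READ`, set the output `SF_INFO.format` to `SF_FORMAT_WAVEX | SF_FORMAT_PCM_16`, and copy frames across. For the archive, here is a dependency-free Python sketch of the same conversion that parses the RIFF chunks by hand; the function name and the all-channels channel mask are my own choices, not part of any API:

```python
import struct

# KSDATAFORMAT_SUBTYPE_PCM GUID used by WAVE_FORMAT_EXTENSIBLE
PCM_GUID = b'\x01\x00\x00\x00\x00\x00\x10\x00\x80\x00\x00\xaa\x00\x38\x9b\x71'

def float_wav_to_wavex_pcm16(data):
    """Convert an IEEE-float (format tag 3) WAV to WAVE_FORMAT_EXTENSIBLE PCM16."""
    assert data[:4] == b'RIFF' and data[8:12] == b'WAVE', 'not a RIFF/WAVE file'
    pos, fmt, raw = 12, None, None
    while pos + 8 <= len(data):                      # walk the chunk list
        cid = data[pos:pos + 4]
        size = struct.unpack('<I', data[pos + 4:pos + 8])[0]
        body = data[pos + 8:pos + 8 + size]
        if cid == b'fmt ':
            fmt = struct.unpack('<HHIIHH', body[:16])
        elif cid == b'data':
            raw = body
        pos += 8 + size + (size & 1)                 # chunks are word-aligned
    tag, channels, rate, _, _, bits = fmt
    assert tag == 3 and bits == 32, 'expected 32-bit IEEE float input'
    floats = struct.unpack('<%df' % (len(raw) // 4), raw)
    pcm = b''.join(struct.pack('<h', max(-32768, min(32767, round(x * 32767))))
                   for x in floats)
    block = channels * 2
    # 40-byte extensible fmt chunk: tag 0xFFFE, cbSize 22, then the PCM GUID
    fmt_body = (struct.pack('<HHIIHHH', 0xFFFE, channels, rate,
                            rate * block, block, 16, 22)
                + struct.pack('<HI', 16, (1 << channels) - 1)  # valid bits, mask
                + PCM_GUID)
    payload = (b'fmt ' + struct.pack('<I', len(fmt_body)) + fmt_body
               + b'data' + struct.pack('<I', len(pcm)) + pcm)
    return b'RIFF' + struct.pack('<I', 4 + len(payload)) + b'WAVE' + payload
```

Real files may carry extra chunks (LIST, fact, cue) that a production tool should preserve; this sketch keeps only fmt and data.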

Andy


On Fri, Jun 08, 2012 at 01:22:59PM -0700, Linda Seltzer wrote:
> Does anyone know of a tool that converts a .wav file into wave extensible
> format with the PCM subtype rather than the IEEE floating point subtype?
> 
> Linda Seltzer
> lselt...@alumni.caltech.edu
> 


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-08 Thread Andy Farnell

IIRC William Brent made some good "Timbre Stamp" externals for Pd
based on Paul Brossier's libaubio and other decomposition and feature
extraction tools. Much of that does the things Robert mentions.
There are simple first-order things like pitch and amplitude,
complex aggregate things like roughness/harmonicity and spectral centroid,
and second-order things like spectral flux.

And finally you've got to have some way to extract and recognise
combinations of these, maybe a neural-net-based annealing/training
setup or whatever, or a distance-based classifier.

Having played with a few of these kinds of things, you should
know the problem is hard and results are mixed. 
Nobody has yet shown me a system that you can play an unknown
sound to and have it named as a particular source, like a certain
farm animal or vehicle. You get _classification with a confidence_ 
rather than identification. And you need a big training set. Eventually,
with a big enough effects collection you get a good ability 
to say a new sound is _like_ some others. This can become
a very creative tool for sound design.
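The "classification with a confidence" over such feature vectors can be as simple as distance to class centroids. A toy sketch follows; the softmin confidence, the function names, and the invented (centroid Hz, roughness) features are my own illustration, not anything Andy describes:

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def classify(x, models):
    """Nearest-centroid label plus a soft 'confidence' in (0, 1]."""
    dists = {label: math.dist(x, c) for label, c in models.items()}
    weights = {label: math.exp(-d) for label, d in dists.items()}
    best = min(dists, key=dists.get)
    return best, weights[best] / sum(weights.values())

# Invented training set: (spectral centroid Hz, roughness) for two classes
training = {
    'bell':  [[2100.0, 0.8], [1900.0, 0.7], [2000.0, 0.9]],
    'flute': [[900.0, 0.1], [1100.0, 0.2], [1000.0, 0.15]],
}
models = {label: centroid(vs) for label, vs in training.items()}
label, conf = classify([1950.0, 0.75], models)
```

Note that this only says the new sound is _like_ the bell examples, with some confidence; it never identifies the source, which is exactly the limitation described above.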

cheers,
Andy



On Fri, Jun 08, 2012 at 01:36:22PM -0400, Charles Turner wrote:
> Hi all-
> 
> I was initially hesitant to post to the list as I haven't explored
> this topic very deeply, but after a second thought I said "what the
> hell," so please forgive if my Friday mood is more lazy than
> inquisitive.
> 
> Here's my project: say I have a collective of sound files, all short
> and the same length, say 1 second in length. I want to classify them
> according to timbre via a single characteristic that I can then
> organize along one axis of a visual graph.
> 
> The files have these other properties:
> 
>   . Amplitude envelope. I don't need to classify by time
> characteristic, but samples could have different characteristics,
> ranging from complete silence, to a classical ADSR shape, to
> values pegged at either +-100% or 0% amplitude.
> 
>   . Timbre. Samples could range in timbre from noise to
> (hypothetically) a pure sine wave.
> 
> Any ideas on how to approach this? I've looked at a few papers on the
> subject, and their aims seem somewhat different and more elaborate
> than mine (instrument classification, etc). Also, I've started to play
> around with Emmanuel Jourdan's zsa.descriptors for Max/MSP, mostly
> because of the eazy-peazy environment. But what other technology
> (irrespective of language) might I profit from looking at?
> 
> Thanks so much for any interest!
> 
> Best,
> 
> Charles


Re: [music-dsp] ANN: Book: The Art of VA Filter Design

2012-05-26 Thread Andy Farnell

Thanks for this Vadim, it looks like an amazing compendium
and well structured for students to learn from.

On Fri, May 25, 2012 at 10:54:25AM +0200, Vadim Zavalishin wrote:
> Hi all
> 
> This is kind of a cross-announcement from KVRAudio, but since there
> are probably a number of different people on this list, I thought
> I'd announce it here as well. Get it here:
> 
> http://ay-kedi.narod2.ru/VAFilterDesign.pdf
> http://images-l3.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign.pdf
> http://www.discodsp.net/VAFilterDesign.pdf (thanks to "george" for
> mirroring)
> 
> There is a discussion thread at
> http://www.kvraudio.com/forum/viewtopic.php?t=350246
> 
> Regards,
> Vadim
> 
> -- 
> Vadim Zavalishin
> Software Integration Architect | R&D
> 
> Tel +49-30-611035-0
> Fax +49-30-611035-2600
> 
> NATIVE INSTRUMENTS GmbH
> Schlesische Str. 29-30
> 10997 Berlin, Germany
> http://www.native-instruments.com
> 
> Registergericht: Amtsgericht Charlottenburg
> Registernummer: HRB 72458
> UST.-ID.-Nr. DE 20 374 7747
> 
> Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic


Re: [music-dsp] maintaining musicdsp.org

2012-04-11 Thread Andy Farnell
Some great points of advice for any site there!
Would jump in but my PHP skills are rotten.
Possibly a job for a web student over summer? 


On Wed, Apr 11, 2012 at 01:50:55PM -0600, Roberta wrote:
> I'm a little late to the party but


Re: [music-dsp] a little about myself

2012-02-28 Thread Andy Farnell
On Tue, Feb 28, 2012 at 03:16:14PM -0500, Adam Puckett wrote:
> Andy,
> 
> there's an opcode called active that does what you want 'numalloc' to do:
> 
> http://csounds.com/manual/html/active.html


Thanks Adam! That's a big help to some of my projects, I'll
take a look at it.

cheers,
Andy


Re: [music-dsp] very cheap synthesis techniques

2012-02-28 Thread Andy Farnell
On Tue, Feb 28, 2012 at 12:00:54PM -0800, Nigel Redmon wrote:

> wiring CMOS buffer outputs to their own inputs (perhaps through a resistor

Using inverter gates is an old trick for making very cheap
oscillators. With some cells arranged to give you 6 V and a
bit of wire attached to the output pin, it will immediately
start running at a few hundred megahertz, and it works as a
short-lived tracking bug: stick it to anything you want to be
a beacon, with loads of nasty harmonics showing up all over the dial.

Adding a capacitor and a resistor makes it a more controllable
relaxation oscillator. Varying the supply voltage on CMOS
gates changes the switching threshold so you get a super
cheap VCO.

http://www.discovercircuits.com/DJ-Circuits/4584vco.htm
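The timing of such a relaxation oscillator follows directly from the RC exponentials between the Schmitt thresholds. A sketch, with every component value and threshold invented purely for illustration, simulates the charge/discharge cycle and checks the period against the closed form T = RC * (ln((VDD-VTL)/(VDD-VTH)) + ln(VTH/VTL)):

```python
import math

# All values invented for illustration: supply, Schmitt thresholds, R and C
VDD, VTL, VTH = 6.0, 2.0, 4.0      # volts
R, C = 10e3, 10e-9                 # 10 kohm, 10 nF
RC = R * C
dt = 1e-8                          # simulation step, much smaller than RC

v, out_high = 0.0, True            # capacitor voltage, inverter output state
flips, t = [], 0.0
while len(flips) < 6:              # collect a few threshold crossings
    target = VDD if out_high else 0.0
    v += (target - v) / RC * dt    # forward-Euler RC charge/discharge
    if out_high and v >= VTH:      # output flips low, cap starts discharging
        out_high = False
        flips.append(t)
    elif not out_high and v <= VTL:  # output flips high, cap recharges
        out_high = True
        flips.append(t)
    t += dt

period = flips[-1] - flips[-3]     # two flips = one full oscillation
analytic = RC * (math.log((VDD - VTL) / (VDD - VTH)) + math.log(VTH / VTL))
```

Varying VDD shifts the effective thresholds, which is exactly why the supply-voltage trick gives a cheap VCO.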




Re: [music-dsp] a little about myself

2012-02-28 Thread Andy Farnell



Thanks John,

These are all very useful. Some of them, the last three, I have
used before to create timeouts, The others I should pay more
attention to.

Apparently not covered by these: 

I would like a mechanism, not sure whether this would be an opcode or
a general (global) language feature, that would keep track 
of how many instances of an instrument are allocated, so if one were
to call that numalloc


kcelloVoicesPlaying numalloc cello1

would give me an integer in kcelloVoicesPlaying

This is necessary when you are doing dynamic runtime
composition using scoreline and suchlike.

best

Andy


On Tue, Feb 28, 2012 at 01:21:05PM -, j...@cs.bath.ac.uk wrote:
> 
> >
> > For the beginner and experienced user alike, a rough spot in Csound is
> > resource management. Perhaps because Csound comes from an offline
> > lineage and was not conceived as a real-time system it's very easy
> > to create monstrous things. Things that unexpectedly use all the CPU
> > and drop out. Things that unexpectedly eat memory very fast.
> >
> > Of course, to paraphrase Stroustrup,  you can shoot yourself in the foot
> > with all audio DSP languages, but with Csound you absolutely
> > positively kill every last mo___ker in the room.
> >
> > Better polyphony and CPU monitoring/management, with some kind resource
> > limited subprocess concept would make it much safer.
> >
> 
> cpumeter — Reports the usage of cpu either total or per core.
> 
> maxalloc — Limits the number of allocations of an instrument.
> 
> prealloc — Creates space for instruments but does not run them.
> 
> clockon — Starts one of a number of internal clocks.
> clockoff — Stops one of a number of internal clocks.
> readclock — Reads the value of an internal clock.
> 
> If you have clear suggestions as what you want/need let us know and we
> will investigate -- just like all suggestions.
> 
> 
> 


Re: [music-dsp] a little about myself

2012-02-28 Thread Andy Farnell
On Tue, Feb 28, 2012 at 11:04:45AM +, Richard Dobson wrote:

> So, one way and another, "Computer music" is so laden with
> definitions and qualifications as to have lost all definition -
> using it gives the listener no real information.

And yet, if I were to say to you about such and such a piece
of music "Oh, it's computer music", you would know exactly what
I mean, and what to expect.

It has a certain cultural centroid, a locus amongst groups
of people, historical events, practices, icons. There are
certain expectations of form, performance, milieu.

You will sit in a room with a high quality sound system of
no less than four loudspeakers, many of the audience will sport
impressive beards, and you will nod your head through excruciating
expanses of silence and high frequency buzzing. At times you
will feel quite dizzy. And mistake your own tinnitus for the finale.
At the end there will be a presentation of some quite opaque equations,
and afterwards cheese, wine and cordial conversation as we
roam Paris or Vienna devouring haute cuisine paid for by some 
impossibly selective European grant money, and all reminisce over 
VCS3s and our first home-built oscillators using germanium transistors 
and bitch about how crap everything digital really is.

That's computer music.



Re: [music-dsp] a little about myself

2012-02-28 Thread Andy Farnell
> On 28/02/2012 00:43, Michael Gogins wrote:
> ..
> >
> >What I would dearly love to hear in this discussion is how Csound can
> >be improved to facilitate the creation of music, from people who do
> >use software to compose and create their music. Or how some other
> >software might be better, for that matter.


For the beginner and experienced user alike, a rough spot in Csound is
resource management. Perhaps because Csound comes from an offline
lineage and was not conceived as a real-time system, it's very easy
to create monstrous things. Things that unexpectedly use all the CPU
and drop out. Things that unexpectedly eat memory very fast.

Of course, to paraphrase Stroustrup,  you can shoot yourself in the foot
with all audio DSP languages, but with Csound you absolutely
positively kill every last mo___ker in the room.

Better polyphony and CPU monitoring/management, with some kind resource
limited subprocess concept would make it much safer.

Just 2c




[music-dsp] Scheduling (rate vs event) was: a little about myself

2012-02-27 Thread Andy Farnell
Hi Ross

> Hi Andy,
> Some comments, and questions for clarification...

> > There is_always_  an audio signal but there are sometimes no control 
> > messages
> >
> > Control messages are computed on a block boundary

> Given this formulation, that the control message scheduler is only 
> pumped on the block boundary, isn't this equivalent to having a control 
> rate available? 

It may be the case that there are no control messages queued.
What I hoped to highlight here is that there isn't necessarily
a control cycle on which something _must/always_ happen.
On the next cycle, a whole burst of stuff may happen that got
queued during the previous 1/44100th of a second.

> Presumably there is a way to get a bang (evaluation) 
> every block. 

Yes, there is actually a [bang~] object, which I think
outputs a bang message immediately after the previous audio
block has been completed. I rarely use it myself.


> Also note that at least in max, the scheduler doesn't always run 
> block-synchronously with the audio thread. I think there are different 
> scheduling modes. See "Scheduler in Audio Interrupt (SAI)" here for example:
> http://cycling74.com/2004/09/09/event-priority-in-max-scheduler-vs-queue/

Very interesting. I ought to reread the Pd source more closely before
attempting to comment further; I suspect this is a subtle but significant
difference. AFAIK there is only one scheduling mode in Pd, and it is closer
to the referenced SAI, but with a queue of time-tagged control messages.

> I'm not sure I understand what you mean by "Ks to intervene_anywhere_ within 
> the stream" -- presumably (in the model you described) they only intervene at 
> block boundaries? 

Exactly that.

> Right, so for example, you don't implicitly have a "control sample rate" 
> for doing control rate filtering etc.

Timing can be very accurately set (sub-millisecond now, I think) with a [metro],
and all calculations work for message filtering. But yes, strictly you are
right: the notion of a message "samplerate" is meaningless in Pd.


PS I am enjoying the recent threads immensely. Having used Eventides in the
studio for countless hundreds of hours, it's amazing to read Robert's
revelation and know what was behind all those knob twiddles. And the little
audio VM that David Olofson posted the other day is still amusing me.

best
Andy


Re: [music-dsp] a little about myself

2012-02-26 Thread Andy Farnell
On Sun, Feb 26, 2012 at 04:01:52PM +, Richard Dobson wrote:

 
> For my sonification work for LHCsound, I used Perl to parse data
> files and generate Csound scores, simply because it is a task Perl
> is canonically optimised to do and scripts can be run up very
> quickly. 

Just a quick +1 for Perl as an event generator in cooperation
with Csound, especially if the project involves any kind of
network processing.

Andy




Re: [music-dsp] a little about myself

2012-02-26 Thread Andy Farnell

Of course Brad, and maybe by my trying to formalise it myself some people
will jump in with helpful corrections, because it _is_ a bit tricky to
understand; it took me a long time and maybe I still have some
misconceptions. Anyhoo, I think it's very helpful to understand the
differences in these designs and, by implication, other ways of considering
music signal programming.


IIRC, most of the Music-N line of systems are multi-rate. That means we have a 
fast computation rate, on which audio signals are calculated, and a slower
rate (obviously some integer factor of the audio rate), usually called
the control rate, at which things like slow moving envelopes, MIDI inputs
and suchlike are calculated.

Of course the slow rates can be interpolated, and the fast rates decimated
to be compatible, to smooth envelopes, or extract signal features into
the control rate, but the facts remain that:

Two rates always exist

On each step some audio signal samples (As) are calculated

Every ksamps step (the ratio of audio to control steps) some control
signals Ks are calculated

They are effectively interleaved, not threaded, so all Ks must complete
before the next burst of As can, and vice versa...



Contrast this with Dataflow (* general dataflow has a wider more formal 
interpretation so I am really talking about Miller Puckette's music DSP 
environments here)

In dataflow there is no control rate or signal rate. It is better to think of 
them
as "control domain" and "signal domain". They are based on a _pull_ (or demand)
and _push_ (or availability) driven idea. So the calls to pull audio blocks 
come 
synchronously from the soundcard (interrupt + callback model), and calls are 
passed 
further up the chain of audio type functions until they reach a terminal node
that is either an oscillator or constant source or somesuch. Meanwhile control
data is effectively "pushed" into the system and causes a chain of messages
to propagate (the exact evaluation is more complex, but this illustrates
the central idea). The essential facts of this dataflow might be

There is _always_ an audio signal but there are sometimes no control messages

Control messages are computed on a block boundary

The audio signal is hard real time, so failure of the control to complete
before the next interrupt will result in a drop out.

  --


Okay, given these two approaches we can see that setting kr = ar,
alternatively expressed as (ksamps = 1), in the former system seems to
be the same as setting the blocksize to 1 in the latter.

BUT:

In Csound we will still get an interleaved series

[As_0, Ks_0, As_1, Ks_1, As_2, Ks_2 ... As_n, Ks_n]

whereas in Miller's dataflow we have As, the audio stream running
constantly, with the possibility for Ks to intervene _anywhere_
within the stream so long as they can complete before the next
As is demanded

[As_0, As_1, As_2, As_3 ...As_20, As_21, Ks_0, As_22 ...As_100, Ks_1, Ks_2, 
As_101]
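The two interleavings above can be mimicked in a few lines of Python; a toy illustration only (the function names and the queued event times are arbitrary inventions):

```python
def csound_schedule(n_audio, ksamps=1):
    """Multi-rate: a Ks step is forced after every ksamps audio samples."""
    seq = []
    for i in range(n_audio):
        seq.append('As_%d' % i)
        if (i + 1) % ksamps == 0:
            seq.append('Ks_%d' % (i // ksamps))
    return seq

def dataflow_schedule(n_audio, queued):
    """Pd-style: queued maps an audio sample index to the number of control
    messages that fire right after that sample; otherwise audio runs
    uninterrupted."""
    seq, k = [], 0
    for i in range(n_audio):
        seq.append('As_%d' % i)
        for _ in range(queued.get(i, 0)):
            seq.append('Ks_%d' % k)
            k += 1
    return seq
```

With ksamps = 1 the Csound sequence alternates strictly, while the dataflow sequence only inserts Ks where messages actually exist, which is the conceptual difference being drawn here.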

Obviously this requires one to think about difficult problems where
control and signals are closely coupled in subtly different ways for
each language.

best,
Andy




On Sun, Feb 26, 2012 at 09:47:45AM -0500, Brad Garton wrote:
> On Feb 26, 2012, at 9:38 AM, Andy Farnell wrote:
> 
> > For a long time, there has been the equivalent trick to that which Michael 
> > mentions for Csound, setting ksamps = 1, only in Pd/Max you set blocksize 
> > =1.
> > While these seem superficially similar, and both have the same outcome that
> > you can do control calculations on a per sample basis, they are conceptually
> > different, in the same sense that multi-rate and "pull" dataflow differ.
> 
> Andy -- could you unpack this a little for me?  Are you referring to the 
> independence
> between the 'control rate' and the underlying blocksize that you get in 
> Csound (and
> others)?  if so, what are the some of the "real" effects that this might have?
> 
> I'm not being pejorative with these questions, I honestly don't understand 
> what you mean.
> 
> brad
> http://music.columbia.edu/~brad
> 


Re: [music-dsp] guitar physical model

2012-02-26 Thread Andy Farnell

Me too. Some great Steve Hillage-like moments in that.

On Sun, Feb 26, 2012 at 10:52:12AM +0100, Emanuel Landeholm wrote:
> >        http://music.columbia.edu/~brad/music/mp3/Rough_Raga_Riffs.mp3
> 
> This. I just listened to it and it put me in a good mood!


Re: [music-dsp] a little about myself

2012-02-26 Thread Andy Farnell
On Sun, Feb 26, 2012 at 09:11:10AM -0500, Brad Garton wrote:
> On Feb 26, 2012, at 5:13 AM, Richard Dobson wrote:
> 
> > It is rather more flexible than Max/MSP, say, because you can if you want
> > to run a single-sample vector, whereas MSP has always been fixed to a 
> > 64-sample block size.
> 
> I don't think that's actually true...
> 
> We're fooling around with the new Max/MSP gen~ stuff in class, it seems an 
> interesting alternative model for low-level DSP coding.  Once they figure out 
> how to do proper conditionals it will be really powerful.
> 
> brad


For a long time, there has been the equivalent trick to that which Michael 
mentions for Csound, setting ksamps = 1, only in Pd/Max you set blocksize =1.
While these seem superficially similar, and both have the same outcome that
you can do control calculations on a per sample basis, they are conceptually
different, in the same sense that multi-rate and "pull" dataflow differ.

cheers,
Andy




Re: [music-dsp] Boulez

2012-02-25 Thread Andy Farnell
Hi Charles,

> The book collects his Darmstadt lectures from 1954-56

Yes, that era fits better with the ideas conveyed. Thanks
for clearing that up.

> Isn't the point not to take sides, but recognize the tension? 

Definitely. You are absolutely right. That was where I was headed
with my remarks.


> Could very well be that the callback is the result of a cultural 
> outlook, and not the result of engineering design…

Embellishing my reply to Adam, it's not so much engineering design
as engineering circumstance. We find ourselves writing code on
general purpose desktop computers with multi-tasking operating
systems and a quite unpredictable kernel schedule. Starting
from a certain necessity the rest of the design seems to follow
quite naturally. 

cheers,
Andy


Re: [music-dsp] Boulez

2012-02-25 Thread Andy Farnell


Hopefully I am not misunderstanding here. I think the big influence
on the low-level design of music software, viz callbacks &c, is that
the modern desktop and operating system is such a fiendishly complicated
system that callback (interrupt-led) buffering is essential to mediate the
producer-consumer problem in this unpredictable runtime.

With simpler hardware, and total control over it (as for FPGA),
radically different things are possible. I understand why some 
here like that approach and prefer the idea of a synth or 
processor being a dedicated _device_.

best
Andy



On Sat, Feb 25, 2012 at 09:40:59AM -0500, Adam Puckett wrote:
> Would it be possible to design a callback that dynamically filled the
> buffer as it was being called, or if the buffer didn't exist, create
> it and put one sample in it? that way there wouldn't be any "dropped
> calls" in the process. Or am I missing something?
> 
> On 2/25/12, Charles Turner  wrote:
> > On Feb 25, 2012, at 6:34 AM, Andy Farnell wrote:
> >
> >> And whereas I do agree with Pierre Boulez here, maybe it
> >> is misguided to turn to reductionism and simplicity for
> >> their own sake. It may be equally hopeless to embark
> >> on a quest for authenticity this way.
> >
> > Hi Andy-
> >
> > I should apologize for hastily listing the publication date of the book. The
> > book collects his Darmstadt lectures from 1954-56, so it comes from a much
> > earlier time. I don't think Boulez would have changed his mind on things
> > though. Sounds like you come from a much more "Schaefferian" era.
> >
> > Isn't the point not to take sides, but recognize the tension? Cultures that
> > are busily exploring harmonic relations, haven't simultaneously plunged deep
> > into the world of rhythm. Music is just too big a subject, and some of its
> > properties exist in a dialectical relation to others. Although we all enjoy
> > a sweet dessert, we don't put sugar in everything. (Unless you're the Nestle
> > company!)
> >
> > My point was that the checkpoint raised by callbacks feeding a sample buffer
> > may come from resistances outside the technical world. Boulez sees timbre as
> > the enemy of harmony. Could very well be that the callback is the result of
> > a cultural outlook, and not the result of engineering design…
> >
> > Best, Charles
> >

Re: [music-dsp] google's non-sine

2012-02-25 Thread Andy Farnell
On Sat, Feb 25, 2012 at 10:58:52AM +, Richard Dobson wrote:
> On 25/02/2012 09:40, Andy Farnell wrote:
> ..

> And the harsh truth is that no competing search engine will succeeed
> unless its name works well as a verb.

Astute. Maybe the duck is doomed already, unless it becomes normative
that doing some research is "ducking the question". Yes, degooglize
is much better, particularly with the American z.
best
Andy


Re: [music-dsp] a little about myself

2012-02-25 Thread Andy Farnell
On Fri, Feb 24, 2012 at 01:21:05PM -0500, Charles Turner wrote:
> On Feb 24, 2012, at 1:05 PM, Andy Farnell wrote:
> 
> > Some really interesting thoughts here Ross. At what level of
> > granularity does the trade-off of control, flexibility and
> > efficiency reach its sweet spot?
> 
> “I understand that the dialectic of composition better contents itself 
> with neutral objects, not easily identifiable ones, like pure tones or 
> simple tone aggregates, having no inner profile of dynamics, duration 
> or timbre. As soon as one shapes elaborated figures, assembles them 
> into ‘formed’ complexes, and uses them as first-order objects for 
> composition, one is not to forget [...] that they have lost all 
> neutrality and acquired a personality, an individuality which makes 
> them quite unfit for a generalized dialectics of sonic relations.”
> 
> Pierre Boulez: p.45, _Penser la musique aujourd’hui_, (1994)
> 


For me it's strange to see that written in 1994, close to the time 
I was behind the lines of British pop culture, where the precise 
opposite was true. Superstar DJs, music media moguls and influential 
producers romped and rollicked in an entirely sample based, second 
order culture. The symbols and currency of composition were drum 
loops, pre-made chord sounds or "stabs and hits", a cappella vocal 
hooks. 

Of course that had a profound influence on my own music making,
my approach and understanding of what composition is. Pop is
precisely about rearranging second order symbols.

But as a computer guy, I also noticed how culture influenced
the software, how marketing and the values of those deemed
successful manipulated the tools, and the tools in turn manipulated
the possibilities of new music makers. To this day I love to
show my students this funny ad as a warning...

http://obiwannabe.co.uk/temp/software.jpg

And while I do agree with Pierre Boulez here, maybe it
is misguided to turn to reductionism and simplicity for
their own sake. It may be equally hopeless to embark
on a quest for authenticity this way. 

Composition is just very hard work, time consuming and 
needs diverse human capacities like poetic skills. It's 
fundamentally at odds with the values of "making things easy". 
The danger then, in a world where products dominate principles, 
is to fall into the trap of deliberately making things difficult
for yourself.

To quote a cohort, Chris McCormick:

"Making boring techno music is really easy with modern tools, but 
with live coding, boring techno is much harder."

http://news.bbc.co.uk/1/hi/technology/8244003.stm

The actual sweet spot is surely different for each person.
But you certainly will not find it if you either start with a
very strong idea of how music must be made, or are constrained
by tools that impose other people's strong ideas upon you.

best,
Andy

Re: [music-dsp] google's non-sine

2012-02-25 Thread Andy Farnell

When the lovely people at MIT added some extra cool graphics to my book cover
I was initially dismayed to see the usual "funky oscilloscope trace" with a 
blue tint, looking like an electric spark. But everyone I showed it to, my 
friends and family all thought it was amazing and futuristic! So I quickly 
got over my pedantry and embraced a new found ability to create signals that
go backwards in time as well as forwards. Graphic designers create their 
worlds, we create ours.

On the subject of creating worlds, I've missed this conversation entirely
because of a courageous attempt to degooglify my life and get out of the 
scrutinised bubble. Since switching to the Duck search engine I discovered
a whole internet out there. Maybe it takes longer to find the exact thing, but if 
you're that desperate to save half a second here and there your life is in 
trouble anyway, and the plus side is rediscovering the colourful, trashy 
landscape not interpreted through an individual bourgeois materialist lens. 
Turns out DSP stands for many different things, so by making more focussed 
searches and pausing for a moment to think instead of lean on the mental 
crutch, things actually work out better. On the weird side, LaTeX is more 
than a document processor. There's even a perfume called Philosophy.


On Fri, Feb 24, 2012 at 09:06:59PM +, Tom Wiltshire wrote:
> I agree as well. Why should it have to be a sine wave? Hertz didn't invent
> the sine wave! A square wave has 'frequency' just as much as a sine does, 
> and presumably 'frequency' was the point of the googledoodle. Put the odd 
> harmonics in and get a circular waveform, it's fine by me.
> 
> The amplitude and frequency modulation is a bit weird though!
> 
> T.


Re: [music-dsp] a little about myself

2012-02-24 Thread Andy Farnell

> The problem with "plug unit generators languages" for me is that they
> privilege the process (network of unit generators) over the content

Some really interesting thoughts here Ross. At what level of
granularity does the trade-off of control, flexibility and
efficiency reach its sweet spot?

In some ways the unit generator or patchable code-block
model is to be considered a compromise between the overhead
of calling functions on single samples and being able
to process chunks. It comes bottom up, out of implementation needs
rather than being a top down shorthand. On the other hand,
because familiar structures like the filter, oscillator and so forth
are natural basic units of design, the VM + Ugen model makes
a lot of sense to practitioners coming from the studio.

Plenty of analogous structures in general computer science
have similar rationales, like pipelines, SIMD, with the 
question being at what level of granularity can you lump a 
bunch of stuff together and process it all without sacrificing 
flexibility? Even apparently atomic instructions are, from the
microprocessor's point of view, collections of more atomic 
register operations that we never consider unless programming 
in machine code. 
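The granularity trade-off above can be made concrete with a toy sketch (purely illustrative; the names and block size are made up, not from any real ugen VM): each unit generator is called once per block, so dispatch overhead is amortised over 64 samples instead of being paid on every one.

```python
import math

BLOCK = 64  # samples processed per unit-generator call

def osc_block(phase, freq, sr):
    """One block of a sine oscillator; returns (samples, next_phase)."""
    out = [math.sin(2 * math.pi * (phase + n * freq / sr)) for n in range(BLOCK)]
    return out, (phase + BLOCK * freq / sr) % 1.0

def gain_block(block, g):
    """One block of a gain 'ugen'."""
    return [g * s for s in block]

# A tiny two-ugen "patch": oscillator into a gain, run for four blocks.
# Eight function calls produce 256 samples; per-sample calls would need 512.
phase, out = 0.0, []
for _ in range(4):
    blk, phase = osc_block(phase, 440.0, 44100)
    out.extend(gain_block(blk, 0.5))
```

The block boundary is where per-call control (re-patching, parameter changes) can happen, which is exactly what sample-accurate structures like feedback loops break.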

> Anything else is just plugging unit generators together, which is
> limiting in many situations (one reason I abandoned these kind of
> environments and started writing my algorithms in C++).

As linguists and writers note (Wittgenstein, Orwell, Ayer, Chomsky etc)
language defines the modes of thought and facilitates or limits what
we can do more or less easily. I guess plenty of studies have been
done of the "expressibility" of computer languages, since they are
strictly formal and amenable to analysis. Though we tend to invoke
"Turing completeness" and assume all roads lead to Rome, clearly some
languages are much better for certain things than others. 

Grist for the mill in computing philosophy, but as musicians or
sound designers it takes on a freshness. For example, the ease with
which polyphony can be conceived in Supercollider and Chuck is 
amazing compared to Pure Data/Max, which makes it an awkward hack 
at the best of times. Csound is somewhere in between. And of course, 
though Csound is clearly conceived as a _composers'_ language where 
large scale structures are easy to build, abstraction is very obtuse.

I remember Gunter Geiger's thesis being a good comparative
treatment of different computer music languages, but that was
mainly from a computational rather than expressibility angle.
Maybe there's a good doctoral project for someone lurking in this
question. 


> Programming in C++ makes the signal efficiently accessible.

Getting down to the metal with C/C++ is more than just a departure from
the VM plus UGen model: it allows, as you say, complete reconfiguration
of the signal processing structure on a sample by sample basis, and
departures from strictly causal models using look ahead computation
etcetera. But at the same time as it lays the signal bare, it
seems to bury the larger process (unless you are an extremely methodical
hacker and already working with quite robust and well used libraries).

Is there a fundamental trade-off here that we just cannot get around?


best
Andy




On Fri, Feb 24, 2012 at 07:25:29PM +1100, Ross Bencina wrote:
> Hi Brad,
> 
> On 24/02/2012 3:01 PM, Brad Garton wrote:
> >Joining this conversation a little late, but what the heck...
> 
> Me too...
> 
> >On Feb 22, 2012, at 9:18 AM, Michael Gogins wrote:
> >
> >>I got my start in computer music in 1986 or 1987 at the woof group at
> >>Columbia University using cmix on a Sun workstation.
> >
> >Michael was a stalwart back in those wild Ancient Days!
> >
> >>cmix has never
> >>had a runtime synthesis language; even now instrument code has to be
> >>written in C++.
> >
> >One possible misconception -- by "runtime synthesis language" I'm sure 
> >Michael
> >means a design language for instantiating synthesis/DSP algorithms *in real 
> >time*
> >as the language/synth-engine is running.  I tend to think of languages like 
> >ChucK
> >or Supercollider more in that sense than Csound, and even SC differentiates 
> >between
> >the language and then sending the synth-code to the server.
> 
> My reading would be that Michael may be implying that there is a
> difference between interpretation and compilation.
> 
> CSound does not have a runtime synthesis language either. It's a
> compiler with a VM. There is no way to re-write the code while it's
> running.
> 
> SC3 is very limited in this regard too (you can restructure the
> synth graph but there's no way to edit a synthdef except by
> replacing it, and there's no language code running sample
> synchronously in the server). So you have a kind of runtime
> compilation model.
> 
> I didn't get much of a chance to play with SC1 but my understanding
> is that you could actually process samples in the synthesis loop
> (like you can with cmix). To me this is real runtime syn

Re: [music-dsp] a little about myself

2012-02-22 Thread Andy Farnell
On Wed, Feb 22, 2012 at 09:18:11AM -0500, Michael Gogins wrote:

> I am writing an article about composing in C++ with the Csound API and
> CsoundAC, and I will try to get it published in the Csound Journal or
> elsewhere.

Definitely looking forward to that Michael.


Re: [music-dsp] a little about myself

2012-02-22 Thread Andy Farnell
Speed of development is an issue, as turning ideas into sound uses 
considerable human cognition, echoic memory to listen, serial and 
linguistic faculties to interpret, and then geometric, mathematical 
and procedural acrobatics to adapt the internal model. C gives great
flexibility and control, but requires such intense working that an 
idea is often lost before it can be implemented. Compactness is thus 
a desirable quality for music making (with large structures) rather 
than sound programming. For seeing programmers, visual signal flows 
like Pure Data are powerful because the algorithm is maintained as a 
compact diagram. I can understand why Csound is attractive to someone 
without sight, and I wonder if you have also explored Supercollider, 
Chuck and Nyquist, which all represent quite different language 
interfaces to sound making. But as you are a programmer I also wonder, 
if for you Adam, Faust might be something important to explore. For 
one who interprets the world more in symbolic structures, its compactness 
might be something you find very useful. 




On Wed, Feb 22, 2012 at 08:44:56AM -0500, Adam Puckett wrote:
> It's nice to see some familiar names in Csound's defense.
> 
> Here's something I've considered since learning C: has anyone
> (attempted to) compose music in straight C (or C++) just using the
> audio APIs? I think that would be quite a challenge. I can see quite a
> bit more algorithmic potential there than probably any of the DSLs
> written in it.
> 
> On 2/21/12, Michael Gogins  wrote:
> > It's very easy to use Csound to solve idle mind puzzles! I think many
> > of us, certainly myself, find ourselves becoming distracted by the
> > technical work involved in making computer music, as opposed to the
> > superficially easier but in reality far more difficult work of
> > composing.
> >
> > Regards,
> > Mike
> >
> > On Tue, Feb 21, 2012 at 7:53 PM, Emanuel Landeholm
> >  wrote:
> >> Well. I need to start using csound. To actually do things in the real
> >> world instead of just solving idle mind puzzles.
> >>
> >> On Tue, Feb 21, 2012 at 10:02 PM, Victor  wrote:
> >>> i have been running csound in realtime since about 1998, which makes it
> >>> what? about fourteen years, however i remember seeing code for RT audio
> >>> in the version i picked up from cecelia.media.mit.edu back in 94. So,
> >>> strictly this capability has been there for the best part of twenty
> >>> years.
> >>>
> >
> >
> >
> > --
> > Michael Gogins
> > Irreducible Productions
> > http://www.michael-gogins.com
> > Michael dot Gogins at gmail dot com


Re: [music-dsp] PhD thesis on musical instrument sound morphing

2011-12-15 Thread Andy Farnell



On Thu, 15 Dec 2011 13:03:06 +0100
"Dr. Gunnar Eisenberg"  wrote:

> In recording studios you often hear requests like "turn this sound 
> more into an oboe" or "turn that sound's loudness evolution more 
> into a piano-like evolution" especially from musicians.

Exactly. This was the inspiration of my early research in the 1990s, 
a study of this language and action relating synthesis to human 
interaction (HCI), and leading to a model of interpolating high 
dimensional timbre spaces from self organising maps. 
Sound morphing is still a most interesting subject in my opinion, 
and open to myriad implementations and interpretations, both signal 
and perceptual. Keep up the excellent work Marcelo. I haven't had time 
to read your thesis, I have a stack of 20 that I *must* read, which 
rather takes the shine off diving into yours :) For now.

Theo; as computer/DSP guys (scientists) and musical composers 
(artists) it seems the church of computational sound has a wider 
roof than most. Somewhere in this stack of work I have here, 
patches, applications, electro-acoustic pieces, experiments, scores,
each personal journey unique... there will be at least two or three 
that bore me shitless. The reason I would never say so to a student 
is not a question of candour (students claim to relish my honesty), 
but rather it's a feeling of embarrassment that it reveals something
lacking in me where I fail to find the same passion as that student. 
I know that you share a passion for high standards and rigour. 
A high appreciation of value differences would only make that
sweeter. 


-- 
Andy Farnell 


Re: [music-dsp] Recording very long sound files

2011-10-29 Thread Andy Farnell
On Sat, 29 Oct 2011 09:16:24 +0300 (EEST)
Sampo Syreeni  wrote:




> Quite naturally even NTPv5 falls short of that. Which is why I referred 
> to GPS time+NTP, as your time reference. In a production environment a 
> GPS-synched master clock plus IEEE's version 2 of "Standard for a 
> Precision Clock Synchronization Protocol for Networked Measurement and 
> Control Systems" (IEEE 1588-2008) might finally get you into the realm 
> of substantially sub-frame clock synchro over at least an 
> extended LAN. ;)


Very interesting, gives me lots of questions.

I understand that the GPS is used to pull in the NTP server, and that
is essentially a "private" NTP server, but it can be combined with
other NTP servers to get an ever better UTC (regardless of local
crystal clock accuracy, given a long enough window). I've had to
play around with these ideas for broadcast sync and booking systems
before.

But what exactly _is_ GPS time? How does that work? What is
its relation to the Greenwich clock? Do we still take
Greenwich as UTC = 0.00, or is UTC like a "democratic" globally
agreed time?

best
andy






> -- 
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


-- 
Andy Farnell 


Re: [music-dsp] Recording very long sound files

2011-10-28 Thread Andy Farnell

Don't worry Sampo, borrow your neighbour's cesium clock.
If you need an atomic clock (GPS gives you geometric time, 
which may be a better reference to "the time" 

http://www.youtube.com/watch?v=5M5eG-aywZQ

but not necessarily such an accurate clock) try the NPL/MSF,
which you should pick up fine in northern Europe.

http://en.wikipedia.org/wiki/Time_from_NPL
http://www.creative-science.org.uk/MSF4.html
http://www.worldtimesolutions.com/products/msf_radio_time_receiver.html





On Sat, 29 Oct 2011 07:43:38 +0300 (EEST)
Sampo Syreeni  wrote:

> On 2011-09-14, Richard Dobson wrote:
> 
> >> Our problem is that the number of samples differs significantly 
> >> between the recordings collected on different days (the largest 
> >> difference is up to 5 minutes!).
> 
> Thus, your clock experiences drift. The cheapest way to get proper clock 
> synch is to buy a GPS receiver add-on to your primary computer, and then 
> put up an NTP daemon which synchs with that. Well-done, that can buy you 
> driftless time in the micro-second range with a modified *nix-kernel, 
> and even under unmodified Windows XP and so on, at the half a 
> millisecond range.
> 
> (Apparently cesium references are classified as particle accelerators, 
> around here. I'm not allowed to possess one, as an individual. ;)
> -- 
> Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
> +358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2


-- 
Andy Farnell 


Re: [music-dsp] Multichannel outputs (Audacity limitation)

2011-09-17 Thread Andy Farnell

Sorry, I didn't read carefully the Windows
requirement.

On Fri, 16 Sep 2011 22:20:14 +0100
Andy Farnell  wrote:

> Ardour
> 
> http://ardour.org/
> 
> a.
> 
> 
> 
> On Fri, 16 Sep 2011 09:26:21 -0500
> Al Clark  wrote:
> 
> > We are working on a USB interface that supports multiple 
> > output channels in a Windows machine.
> > 
> > It appears that Audacity can only output 2 channels. Does 
> > anyone have a recommendation for something similar that 
> > supports multiple output channels?
> > 
> > Thanks
> > 
> > Al Clark
> 
> 
> -- 
> Andy Farnell 


-- 
Andy Farnell 


Re: [music-dsp] Multichannel outputs (Audacity limitation)

2011-09-16 Thread Andy Farnell
Ardour

http://ardour.org/

a.



On Fri, 16 Sep 2011 09:26:21 -0500
Al Clark  wrote:

> We are working on a USB interface that supports multiple 
> output channels in a Windows machine.
> 
> It appears that Audacity can only output 2 channels. Does 
> anyone have a recommendation for something similar that 
> supports multiple output channels?
> 
> Thanks
> 
> Al Clark


-- 
Andy Farnell 


Re: [music-dsp] FM Synthesis

2011-09-15 Thread Andy Farnell
On Thu, 15 Sep 2011 16:05:52 +0100
Gwenhwyfaer  wrote:

> A few points:
> 
> On 12/09/2011, Andy Farnell  wrote:
> >
> > If you are heading towards DX7 style FM then notice
> > that only two of the oscillators (2 and 6) can have
> > feedback, and that this is self feedback.
> 
> Not so - for example, in algorithm 4, operator 4 feeds back to op 6.

So it does. And now that I have the whole list here, so does alg 6, which
feeds back both 5 and 6. I was using a tattered old copy of Bristow and 
Chowning here, and just noticed they don't actually have all the 
algorithms in here. In fact, they use only a small subset for the 
whole text. It seems algs 1 and 2 are used for many of the examples.

a.

-- 
Andy Farnell 


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Andy Farnell

The old presets discussion... wow! :)

We've been travelling this curve since before the Boss CompuRhythm
with its sparkling choice of four different kinds of Rock (as
well as two Disco and Waltz!).

There are economic and cultural reasons for "preset mentality". 

Pretty sure the idea that synths are getting "less programmable 
and creative" has been explored and challenged before.

It might seem to be increasing, but I reckon a good study 
would show that because the music technology market has 
expanded massively, so has the number of "accessible" products.
The proportion of users wanting low-level programmability has
likely remained small but constant.



On Wed, 14 Sep 2011 12:32:14 +0200
"Didier Dambrin"  wrote:

> The evolution seems to be going towards some kind of middleware guys, 
> in-between musicians & engineers, who program presets for engines put in 
> black box "instruments", so that the musician can get access to non-sampled 
> FM presets, while not having access to FM at all.
> Well, seems to be what NI is doing as well.
> 
> It already happened with samplers, in the past the musician got a sampler, 
> that he could either program himself or feed with soundbanks. These days a 
> musician buys a rompler, & doesn't have access to the editor (which 
> obviously has to exist for the middleware guy who made the soundbank).
> 
> 
> 
> 
> >I don't think musicians mind tweaking and poking presets (what's the worst 
> >that can happen?), so it's really up to the makers to provide plenty of 
> >decent preset sounds. NI FM8 for example has a pretty good selection of 
> >presets and is a popular FM synth these days, I rate it pretty highly 
> >myself.
> >
> >
> > -Original Message-
> > From: music-dsp-boun...@music.columbia.edu 
> > [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
> > Sent: 14 September 2011 11:19
> > To: A discussion list for music-related DSP
> > Subject: Re: [music-dsp] FM Synthesis
> >
> > I believe FM sounds got back in fashion at the time everyone had forgotten
> > that shitty GM FM bank in Windows, when FM meant cheesy MIDI files.
> > ..but the problem with FM is that no one can program presets, it's very
> > unpredictable when you deal with more than 3 operators, and I'm not sure
> > today's musicians wanna deal with such complexity anymore. Actually, if 
> > the
> > story that "most DX7s never got their patches tweaked" is true, maybe no
> > musician ever wanted to deal with such complexity.
> >
> >
> >
> >> On 13/09/2011 21:06, Theo Verelst wrote:
> >>> Hi
> >>>
> >> ..
> >>> Remember that Frequency Modulation of only two operators already has
> >>> theoretically (without counting sampling artifacts!) a Bessel-function
> >>> guided spectrum which is in every case infinite, although at low
> >>> modulation indexes the higher components are still small. Also think
> >>> about the phase accuracy: single precision numbers are not good at
> >>> counting more samples than a few seconds for instance.
> >>>
> >>
> >> Not too much of a problem if you use table lookup, which is what I
> >> assume the DX 7 did. Phase errors are a problem in single precision if
> >> you compute and accumulate.
> >>
> >>
> >> ..
> >>> Oh, there are Open Source FM synths maybe worth looking at: a csound
> >>> "script" (or what that is called there)
> >>
> >>  Csound has the "foscil" two-operator self-contained opcode, and of
> >> course you can roll your own operator structures ad lib. Somewhere there
> >> is a full DX 7 emulation complete with patches (poss in the Csound book;
> >> not to hand right now).
> >>
> >> Have we now reached the point where FM sounds are back in fashion?
> >>
> >> Richard Dobson
> >>
> 
> 
> 
> 
> 
> 
> 


-- 
Andy Farnell 


Re: [music-dsp] Recording very long sound files

2011-09-14 Thread Andy Farnell


You can compute that without listening.
In the simplest case play a low frequency sine
into the stream and use a differentiator in 
a batch program to look for any jumps in the output.


On Wed, 14 Sep 2011 13:41:28 +0200 (CEST)
Laszlo Toth  wrote:

> > Do you actually find discontinuities in the recording?
> 
> We want to avoid listening to dozens of hours of recordings. Also, the

-- 
Andy Farnell 


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Andy Farnell

On Wed, 14 Sep 2011 12:38:41 +0200
Emanuel Landeholm  wrote:

> How did Yamaha deal with aliasing? They didn't. Their oscillators were analog.

They were not software synthesisers. Yamaha did make real analog 
synthesisers for a long time before the DX7. The DX oscillators 
were implemented on digital integrated circuits, and were clocked,
summed, phase modulated and converted with DACs. They were not
digitally controlled continuous oscillators, so aliasing was 
both a theoretical and practical problem.

And yes Tom, IIRC you could get some nasty aliasing
from a DX patch quite easily.
 

-- 
Andy Farnell 


Re: [music-dsp] Recording very long sound files

2011-09-14 Thread Andy Farnell
On Wed, 14 Sep 2011 12:09:45 +0200 (CEST)
Laszlo Toth  wrote:

> Our problem is that the number of samples differs significantly
> between the recordings collected on different days (the largest difference
> is up to 5 minutes!). 

What is the condition for exit?

You should write 44100 * 60 * 60 * 8 = 1,270,080,000 samples

Decouple the acquisition code from the disk streaming and
be sure you can write 1,270,080,000 zeros into a test file
consistently first.

You are using a long long to index this, aren't you?
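A quick back-of-envelope on the sizes involved (my own arithmetic, not from the thread) shows why index width matters here: the 8-hour frame count just fits a signed 32-bit integer, but the byte count for 16-bit stereo does not, which is one classic cause of silently truncated long recordings.

```python
# Sanity check of the sizes involved in an 8-hour 44.1 kHz recording.
SR = 44100
HOURS = 8
frames = SR * 60 * 60 * HOURS                 # 1,270,080,000 frames
bytes_16bit_stereo = frames * 2 * 2           # 2 bytes/sample * 2 channels
fits_in_int32 = bytes_16bit_stereo < 2**31    # False: need a 64-bit index
```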



> The sound is digitized via an Alesis MultiMix4
> mixer, and gets recorded using our own Java code on a laptop. Does anyone
> have an idea what's going on? Is the sampling rate of the sound card
> instable? Or the internal clock of the laptop is not reliable? Or the
> recording software loses packets somehow? What would be the best way to
> debug this?
> Thanks a lot,
> 
>Laszlo Toth
> Hungarian Academy of Sciences *
>   Research Group on Artificial Intelligence   *   "Failure only begins
>  e-mail: to...@inf.u-szeged.hu*when you stop trying"
>  http://www.inf.u-szeged.hu/~tothl*
> 


-- 
Andy Farnell 


Re: [music-dsp] FM Synthesis

2011-09-12 Thread Andy Farnell


If you are heading towards DX7 style FM then notice
that only two of the oscillators (2 and 6) can have
feedback, and that this is self feedback. There are
no arbitrary feedback paths containing more than one
node and no nodes that aren't leaf nodes, so none have
self feedback + modulation from another oscillator.
(See Chowning and Bristow)

Most of the time these stand in for band limited saws
(because 1:1 self modulation produces all harmonics) 
or as noise sources.

Being clear whether the aim is to build a generally 
re-patchable set of oscillators that is incidentally 
an FM synthesiser, or to emulate a classic behaviour, 
will help you decide.
For the former, add noise in place of self modulating
oscillators. For the latter, there are 32 familiar
arrangements (certainly more are possible, but I guess
they can be shown redundant). These can be stored as a
set of weights and orders in a structure passed in to the
main loop and on each block you compute _all_ the 
oscillator phase increments, lookups and results.
You shouldn't need to reduce your block size to 1, 
or the function call overhead is on every sample!
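The single-sample feedback constraint (raised in the quoted discussion below) can be sketched as a minimal self-modulating FM operator; this is an illustrative sketch, not DX7 code, and all parameter names and values are my own. The feedback term must be the previous output sample, so the inner loop runs per sample even when you batch the bookkeeping per block, and the state has to carry across block boundaries:

```python
import math

def fm_selfmod_block(n, freq, sr, beta, phase=0.0, prev=0.0):
    """One FM operator with single-sample self-feedback.

    Even when processing in blocks, the feedback term comes from the
    previous output sample, so the inner loop is per sample.
    Returns (samples, phase, prev) so state carries across blocks.
    """
    out = []
    inc = 2.0 * math.pi * freq / sr
    for _ in range(n):
        s = math.sin(phase + beta * prev)  # modulate by the last output
        out.append(s)
        prev = s
        phase = (phase + inc) % (2.0 * math.pi)
    return out, phase, prev
```

Because the state is returned and passed back in, two 64-sample blocks produce exactly the same samples as one 128-sample block.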



On Mon, 12 Sep 2011 17:04:27 +0200
Andre Michelle  wrote:

> I guess, that's it and it makes sense now to me.
> 
> This can make FM synthesis quite cpu expensive however. Everything must be 
> included into a big for-loop.
> 
> Thanks.
> 
> --
> aM
> 
> 
> On Sep 12, 2011, at 4:56 PM, Brad Smith wrote:
> 
> > I always understood FM feedback to require a 1 sample delay. The
> > output of the current sample must affect the output of the next
> > sample. If you put, say, a 64 sample delay, it is like you are running
> > 64 delayed feedback loops in parallel, each of them unrelated, and
> > there won't be any coherence between them (probably it just turns into
> > white noise once the feedback gets going).
> > 
> > I think this precludes being able to buffer individual operator
> > outputs, though; you need to calculate the whole chain on each sample
> > to produce the feedback value needed for the next sample.
> > 
> > -- Brad Smith
> 


-- 
Andy Farnell 


Re: [music-dsp] Multichannel Stream Mixer

2011-08-30 Thread Andy Farnell

Assuming you're using double precision and
summing less than a few million channels. :)
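A sketch of why plain summation is enough (assuming double-precision accumulation, as above); the clipping decision is deliberately left to a separate output stage:

```python
def mix(channels):
    """Mix by plain summation, accumulating in double precision.

    With 64-bit floats, summing a handful of [-1, 1] channels loses
    essentially no precision; whether and how to clip the result is a
    separate decision at the output stage.
    """
    n = len(channels[0])
    out = [0.0] * n
    for ch in channels:
        for i in range(n):
            out[i] += ch[i]
    return out
```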


On Tue, 30 Aug 2011 17:29:29 +0200
Johannes Kroll  wrote:

> On Tue, 30 Aug 2011 15:34:47 +0200
> "StephaneKyles"  wrote:
> 
> > I just wonder is summing the buffers is just enough.
> 
> It is. 
> 
> Add them.
> 
> 


-- 
Andy Farnell 


Re: [music-dsp] music-dsp Digest, Vol 92, Issue 7

2011-08-26 Thread Andy Farnell
otherwise interesting CCRMA web- 
> > pages, are a matter of more organized courses and deeper  
> > contemplations on the relevant (actual, NOT standard mathematical  
> > simulation models and erroneous use of sampled impulse theory)  
> > theories. Probably that belongs at PhD level often, though that  
> > didn't stop a lot of well known artists from delving seriously into  
> > the subjects over the years in the day.
> >
> > Theo.
> 
> Dr Victor Lazzarini
> Senior Lecturer
> Dept. of Music
> NUI Maynooth Ireland
> tel.: +353 1 708 3545
> Victor dot Lazzarini AT nuim dot ie
> 
> 
> 


-- 
Andy Farnell 


Re: [music-dsp] Electrical Engineering Foundations

2011-08-24 Thread Andy Farnell

You raise a good point Theo. Useful first year texts on this 
often go under the title of "Signals and systems" these days. 
It needs a good teacher to animate this in an engaging context.

When I took CS&EE back at college, I'm fairly sure UCL was
one of the few places still doing the joint honours thing,
where you practically did two degrees running backwards and
forwards between lectures in the computer science department 
and the engineering faculty... complete with timetable clashes.

Yet, in some ways the worlds of engineering mathematics and 
algorithms never fully met, except on the breadboard when
making an analogue filter for my homebuilt DAC or something.
20 years later I still don't connect some of the concepts easily. 
With hindsight, growth and experience, plus working as an 
educator, I now see how difficult it is to fit these things
into a coherent syllabus. On some courses the whole of sampling
theory and LTI is reduced to one or two weeks. Many have
never seen an actual resistor and capacitor, nor will they.

I think that we must remember that things like audio and video
development for entertainment are highly interdisciplinary. It is not
possible, however you cut a 3 year slice of undergraduate time,
to preserve depth and breadth. 

It's hard to know a good solution. More modular, cross department
pick and mix? That only works at big schools with a lot of staff.
But then how many students choose to take a hard option in 
convolution when it clashes with post-production? And not just
because it's abstract, but because knowing Pro-Tools seems a much
more immediately valuable currency in the job market. 
We have to compete with so much dazzle and corporate BS and spiritual
alienation from work these days that sometimes I think foundations
are scoffed at.

So, there appears to be a tension in this highly student-oriented age.
In some ways, those outside college who have time and motivation
to self teach have the best options. It can be project driven.
Eventually you hit a problem hard enough to make you reach for
an orthodox DSP or engineering math book. But that won't happen
to most until 10 or more years down the path, by which time one
loses the ability to easily absorb that kind of material.
So young kids used to go to college to be _made_ to take the 
medicine that would be good for them down the line. Now motivations 
have changed, and even more so with changing financial circumstances.

The problem is this: I recall a long summer sweating in a
lab doing logic problems, De Morgan's laws, Karnaugh maps, simplifying
propositions. All seemed very proper at the time. I'm sure it was good
for me. Sure it sharpened my mind for philosophy and better coding
and more. But never in my career have I needed to build a half-adder. 
At school these days I have to justify every minute of my time
with a student.


So my question for you Theo ... put on the profs hat...

How would you make these very powerful and (to me) wonderful
and mind boggling things in signals theory interesting and relevant
in an age where we have to compete with autotune and facebook?

Andy


On Wed, 24 Aug 2011 18:00:40 +0200
Theo Verelst  wrote:

> Just to maybe put some people at ease and hopefully arousing some 
> discussions, I'd like to point the attention of a lot of people in the 
> DSP corners of recreation and science, hobby and serious research to the 
> general foundations for Sampling Theory and Digital Signal Processing 
> and possibly information theory and filter knowledge.



-- 
Andy Farnell 


Re: [music-dsp] (very) old limiter thread

2011-06-27 Thread Andy Farnell

I don't remember that exact thread Bram. 
But there is an old technique of using a side chain
with more than one level detector, which are then
summed (averaged) to drive the gain stage.
(factor out the gain stage)

This is what you might call a kind of parallel
compression/limiting. The popularly known "parallel
compression" is the special case where one
copy is the direct signal mixed in; in strict terminology
that makes it a dynamics _effect_, not a dynamics _process_.

But instead, you can have different settings for
each level detector. The typical setup is to have a 
short (fast response) one for transients and a more 
sluggish one for the slowly growing sustained parts. 
This way you get control over "punch" and "body" 
separately.
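A minimal sketch of that dual-detector idea, assuming simple one-pole followers and averaging the two detectors as described; all coefficient values here are illustrative, not from any particular unit:

```python
def follower(x, attack, release):
    """One-pole envelope follower with separate attack/release
    coefficients (smaller coefficient = faster response)."""
    env, out = 0.0, []
    for s in x:
        a = abs(s)
        coef = attack if a > env else release
        env = coef * env + (1.0 - coef) * a
        out.append(env)
    return out

def dual_detector_sidechain(x, fast=(0.5, 0.9), slow=(0.99, 0.999)):
    """Average a fast and a slow detector to drive one gain stage.

    The fast one tracks transients ("punch"), the slow one the
    sustained level ("body")."""
    f = follower(x, *fast)
    s = follower(x, *slow)
    return [0.5 * (a + b) for a, b in zip(f, s)]
```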


On Mon, 27 Jun 2011 19:45:38 +1000
"Ross Bencina"  wrote:

> Hey Bram
> 
> If I remember correctly there was a multi-envelope-follower algorithm 
> proposed by James Chandler Jr a while back.
> 
> The idea was to have slow-attack slow-release envelope followers to deal 
> with long-range high level signals and fast attack/release to deal with 
> transients. I'm not sure if it was cascaded the way you describe below 
> though. I thought it was more like:
> 
> input -> limiter1 -> limiter2 -> ... -> limiterN -> output
> 
> Where each limiter had increasingly fast attack/release, so the final 
> limiter would limit only transients that got through the earlier (slower) 
> stages.
> 
> Obviously you can optimise this in different ways.
> 
> And of course i could be remembering things wrong.
> 
> You can do similar things with multiple peak-hold running max filters using 
> the O(1) algorithm I posted a while back.
> 
> Ross.
> 
> - Original Message - 
> From: "Bram de Jong" 
> To: "A discussion list for music-related DSP" 
> Sent: Monday, June 27, 2011 6:16 PM
> Subject: [music-dsp] (very) old limiter thread
> 
> 
> > Hello all,
> >
> >
> > quite a while (years rather than months!) there was this long
> > discussion about limiters at musicdsp and someone mentioned a design
> > of cascading envelope followers. I'm thinking it was RBJ, but not too
> > sure... The design was something like:
> >
> > e1 = env_follower1(abs(signal))
> > e2 = env_follower2(e1)
> > ...
> > eX = ...
> >
> > limiter_signal = max(e1, e2, ...)
> >
> > if (limiter_signal > thresh) ...
> >
> > Can anyone recall this conversation? I'm looking for the original!
> >
> >
> > - Bram
> >
> > -- 
> > http://www.samplesumo.com
> > http://www.freesound.org
> > http://www.smartelectronix.com
> > http://www.musicdsp.org
> >
> > office: +32 (0) 9 335 59 25
> > mobile: +32 (0) 484 154 730
> 


-- 
Andy Farnell 


Re: [music-dsp] Long-term average spectra; testing A/D converter

2011-05-18 Thread Andy Farnell


First thing that comes to mind is you might save
a lot of time by contacting Brian at echonest, as
they basically have a big db of musical feature
analysis covering different genres etc. 

If you think about dub reggae vs a brass band then
you'll agree it's hardly good science to assume a
one-size-fits-all spectral profile for music in general.

BTW this does seem a familiar problem; searching
the archives you may find someone else who needed to
compute broad programme spectra for test purposes.
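A crude sketch of computing such a broad programme spectrum: average the magnitude-squared DFT over successive frames. This uses a pure-Python DFT, fine for short test signals; a real implementation would use an FFT with windowing and overlap.

```python
import cmath, math

def long_term_average_spectrum(x, frame=64):
    """Average the magnitude-squared DFT over successive frames: a
    crude long-term average spectrum, enough for shaping test noise."""
    bins = frame // 2 + 1
    acc = [0.0] * bins
    frames = 0
    for start in range(0, len(x) - frame + 1, frame):
        seg = x[start:start + frame]
        for k in range(bins):
            c = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                    for n in range(frame))
            acc[k] += abs(c) ** 2
        frames += 1
    return [a / frames for a in acc]
```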


On Wed, 18 May 2011 01:42:04 -0700
Jerry  wrote:

> I'm trying to discover any work about long-term average spectra of speech and 
> music. For speech, I found "Some remarks on the average speech spectrum" at
> http://www.speech.kth.se/prod/publications/files/qpsr/1964/1964_5_4_013-014.pdf
> which is work done in 1964 using the so-called "chorus" method where a bunch 
> of people spoke at once. I think this is good stuff and a decent 
> approximation of the spectrum is a second-order lowpass filter with the -3 dB 
> point at 1 KHz.
> 
> I'm curious to know if there are other papers in this area as well as 
> long-term music spectra.
> 
> I know I've heard this discussed somewhere, probably an Audio Engineering 
> Society convention, but I can't seem to pull it out of the rusty neurons.
> 
> What I'm really interested in is testing an analog-to-digital converter with 
> believable signals, and such signals need to be computable, i.e., not played 
> from a recording. Certain A/D measurements are highly dependent upon the 
> signal used and I'm having trouble deciding on a testing approach.
> 
> Jerry


-- 
Andy Farnell 


Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-20 Thread Andy Farnell


Evaluation copies for devs and education... money
better spent than any advert in a trade journal 
for making loyal customers :)

What I'm thinking is something for the students who want
to build guitar effects pedals. These days we don't have 
enough time on syllabus to cover the analogue electronics 
theory. Every year I get a few, and unless they are strong
electronics hobbyists with plenty of soldering-iron time
and a pile of things they have already built, the usual outcome
is that I steer them towards a software project. 

There's a gap between my own high and low level experience 
of full systems or building computers from chips, and it's
amazing the integration that has happened in the last few years.
Today I was reading about "DaVinci" architectures and really
starting to see the potential in things like the Beagle board
and other "Linux + DSP on a chip" thingies; it's exciting
to catch up in this area.

If they were cheap enough to have 5 or 10 lying about, the
students could use their C and DSP skills to do non-desktop 
projects and have a revival in stomp boxes as an alternative
to the iPhone/Android app that most of them want to do this year.

Happy to hear some advice from anyone who has tried similar
in an educational setting. 

cheers,
a.


On Wed, 20 Apr 2011 21:50:52 +0100
Gwenhwyfaer  wrote:

> On 20/04/2011, Andy Farnell  wrote:
> >[re dsPIC] These look really nice and affordable.
> 
> They're even more affordable when you consider that Microchip will
> happily send you ten of them for free three times a month. :)


-- 
Andy Farnell 


Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-20 Thread Andy Farnell

These look really nice and affordable.

On Wed, 20 Apr 2011 08:12:06 -0700
Eric Brombaugh  wrote:

> Microchip dsPIC

-- 
Andy Farnell 


Re: [music-dsp] good introductory microcontroller platform for audio tasks?

2011-04-20 Thread Andy Farnell

An important factor is your deployment goals. There's
some difference in philosophies between developing
for a specific semiconductor/chip and working with
a system (board level architecture) in mind.

If you aim to produce something very small and cheap,
like a wearable or abandonable bit of art or fun maybe like
this crazy thing someone posted on Pd list yesterday

http://www.hans-w-koch.org/installations/thankyou.html

then a PIC/uC would be the way to go because you could
make them for pennies with a bit of clever sourcing.

But when you look around at the huge array of SOC
and SOM stuff at the "Little board" or "Stick" scale
it's hard not to be taken with a route where you can
have a full Linux system in the palm of your hand.
You can telnet/ssh into it and use FTP instead of a 
bespoke development tool. Most come with a viable
kernel already rolled for supporting all the on-module
devices, usually with the Busybox + uClib offering,
and some even have ambitious claims to run heavier 
distros. 

What I noticed about the Beagle Board the other day,
which I had always thought of as "just another SBC" is
that it has a TI C64x+ DSP on there just begging to be turned
into an effect pedal or synth. Anyone gone that road
with the Beagleboards yet?

Andy

On Tue, 19 Apr 2011 08:56:15 -0700
Kevin Dixon  wrote:

> Hello list,
> I'm experienced in developing DSP routines in C/C++ for desktop
> computers, and have gotten my feet wet on TI MSP430, doing boring
> things.
> My project chiefly involves implementing a MIDI interface and also
> implement an LFO.
> I also need to read analog waveform, but sound quality is not really
> important, I'm guessing an 8bit AD would suffice for the task. That
> being said, in the future I will inevitably want to do some real DSP,
> so families that can support 16-24bit AD at higher sample rates would
> be preferable.
> 
> In summary these are my requirements:
> -UART
> -A to D
> -D to A
> 
> My EE friend is recommending I go PIC, but the Arduino looks
> promising, especially for fast return on effort :) I guess startup
> cost is an issue too, I'd like to be up and running for about 50 USD.
> 
> Any thoughts/recommendations? Thanks,
> 
> -Kevin


-- 
Andy Farnell 


Re: [music-dsp] Fwd: digital EQ (passive) adding gain

2011-03-12 Thread Andy Farnell

How do you know these filters don't have a resonance?

That could explain your results. 

Chances are these are digital versions of classic analogue
responses with a peak at the cutoff. For your test to make 
sense you need a perfectly flat passband response.

Try measuring the dB energy in each band before and after
processing.

andy

On Sat, 12 Mar 2011 14:59:15 -0500
Eldad Tsabary  wrote:

> Hello all,
> 
> While looking with students on methods of increasing dynamic range of
> pieces, we EQed individual tracks that have no business in the lower
> range with a high pass filter (2nd order) at 120 Hz. The idea was that
> getting rid of rumble from all of the tracks (except bass-range tracks)
> can both clean the overall mix and reduce measured amplitude peaks of
> individual tracks without losing actual loudness (thus allowing to bring
> the entire mix to a louder RMS).
> This, to my surprise, didn't work at all. In all cases I tried so far,
> instead of reducing the dB measurement, the signal after processing had
> a higher dB peak measurement (I used non-realtime EQ in order to use
> higher quality DSP but also to be able to measure the overall signal).
> 
> It doesn't make much sense to me because the HPF is supposedly just a
> passive filter. Using HPF in  Pro Tool 8's EQ on a drum overhead track
> reduced the overall audible loudness and got rid of the bassy sound of
> the kick. It sounded softer but strangely it measured as 2 dB higher
> than the original signal.
> 
> I tried the scientific EQ in Adobe Audition, which is supposedly a well
> designed low phase filter (same setting - 2nd order, 120 Hz), and it
> resulted in only a 0.5 dB increase - but still an increase.
> 
> Does anyone know of this? Anyone has knowledge or ideas about the
> possible cause of this?
> 
> The several reasons that I have been thinking of are:
> 1. quantization error - though it seemed to me waaay too much of an
> increase
> 2. some individual transients that were somehow corrupted in the process
> 3. dc offset
> 4. phase issue
> 
> Any insights would be helpful
> Thanks
> Eldad
> 
> 


-- 
Andy Farnell 


[music-dsp] Audio Mostly 2011

2011-03-10 Thread Andy Farnell
*
Audio Mostly 2011– “A conference on interaction with sound”
in co-operation with ACM - SIGCHI
September, 7 - 9 – Coimbra, Portugal
*

CALL FOR PAPERS – AUDIO MOSTLY 2011 – 6TH CONFERENCE ON INTERACTION 
WITH SOUND

Audio in all its forms – music, sound effects, or dialogue - holds
tremendous potential to engage, convey narrative, inform, create 
attention and enthrall. However, in computer-based environments, for example
games and virtual environments, the ability to interact through and with sound 
are still today underused. The Audio Mostly Conference provides a venue to 
explore and promote this untapped potential of audio by bringing together audio 
experts, content creators and designers, interaction designers, and behavioral
researchers.

See here for more info.

http://www.audiomostly.com/


-- 
Andy Farnell 

Re: [music-dsp] adaptive filter pitch detection?

2011-02-19 Thread Andy Farnell
On Fri, 18 Feb 2011 17:52:43 -0500
Stephen Sinclair  wrote:

> What is the nature of your input signal?  Is it harmonic? Does the
> frequency change quickly or slowly?  These could be important details.

For further clarification, do you want continuous pitch detection,
or pitch recognition, from a set of markers, like musical notes?
Many people who say they want pitch detection actually want
pitch recognition. The differences between detection and recognition
are discussed by, amongst others, Quine, Shannon, Weaver and Pierce.

Paraphrasing Quine:

[...the central theory of communication] makes sense relative
to one or another pre-assigned matrix of possibilities, some
or other checklist. You have to say up front what counts as
a feature.

If you want to know whether specific notes have been hit then a
filterbank method combined with zero-crossing detection will work
pretty quickly.
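One way to build the filterbank half of such a note recogniser (an assumption on my part, not necessarily the exact method meant above) is a Goertzel detector per candidate note, picking whichever pre-assigned note responds most strongly:

```python
import math

def goertzel_power(x, sr, freq):
    """Power of x at a single frequency (Goertzel algorithm): one
    cheap 'filter' per note in a recognition filterbank."""
    w = 2.0 * math.pi * freq / sr
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def strongest_note(x, sr, note_freqs):
    """Pick the pre-assigned note whose detector responds most."""
    return max(note_freqs, key=lambda f: goertzel_power(x, sr, f))
```

This embodies Quine's point directly: you must say up front which note frequencies count as features.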

-- 
Andy Farnell 


Re: [music-dsp] Modular synthesis percussion?

2011-02-17 Thread Andy Farnell

If you can translate from LISP then a good start can be made here:
https://ccrma.stanford.edu/~sdill/220A-project/drums.html


And these additive patches, like Risset's drum, are still 
great templates for other experiments

http://www.codemist.co.uk/AmsterdamCatalog/02/index.html

What I found makes a significant difference in making
good percussion is having a genuine exponential envelope
generator.
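A sketch of what "genuine exponential" means in practice: a constant per-sample multiply rather than a linear ramp. The t60 parameterisation (time to fall 60 dB) is my own choice for the sketch, not from Risset's patch:

```python
import math

def exp_decay_env(n, sr, t60):
    """Exponential decay envelope: multiply by a constant ratio per
    sample. t60 is the time to fall by 60 dB; unlike a linear ramp,
    the decay sounds natural for struck percussion."""
    ratio = 10.0 ** (-60.0 / 20.0 / (t60 * sr))  # per-sample gain
    env, out = 1.0, []
    for _ in range(n):
        out.append(env)
        env *= ratio
    return out
```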


On Thu, 17 Feb 2011 19:01:21 +0100
Gabor Pap  wrote:

> Hi,
> 
> There are some percussion-related articles in the excellent Synth
> Secrets series (by Sound On Sound):
> http://www.soundonsound.com/sos/allsynthsecrets.htm
> 
> Regards,
> Gabor
> 
> On Thu, Feb 17, 2011 at 6:51 PM, Alan Wolfe  wrote:
> > Hey Guys,
> >
> > Does anyone know how to do percussion sounds with modular synthesis?
> >
> > I'm talking about just using VCO, VCA, EG etc, not an actual
> > percussion module (:
> >
> > I've been looking around and all the info i can find shows people
> > using percussion modules which isn't so helpful if you only have the
> > basic tools at your disposal hehe.
> >
> > Or i guess, info about percussion synthesis in the DSP realm at all
> > would be nice too if anyone can point me at any of that.
> >
> > Thank you!!
> > Alan
> >


-- 
Andy Farnell 


Re: [music-dsp] looking for a flexible synthesis system technically and legally appropriate for iOS development

2011-02-16 Thread Andy Farnell



Both impressive runs. Dan, I think raw uptime is
the currency here :) The 1000 year piece is a cool idea.

How about a musical tower of Hanoi?

Perhaps the success of installations like these
has to do with the systems on which they run, 
since we're often building in an embedded or minimal 
context.

The rig I put together with RTCmix was a stripped down 
Debian, with not much more than coreutils and what it 
took to build and run the barebones.


On Wed, 16 Feb 2011 19:04:28 +
Dan Stowell  wrote:

> Well this one's been going since 1st Jan 2000, using SuperCollider 2 I 
> believe (though I don't know if the actual machines have been running 
> continuously or if there is replacement):
> http://longplayer.org/
> 
> Dan
> 
> On 16/02/2011 18:21, Victor Lazzarini wrote:
> > I wonder what is the record of synthesis servers running installations.
> > A Csound server is running
> > this installation http://www.flyndresang.no/en/om/ since 2006 and will
> > finish in 2016. Fairly impressive, even if I say so
> > myself!
> >
> > Victor
> > On 8 Feb 2011, at 20:55, Andy Farnell wrote:
> >
> >> Also on Brad's RTCmix, I have never found anything more reliable
> >> for basic functions, in a test I had a "sound server installation"
> >> mixing wavs to make random ambient textures, it ran for 4 months
> >> without a glitch.
> >
> 
> 
> -- 
> Dan Stowell
> Postdoctoral Research Assistant
> Centre for Digital Music
> Queen Mary, University of London
> Mile End Road, London E1 4NS
> http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
> http://www.mcld.co.uk/


-- 
Andy Farnell 


Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-02-08 Thread Andy Farnell
On Tue, 08 Feb 2011 01:04:53 +
Richard Dobson  wrote:


> I can't put a lot of time into this reply, too much else to do. But I 

Understood. Me too, just a few days off for hellraising and
then back to the grind too. Appreciate your banter on this Richard.

>  Engineers know how to do more with less, while the 
> rest of us manage to do a little with rather a lot. Charitable enough?

Sure, sure, clear enough I got that. And it's a compliment to all engineers.
My point is that it makes engineers one-dimensional if the only consideration
is money. Sometimes we engineers can do things safely (in Ada), or creatively,
or with other considerations, depending on what's important.


> Except that 
> all of a sudden people want to run a music studio on their iPod. 
> Engineers needed!

Yep, well there are a few stories I can tell you :)
RjDj just passed the 3 million download mark with the
Inception app. 


> I never suggested the system was working. Indeed I agree that it is not.
> get it working properly so that it ~can~ protect the little guys.

We can do that.

What we have in common is far more valuable than anything else.

It's hard to defend a radical position without seeming a precious
and self-absorbed asshole, but I am very sincere and have thought it through
for many years. 


> Now, it needs serious hardware to get it viable for the consumer in real 

Nothing like an idea whose time has come. Maybe it's the right idea at the
wrong time, and you need to hang in there. I spent 5 years mulling over
theories of procedural audio, reading papers and books by Perry Cook and
others, impatient that people didn't see the possibilities I was seeing
in game audio. It wasn't until 2005 that it was obvious things were coming
to fruition, and 6 years further on I'm still having to work hard to push
forward, but there's an unmistakable trajectory to the project now.
I think there's a lull in music production as an art, except on the fringes
at present, and new opportunities for new synthesis methods will come
on the wave of the next revival.


best,
andy
-- 
Andy Farnell 


Re: [music-dsp] looking for a flexible synthesis system technically and legally appropriate for iOS development

2011-02-08 Thread Andy Farnell

+1 for Zen Garden, because I was alongside Martin while 
he developed and know the code is quite lean and clean, designed
for mobile in mind (Android and iPhone) and he is quite liberal
about licensing. 

Also on Brad's RTCmix, I have never found anything more reliable
for basic functions, in a test I had a "sound server installation"
mixing wavs to make random ambient textures, it ran for 4 months 
without a glitch.


On Tue, 08 Feb 2011 16:08:11 +0100
Thomas Strathmann  wrote:

> On 2/8/11 15:39 , Miles Egan wrote:
> > I'd suggest you seriously consider rolling your own. It's not *that*
> > hard to build a simple audio graph system and you won't be tied to a
> > alien system with design priorities likely quite different than your
> > own. Interfacing with a big, complex, multi-platform audio environment
> > like CSound or Supercollider is going to cause more headaches than it
> > prevents, I expect.
> 
> Seems that no one mentioned Zen Garden 
> (https://github.com/mhroth/ZenGarden) yet. It's basically a headless C++ 
> reimplementation of PureData. Maybe it'd be worth a look.
> 
>   Thomas


-- 
Andy Farnell 


Re: [music-dsp] a multiband compression experiment

2011-02-08 Thread Andy Farnell
That sounds lovely Theo, really transparent on my 
monitors.

On Tue, 08 Feb 2011 20:51:42 +0100
Theo Verelst  wrote:

> Hi all
> 
> Using my new I7 motherboard's 192kS/s converters I thought I'd record a 
> short jazz piece to test a multiband compression scheme at that sample rate.
> 
> So I recorded 3 pieces with a Kurzweil PC3 into rosegarden, using a 
> Lexicon compression/reverb, mixed them together and fed them through the 
> 15 band filter/compression bank, and converted the result to a 44.1 kHz mp3:
> 
> http://www.theover.org/Kurz/ehwyg.mp3 (3.9 MB)
> 
> The song is made after the first part of "Everything Happens When You're 
> Gone" from the famous "Don't try this at home" from Michael Brecker, 
> which I studied long ago.
> 
> It works well, and I needed no additional production means or tricks, so 
> the whole path appears to be neutral.
> 
> Theo Verelst
> 


-- 
Andy Farnell 


Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-02-07 Thread Andy Farnell
On Mon, 07 Feb 2011 02:29:29 +
Richard Dobson  wrote:

> On 06/02/2011 18:53, Andy Farnell wrote:
> > Since there is nothing to divide the line between "this" virtual device
> The DX7 is an automaton. But in principle it can be modeled by a UTM. 
> That does not mean there is no dividing line between them.

Hi Richard, 

Thanks for the considered and thoughtful reply. I want to jump quickly 
through the following points because it is the end of your message
where things get interesting.

> > "Well they're the same thing", you may say.
> Of course, I ~wouldn't~. It is best in these sorts of discussions not to 

I'm sorry to put words into your mouth. Yes, I need to be careful
with any kind of theatre... that voice was the idealised interlocutor...
not you specifically... how _most_ people react to that.

But I assert, they are the same thing in practice.

We are arguing as experts of course, which actually makes this
interpretation more difficult, and less useful. And I'm ironically 
guilty of the same behaviour I hate by substituting the general
case for the specific. Yikes. sorry!

> Speed and acceleration are likewise 
> related, but not the same - one is the rate of change of the other. 
> ...
> You will have to give your definition of "congruent" - speed and 
> acceleration are I suspect not "congruent", unless all you mean is that 

Yes I mean they are trivially derivative. Note "trivially". And this
is what demands a symbolic representation to "put your money where
your mouth is", at which point, as I continue to argue, a patent is 
neither effective nor appropriate (compared to copyright).

... much that is interesting and true snipped

> In this respect I cite the often-quoted definition of an engineer: 
> "someone who can build for two bucks what anyone can build for
> three". 

That's one pretty narrow definition of an engineer, and a little
uncharitable. Sure, you can get those kinds of engineers to build
you a Tacoma bridge or solid fuel boosters for your space shuttle,
but the dollar saved will cost you two. Here's a real engineer for ya:

http://www.tc.umn.edu/~frede005/Brunel.html

Notice the hat. That's what you pay the extra dollar for.

I prefer this definition by Mr N. W. Dougherty: "The ideal engineer is a 
composite ... He is not a scientist, he is not a mathematician, he is
not a sociologist or a writer but he may use the knowledge and 
techniques of any or all of these disciplines in solving engineering 
problems."

Sometimes our engineering problems are social. But a little pressure
here, a little oil there, and always the gentle force of the better 
argument... :)

> > Should Yamaha have been able to monopolise the use of FM in
> > music synthesis as a result?
> > Categorically no! No! No! No!
> But they couldn't, and didn't. Wherever did that idea come from? 

Do a search on "Yamaha Patent FM". Does that look like a
widespread interpretation that is clear and unambiguous to you?

My argument is simple at this point. Development was stifled.

It is the effects, not the letter of the law that interests me.
Those (ordinary developers) who are threatened by a patent do not 
have the financial means to get clarification, so there's no point
raising the possibility of challenge here.

I just came back from a meeting at a major research university
where they are so afraid of submarine patents in "unrelated
areas" that they have no choice but to "jump in and take a risk"
because the complexity and cost of search is overwhelming.
Does that sound like a system that is working?

To use a drastic analogy from tin pot dictator politics; if I
allow one group of people to carry arms and legalise "self defence",
that has the chilling effect of another group staying at home on
election day. Without writing any laws to some effect, I can obtain 
that effect.

It is the effects of software patents that are the problem. 
They cannot avoid having this effect because they are insufficiently 
well defined. If they were so defined they would be written as
code and qualify for copyright not patenting. Thus I return
to my argument.

> sell a ~physical implementation~, i.e. in the form of a real-time 

Are you saying that no software implementations of FM, whether in
stand-alone or plugin form, whether free or for sale, were ever
held up or stifled by the existence and interpretation of the
Yamaha "FM patent"? What about hardware implementations that didn't
infringe on Yamaha's patent but never made to market because of
fear, uncertainty and doubt? That _never_ happened right?

> I think there almost certainly are strong arguments to be made against 
> at least the mechanisms and implementations of software patents, but I 
> suspect these 

Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-02-06 Thread Andy Farnell
On Mon, 31 Jan 2011 21:23:52 +
Richard Dobson  wrote:

> On 31/01/2011 12:53, Andy Farnell wrote:

> Er, they aren't, and never have been.

Hey Richard,

Sure they "aren't allowed". But they de facto _have_ _been_ 
allowed, and that's why we're having this discussion.

If the key to the argument for this stumbling mistake is the ill
formed notion of a virtual device then it will fall over easily.

Software patents were simply never a considered, rational move.
Instead they are a sleepwalk into a dream of market appeasement, 
based on a foggy understanding of the relationship between a real 
"device" and a virtual one. As you say they "recognise the notion", 
but haven't "thought about the definition". 

Since there is nothing to draw a line between "this" virtual device 
and a Universal Turing Machine, there is no partition between the 
abstract and the concrete. In the former case the machine definition will 
be written into the claim, which is concretised and becomes copyrightable; 
in the latter case the claim fails the "idea" (abstractness) test.


 --snip 

> Hence the classic original FM patent. It uses multiplication (can't be 

It's cool you picked FM, it helps develop an argument surrounding 
ambiguity and broadness:

In fact there never was a patent on FM. There was a patent on phase
modulation based on manipulation of an accumulator in a specific
way.  

"Well they're the same thing", you may say.

"Exactly!" I say.

So, why do we make this mistake? Because FM and PM are
congruent, that is to say, they are different mathematical
representations, with possibly different code flows, that
amount to the same thing. 
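
For the curious, here is a minimal numerical sketch of that congruence
(all values are my own illustrative choices, nothing from the Yamaha
patent): an oscillator whose frequency is modulated via a phase
accumulator produces the same samples as direct phase modulation with
index beta = deviation / modulator frequency, up to the accumulator's
integration error.

```python
import math

SR = 48000      # sample rate in Hz (arbitrary demo value)
F_C = 440.0     # carrier frequency
F_M = 100.0     # modulator frequency
DEV = 50.0      # peak frequency deviation in Hz
N = SR // 10    # 100 ms of samples

# "FM": integrate the instantaneous frequency sample by sample
# with a phase accumulator (the classic digital-oscillator view).
fm = []
phase = 0.0
for n in range(N):
    t = n / SR
    fm.append(math.cos(phase))
    inst_freq = F_C + DEV * math.cos(2 * math.pi * F_M * t)
    phase += 2 * math.pi * inst_freq / SR

# "PM": modulate the phase directly; the modulation index
# beta = DEV / F_M is the analytic integral of the deviation.
beta = DEV / F_M
pm = [math.cos(2 * math.pi * F_C * n / SR
               + beta * math.sin(2 * math.pi * F_M * n / SR))
      for n in range(N)]

max_diff = max(abs(a - b) for a, b in zip(fm, pm))
print(max_diff)  # small; the residue is just the accumulator's integration error
```

Two different code flows, one signal: which one you "patented" is a
matter of how you wrote it down.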

"Frequency Modulation" became the marketing phrase. Maybe
because a bunch of execs thought it sounded cooler. But
that's not the end of it because someone, I forget who, maybe 
Beauchamp or Arfib, showed at around the same time
that FM and wave-shaping could be considered congruent. You can
see modulation as dynamic wave-shaping in the case that the
shaper is another oscillator rather than a table. Therefore,
whether you call your technology FM or wave-shaping really became
a matter of interpretation (in code), based on whether you were
using a stored or a generated function. This is why in my book
I was quite clear to draw a distinction, as is found throughout
design theory, between model, method and implementation.
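
As a toy illustration of that last point (again my own sketch, with
arbitrary values): the very same phase-modulation samples can be
produced either by stating the PM formula directly, or by pushing the
modulator output through a shaper that is rebuilt every sample, i.e.
dynamic wave-shaping. Same signal, two code flows.

```python
import math

SR, F_C, F_M, BETA, N = 48000, 440.0, 100.0, 0.5, 1000  # arbitrary demo values

def make_shaper(carrier_phase):
    # The "shaper" is itself an oscillator: a cosine whose offset
    # (the carrier phase) advances every sample -- dynamic wave-shaping.
    return lambda x: math.cos(carrier_phase + BETA * x)

direct, shaped = [], []
for n in range(N):
    th_c = 2 * math.pi * F_C * n / SR   # carrier phase
    th_m = 2 * math.pi * F_M * n / SR   # modulator phase
    mod = math.sin(th_m)                # modulator output
    direct.append(math.cos(th_c + BETA * mod))  # phase modulation, stated directly
    shaped.append(make_shaper(th_c)(mod))       # the same samples via a shaper

print(max(abs(a - b) for a, b in zip(direct, shaped)))  # prints 0.0
```

The arithmetic is identical; only the framing (model vs. method vs.
implementation) differs, which is exactly why a patent on one framing
gets read as a claim on all of them.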

So what significance does this have, given we all agree that so 
called "ideas", abstract mathematical formulations, even if they 
are functional, cannot be admitted as patents in the absence of a 
concrete design and purpose?

Since Aristotle, a trisection of realms, often encountered in social 
and psychological enquiry, distinguishes the "real", the "imaginary"
and the "symbolic". Symbols and the rules of their combination
are a shared, public realm, though unique ideas may be communicated
by combinations of atomic symbols.  This is something we grasp easily 
in computer science and has bearing on much of software engineering. 
Classically the symbolic mediates the real and the imaginary. 
Outside computing it's normally seen as a madness or social malady 
when there is sufficient confusion of any of these realms. 

The patent system does not properly distinguish these
things for software. It was never designed for software. Software
(purely symbolic) was shoe-horned into the patent system to
meet industrial demands much too fast. It stops at the
symbolic and merely implies the real (design). Since both abstract
and concrete symbolic forms are possible there is a SIGNIFICANT
AMBIGUITY surrounding any attempt at a "software patent".

What did this cause? 

IMHO, a great injustice. One of many mischiefs software patents
perpetrate echoing through the last decade.

Should Yamaha have been able to obtain the patent they
did? Let's say "yes", (keeping aside my other objections to software 
patents).

Should Yamaha have been able to monopolise the use of FM in 
music synthesis as a result? 

Categorically no! No! No! No!

Notwithstanding that there were other uses of FM in music 
synthesis prior to the Yamaha patent, the point is
that the interpretation of the patent, in reality,
by people who were not qualified or diligent enough to
understand its narrow symbolic meaning, was too broad.

You may conclude development was stifled.

Why? Because the patent, while for a very specific implementation, 
was interpreted and defended as a claim on a broad class of methods.
Everything else in that class was effectively prohibited during 
this time.

Were Yamaha rationally justified in filing a patent, which was
on a design, but was interpreted as being on a process? I'm sure
they didn't intend to muddy this boundary but given that 
it was actually a VL

Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-31 Thread Andy Farnell
On Mon, 31 Jan 2011 12:11:57 -0500
robert bristow-johnson  wrote:

> Andy,

> must such a thing be a physical object?

It's a tough one Robert, but that's the way I learned it.

Patents are for things, copyright is for information,
trademarks are for unique brand identifiers.

What about ideas?

"You don't get protection for ideas. Ideas are ten
a penny." 

I remember the words very well. And my law teacher 
wasn't any random dabbler, she was some high ranking 
city barrister.

I spent most of my life understanding patents, copyright
and trademarks this way, tangibility and manifest form were 
always essential tests of design against idea. 

The boundaries have really moved in the last 15 years, 
and so rapidly into absurdity from the USA, a descent
into complete madness in my opinion. I think this was a 
mistake and I continue to oppose its spread and its
devastating effect in the US.

> what kind of "thing" (if any) would you say *is* patentable?

If you asked me 15 years ago I would have answered quickly
and confidently, "Inventions". Back then I thought I knew
what two things were for sure, patents and inventions.

You know: steam engines, nuclear reactors, Rubik cubes...

The idea of incentives to "promote innovation" seemed 
noble and socially just too.

Now I'm not so sure. 

For a start my political views have shifted. I don't see 
government coercion and protectionism as justified any more.
All institutions have lost credibility in the last decade
but particularly QANGOs that are in bed with the corporations
have revealed themselves as incompetent and rotten to the core.

Markets won't stomach the regulation necessary to avoid a multi-trillion
dollar global financial crisis and bank bailout, but we will happily
allow regulation of innovation and research, and allow people in some
countries to starve, to defend the opposite principle. No way!

I am guilty of not realising until quite recently that
the arguments against patents are real. Patents are 
killing people.  Right now there are people in the world dying
of diseases because they can't get medicine that could be made
by local chemists. All to defend an ideal that is supposed to
stimulate social progress.

As Ross points out, things have got much more complex. 
People have become overwhelmed by complexity and paralysed.
Injustice is everywhere, and yet the systems are too 
simplistic to withstand corruption and plummeting standards.

Are patents still useful and desirable in the 21st century?

And yes the question could be even broader, sure, is the whole
concept of intellectual property ready to be reexamined?

So, Ross, you are right, there are deeper, more troubling 
questions on my mind, but I'd like to keep focus on the
software patent problem if we can, because many of us are 
experts here and deserve to have something to say.

w/ respects,
andy
-- 
Andy Farnell 


Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-31 Thread Andy Farnell
On Tue, 1 Feb 2011 05:37:30 +1100
"Ross Bencina"  wrote:

> Hi Andy
> Can you clarify what you mean by "a computer program"? 

Something you could print out on a piece of paper,
that's all. Great if it would run :) And if you had a machine 
that could run it too :)

For all scenarios A, B, C and D you outline you would get
no protection (of your algorithm). The point seems to
keep slipping away from you, so let's nail it.
Under my regime you get protection for your design, 
but _not_ for your idea. If you choose to game the system
and call your idea the design you don't get to play at all.
You can call the design the idea of course. And this is a 
government run monopoly right, so remember, if you don't 
like it men with guns will mess you up.

To be charitable, I'd like to have you protected in case A,
and I think many courts hearing a copyright case with that
shape would find for you. Trivial translation shouldn't
get one off the hook for blatant plagiarism there. 

> Scenario: I invest 1000s of person-years devising a completely original 
> ultra-fast zero-latency convolution algorithm. I take 2 days to code it in 
> C.  I publish my new invention as copyrighted C code.

This is what bugs me Ross. Practically all such arguments are sublime, 
hypothetical, academic, intellectual. They are always based on some 
imagined injustice consequent to "if I had a great idea". If you write
programs and do DSP research you'll know very well that things never
work out that way. It's a complete straw man. You would never spend
thousands of hours on some abstraction, and then realise it in 2 days.
The code and the algorithm are intricately linked, as are the
design and the general form or theoretical model in all patents
which are interesting to study, like the steam engines for
example. At that point the code would contain your value
and any attempt to generalise it would be a land grab beyond what
you deserve. After all you haven't _implemented_ all possible
alternative writings, even if as a consequence of getting your code 
you have touched upon a model that implies them. You can't build a 
specific steam engine in isolation of the theoretical model of the 
general gas engine, but that doesn't get you the right to patent 
the latter because it fails a generality test.

In 1829 a machine known as Stephenson's Rocket travelled the
Liverpool-Manchester railway. Most people think of that name when
they see a steam railway engine. It was a synthesis of many existing
concepts, and new ones, brought together by hard physical and mental
labour with a concrete goal in mind. Crucially it was not a drawing, 
nor a written description of a thing, nor any mere representation of a 
thing, but it was a physical thing in itself. Crucially it was
not an attempt to lay claim to the body of theory or physical
principles that also allowed the work of Papin, Watt and Newcomen.
All those inventors obtained patents, and here the system drove 
forward innovation. Nobody tried to patent "the steam engine 
principle" as they would today and hold the entire industrial 
revolution to ransom.

all best,

Andy

-- 
Andy Farnell 





Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-31 Thread Andy Farnell

Hi Ross,

> Are you suggesting by stating the above axiom that algorithms are _simply_ 
> ideas and that for this reason alone they shouldn't be patentable?

Yes I am, you've got it. 

An algorithm is insufficiently concrete to deserve a patent; it is an
abstraction, a generalisation. 

An algorithm is not "performing in an active role as an executing
computer program", not any more than an imaginary line like the
equator can be used to tie up a bundle of sticks. 

It would have to become a computer program to do that.

At that point it would meet the requirements for copyright
which would be sufficient for its commercial protection.

cheers,
Andy


On Tue, 1 Feb 2011 02:19:50 +1100
"Ross Bencina"  wrote:

> Hi Andy
> 
> Andy Farnell wrote:
> > AXIOM: Ideas should not be patentable. Period.
> >
> > Do I need to explain this?
> 
> Sorry, you've lost me a bit here. Perhaps you do need to explain it.. see if 
> I'm twisting your words below or if you find that I'm addressing your 
> position (of course I don't expect you to agree with my argument):
> 
> Are you suggesting by stating the above axiom that algorithms are _simply_ 
> ideas and that for this reason alone they shouldn't be patentable? Is that 
> the basis of your objection to software patents? That the patent system 
> should only apply to mechanisms that operate solely in the world of atoms 
> (like a design for a spiral ham slicer)? and not to mechanisms that operate 
> solely or partially in the information domain (like a design for a particular 
> execution structure for partitioned convolution) -- in spite of the fact that 
> the information domain is now intimately interfaced as an active participant 
> in much human (economic/industrial) activity?
> 
> I can understand Knuth's criticism of the futility of trying to distinguish 
> between numerical and abstract-structural patentable concepts. But I can't 
> understand how you can equate _functional_ information structures (whether 
> algorithm or mathematical theorem) performing in an active role as an 
> executing computer program  with all other "ideas" and say "sorry, that's 
> off limits, not patentable." Given the intent of the patent system to grant 
> monopoly rights over novel inventions I fail to see how that's a valid 
> distinction to draw unless your real objection is to all patents and you're 
> just trying to keep them out of the software domain (and that is another 
> argument entirely).
> 
> Much human activity is now conducted in the world of bits and bytes. 
> Algorithms are functional mechanisms that operate in the world of bits and 
> bytes. Why shouldn't they be patentable? Simply saying "because they are 
> ideas" isn't an argument on its own. Why should we distinguish between a 
> mechanism that performs partitioned convolution by juggling coloured marbles 
> and one that performs partitioned convolution by switching bits?
> 
> A patent doesn't prohibit you from having the idea, thinking about an 
> algorithm, or using the patented thing in research (these are other common 
> things you do with "ideas"). I'm pretty sure you can also write books about 
> patented things, build new theories upon them, etc. A software patent does 
> place restrictions on use of that idea in its role as a concrete functional 
> information mechanism (e.g. in a computer system).
> 
> I'm beginning to think that your previously stated moral objections are more 
> concerned with the whole notion and structure of intellectual property as a 
> legal construct than they are with software patents in particular -- would 
> that be a reasonable characterisation? In that light a lot of your previous 
> statements make a lot more sense to me.
> 
> Ross.
> 


-- 
Andy Farnell 


Re: [music-dsp] damn patents (was New patent application on uniformly partitioned convolution) [OT]

2011-01-31 Thread Andy Farnell


On Sun, 30 Jan 2011 17:51:23 +1100
"Ross Bencina"  wrote:


> (Subject changed)

Apologies to those searching archives.

> _All_ patents are intellectual property 

That is true.

> (designs, ideas, inventions, whatever you 
> want to call them) whether they apply to software algorithms, mechanical 
> mechanisms, chemical processes etc.

No. Here is where you go wrong. You are launching from a false premise 
that assumes my objection. What I want to call them is significant.
You can't dismiss that with "whatever". Different words have 
different meanings, and different meanings require different interpretations.

AXIOM: Ideas should not be patentable. Period. 

Do I need to explain this? 


On Sun, 30 Jan 2011 17:51:23 +1100
"Ross Bencina"  wrote:

> Hi Andy
> 
> I wish I were worthy of quoting Blaise Pascal here, but instead I will just 


-- 
Andy Farnell 


Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-31 Thread Andy Farnell

Hey Nigel,

Spirally cut ham, hmmm. Now if I just had spirally cut cheese
and a spirally cut loaf...  :)

Well done to that man. Nothing beats "just going out and doing it". 

The syndrome in Case 2 is popular with kids: one
screws up his face at ice-cream... until he sees his brother
enjoying one, and now all of a sudden he wants one too.

Deferential, approval seeking, risk averse behaviour 
 =  colloquially "bottom feeding".

On Sat, 29 Jan 2011 14:54:28 -0800
Nigel Redmon  wrote:

> Good point, Andy. And I've heard historical examples of how 

-- 
Andy Farnell 


Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-29 Thread Andy Farnell

In a way I think this is the more interesting case Nigel, even
though it is less "weighty" than the medical example.

Were it not for the fact that one look at the
thing would reveal its entirety, the small development
company could always write a contract with the big
distributor. 

In a search for alternatives to patents and to paying lawyers $40k
for a process taking years, can't we imagine an "evaluation" 
process involving some kind of NDAs and trusted expert agents
acting for both parties?

a.



On Sat, 29 Jan 2011 13:36:13 -0800
Nigel Redmon  wrote:

> Now, on the less important-to-society front: A company comes up with an idea 
> for tactile feedback in video games. This sort of thing is immediately 
> obvious once demonstrated, so trade secrets are of no use. There is nothing 
> especially difficult about this technique they developed, but they did come 
> up with a really successful application of the idea, because it's cheap to 
> implement, is an easy feature for game developers to include support for, and 
> game players take to it right away and will pay extra for controllers using 
> it.
> 
> Video games machines these days are controlled by a few large players, and if 
> they want to incorporate this technique themselves, the original developing 
> company would get nothing. Of course, some would consider this as 
> unimportant, and having nothing to do with progress in the greater sense, but 
> others might consider this legally-aided fairness that would otherwise 
> probably not play out fairly in the market place.

-- 
Andy Farnell 


Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-29 Thread Andy Farnell



Hi Ross

I think it has a bearing on all of us too. And thus you lure
me in. But if people complain that this is getting boring, 
off-topic or ill-natured then let's quit it. 

All I can offer you is my opinions, I can't help you with
your misunderstandings about the nature of reality. A study of
Shannon and Bohr might help you disentangle information
from atoms.

Straight up, I'm confused as to whether you support software patents
(if you want me to correct your misunderstandings) or whether
you want to help reform them, which implies that you don't.
At the risk of falling off your pragmatic fence perhaps you
could lean over enough for me to see which way you're pitching.

Legal arguments do not interest me since that just begs the question.
The problem _is_ a legal interpretation, therefore the Law sets the
conditions for what counts as a fact. I am not interested in trying
to use either reasoned or moral arguments on that wonky playing field.
I know how that game goes, it's like arguing with creationists.

That we live in a complex world is not sufficient reason for me
to abandon moral principles, or aspire to creating a less complex
and fairer world through reason. Simplicity suits me fine as
a computer scientist, it is beauty and elegance, and I am always
suspicious of those who muddy the waters to appear deep or
complicate things to make themselves necessary.

As you say I do not agree with the socio-economic argument (as you 
attribute to Nigel). I think it's a worn out old straw man. I do 
agree with the incompetence of the examiners, but that is secondary
to my objection to software patents on moral grounds.

I despise argument by appeal to authority, but there is no greater
living authority on computer programs than this man and I urge
you to read this letter, and perhaps respond to it with a little 
less levity than you did to the last link I gave to help you.

http://www.pluto.it/files/meeting1999/atti/no-patents/brevetti/docs/knuth_letter_en.html

If you have no hope to correct the patent system then you must
surely abandon hope of saving rainforests. They are the same. They
are both about land grabs; the appropriation of what belongs to everybody
by a group for themselves whatever the cost to humanity at large, based 
upon some perceived right backed up by violence. It is not acceptable in 
the 21st century. Well, it was never acceptable, but now the stakes are possibly
the future of the species itself. You cannot selectively fight injustices, 
you must fight all injustice, which means having principles. Pragmatism 
won't do. Pragmatism is _why_ we are in this shit. In my humble opinion.

Andy


On Sun, 30 Jan 2011 02:43:26 +1100
"Ross Bencina"  wrote:

> Hi Andy
> 
> Andy Farnell wrote:
> > I don't want to open up a lengthy OT debate here. But
> > will reply privately to address some of your points in detail.
> 
> Fair enough. I guess the main reason I bought into this conversation is that 
> I do feel like it's something that affects all of us here and I'm interested 


-- 
Andy Farnell 

