[music-dsp] http://musicdsp.org/

2018-11-27 Thread Thomas Young
http://musicdsp.org/ seems to be down, does anyone know if the webmaster can be 
contacted to fix it?

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] [admin] list etiquette

2015-08-28 Thread Thomas Young
Thank you Douglas. Some of the toxic replies here have really harmed the 
friendly spirit and approachability of this mailing list.


-Original Message-
From: music-dsp [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of 
Douglas Repetto
Sent: 22 August 2015 16:22
To: A discussion list for music-related DSP
Subject: [music-dsp] [admin] list etiquette

Hi everyone, Douglas the list admin here.

I've been away and haven't really been monitoring the list recently.
It's been full of bad feelings, unpleasant interactions, and macho posturing. 
Really not much that I find interesting. I just want to reiterate a few things 
about the list.

I'm loath to make or enforce rules. But the list has been pretty much useless 
for the majority of subscribers for the last year or so. I know this because 
many of them have written to complain. It's certainly not useful to me.

I've also had several reports of people trying to unsubscribe other people and 
other childish behavior. Come on.

So:

* Please limit yourself to two well-considered posts per day. Take it off list 
if you need more than that.
* No personal attacks. I'm just going to unsub people who are insulting. Sorry.
* Please stop making macho comments about "first year EE students know this" 
and blahblahblah. This list is for anyone with an interest in sound and dsp. No 
topic is too basic, and complete beginners are welcome.

I will happily unsubscribe people who find they can't consistently follow these 
guidelines.

The current list climate is hostile and self-aggrandizing. No beginner, gentle 
coder, or friendly hobbyist is going to post to such a list. If you can't help 
make the list friendly to everyone, please leave. This isn't the list for you.


douglas

___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] [admin] another HTML test

2013-11-11 Thread Thomas Young
Thanks Doug :)



-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of douglas repetto
Sent: 11 November 2013 17:19
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] [admin] another HTML test

Okay, it seems to be working. You can now post to the list in HTML but your 
message will be converted to plain text. This seems like a good first step to 
accommodate people who need plain text and people for whom sending plain text 
is a pain. I tried to respond to some message from my phone while I was out of 
town and found that it's impossible to send plain text email that way!

best,

douglas

On 11/11/13 12:16 PM, douglas repetto wrote:

>

> Hi Dave,

>

> I have the list set to convert HTML mail to plain text. So it's a good

> sign that the email went through. I'm composing this in red text and

> changing fonts around. That should all disappear when this message

> arrives...

>

> Here's a list:

>

> 1. * taco

> 2. * truck

>

>

>

> Wheee!

>

>

>



--
... http://artbots.org
.douglas.irving http://dorkbot.org
.. http://music.columbia.edu/cmc/music-dsp
...repetto. http://music.columbia.edu/organism
... http://music.columbia.edu/~douglas

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] family of soft clipping functions.

2013-10-29 Thread Thomas Young
This reminds me of experimenting with polynomials as an amplitude enveloping 
function for a soft synthesiser. There was something rather alluring about the 
idea of a one-line-of-code amplitude envelope - unfortunately it made creating 
the envelopes pretty tiresome, and when I thought about the performance I 
realised it was probably slower to do that maths than the few conditionals you 
would use for a standard piecewise envelope. So basically a rubbish idea unless 
you have some strange need to have your amplitude envelope be a single line of 
code :I


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 29 October 2013 01:56
To: music-dsp@music.columbia.edu
Subject: [music-dsp] family of soft clipping functions.


at the last AES in NYC, i was talking with some other folks (that likely hang 
out here, too) about this family of soft clipping curves made outa polynomials 
(so you have some idea of how high in frequency any generated images will 
appear).

these are odd-order, odd-symmetry polynomials that are monotonic over -1 < x < 
+1 and have as many continuous derivatives as possible at +/- 1, where these 
curves might be spliced to constant-valued rails.

the whole idea is to integrate the even polynomial  (1 - x^2)^N

              x
   g(x)  =  integral{ (1 - v^2)^N dv }
              0

you figger this out using binomial expansion and integrating each power term.

normalize g(x) with whatever g(1) is so that the curve is g(x)/g(1) and splice 
that to two constant functions for the rails

            { -1          for  x <= -1
            {
   f(x)  =  { g(x)/g(1)   for  -1 <= x <= +1
            {
            { +1          for  +1 <= x


you can "hard limit" (at +/- 1) before passing through this soft clipper and it 
still works fine.  but it has some gain in the "linear" region which is g(0).
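
for a concrete case (worked out here, easy to verify by hand): with N = 1,

   g(x)  =  x - (1/3)*x^3        g(1)  =  2/3

so the normalized curve is

   f(x)  =  (3*x - x^3)/2        for  |x| <= 1

the familiar cubic soft clipper, with small-signal gain f'(0) = 1/g(1) = 3/2.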


if you want to add some "even harmonic distortion" to this, add a little bit of

 (1 - x^2)^M

to f(x) for |x| < 1 and it's still smooth everywhere, but there is a little DC 
added (which has to be the case for even-symmetry distortion).  M does not have 
to be the same as N and i wouldn't expect it to be.



you can think of f(x) as a smooth approximation of the "sign" or "signum" 
function

   sgn(x)  =    lim    f(a*x)
              a -> +inf

or

   sgn(x)  =    lim    f(x)
              N -> +inf

which motivates using this as a smooth approximation of the sgn(x) function, 
as an alternative to (2/pi)*arctan(a*x) or tanh(a*x).  from the sgn(x) 
function, you can create smooth versions of the unit step function and use 
that for splicing.  as many derivatives are continuous in the splice as 
possible, and it satisfies conditions of symmetry and complementarity that 
are useful in our line of work.

  u(x)  =  1/2 * (1 + sgn(x))  =approx   1/2*(1 + f(x))

you can run a raised cosine (Hann) through this and get a more flattened 
Hann.  in some old stuff i wrote, i dubbed this window:


  w(x)  =  1/2  +  (9/16)*cos(pi*x) - (1/16)*cos(3*pi*x)


as the "Flattened Hann Window" but long ago Carla Scaletti called it the 
"Bristow-Johnson window" in some Kyma manual.  i don't think it deserves 
that label (i've seen that function in some wavelet/filterbank lit since 
for half-band filters).  you get that window by running a simple Hann 
through the biased waveshaper:

1/2 * ( 1 + f(2x-1) )

with N=1.  you will get an even more pronounced effect (of smoothly 
flattening the Hann) with higher N.
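
a quick check of that claim (my arithmetic, assuming hann(x) = (1 + cos(pi*x))/2):
the bias maps the window to  2*hann(x) - 1  =  cos(pi*x)  =:  u,  and with N=1

   1/2 * (1 + f(u))  =  1/2 + (3*u - u^3)/4

using the identity  cos^3(t)  =  ( 3*cos(t) + cos(3*t) )/4  this becomes

   1/2 + (9/16)*cos(pi*x) - (1/16)*cos(3*pi*x)

which is exactly w(x) above.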

below is a matlab file that demonstrates this as a soft clipping function.

BTW, Olli N, this can be integrated with that "splicing theory" thing we 
were talking about approximately a year ago.  it would define the 
"odd-symmetry" component.  we should do an AES paper about this.  i 
think now there is nearly enough "meat" to make a decent paper.  before 
i didn't think so.




  FILE:  softclip.m

line_color = ['g' 'c' 'b' 'm' 'r'];

figure;
hold on;

x = linspace(-2, 2, 2^18 + 1);

x = max(x, -1); % hard clip for |x|>  1
x = min(x, +1);

for N = 0:10

n = linspace(0, N, N+1);

a = (-1).^n ./ (factorial(N-n) .* factorial(n) .* (2*n + 1));

a = a/sum(a);

    y = x .* polyval(fliplr(a), x.^2);  % stupid MATLAB puts the coefs in the wrong order

plot(x, y, line_color(mod(N,length(line_color))+1) );

end

hold off;





have fun with it, if you're so inclined.

L8r,

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] "Analog" 24dB lowpass filter emulation perfected. (Roland, Oberheim etc)

2013-10-28 Thread Thomas Young
Interesting, thanks

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Ove Karlsen
Sent: 28 October 2013 11:57
To: music-dsp@music.columbia.edu
Subject: [music-dsp] "Analog" 24dB lowpass filter emulation perfected. (Roland, 
Oberheim etc)

Peace Be With You.

I have perfected the "analog" 24dB lowpass filter in digital form.

It is a generalized and fast implementation without obscurities. It takes from 
analog simply what is good, and is otherwise the perfect implementation one 
would expect from a digital filter.

I wrote about it here: http://ovekarlsen.com/Blog/abdullah-filter/

Those interested should read it; there is a sound example.

Peace Be With You.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] more stupid questions

2013-06-17 Thread Thomas Young
- Make a new project (File->New Project)
- Select "Empty Project" under 'Visual C++'
- In the project properties go to Linker->SubSystem and pick "Console" (you can 
also select posix)
- Add a new cpp/c file to the project
- Write your c/c++
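
A minimal standard program (my example, nothing Windows specific assumed) that 
should then build and run in that console project:

#include <iostream>

int main()
{
    // plain standard C++, no Win32 headers or WinMain required
    std::cout << "hello, plain console build\n";
    return 0;
}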

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Sampo Syreeni
Sent: 14 June 2013 19:50
To: music-dsp list
Subject: [music-dsp] more stupid questions

Tell me, is there a way to make Visual Studio behave like standard C/C++ under 
POSIX, under Windows? Like, that you just get a normal main() and the normal 
libraries in the normal way?

I mean, I'm more than capable and *much* more than willing to do it the 
conventional way, no matter how far into K&R it reaches, but for some reason 
Visual Studio with its Win32 libraries doesn't play game. It just fucking won't 
give me even a standard main() prototype and it insists on weird Windows 
specific inclusions from the start.

If you can circumvent that stuff, how do you do that? And if you can't, how 
precisely do they get to claim they're even *half* POSIX compliant? 
(It'd help to know the precise rationale, because their explanation would 
prolly spell out all of the stuff I'm asking about, here.)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] My latest computer DSP signal pathfor audioimprovements

2013-05-03 Thread Thomas Young
> apparently that is more emotional and personal for some people to be able to 
> neutrally communicate about

This from the man who wants to recapture the sound of records from his lost 
youth :P

I think Sampo had a lot of good points, personally. I would not dismiss the 
value of simplification; your existing audio setup is bafflingly complicated.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 03 May 2013 00:12
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] My latest computer DSP signal pathfor audioimprovements

Sampo Syreeni wrote:
> On 2013-05-02, Rob Belcham wrote:
>
>> Agreed. There is a lot more low end in the processed but the loss of 
>> the high frequencies is distracting.
> ...


Well, I wanted to make clear that modern mixes on consumer materials are often 
messed up, and discuss that. If your monitoring makes all the CDs and well 
known blurays and such sound fine to your ears, fine.

I'm a heavily enough edified univ. EE with serious theoretical physics 
knowledge, and well developed practical skills, so I don't feel I need new 
theories, when the 1st and 2nd year Electrical Engineering university years 
offer a complete, mathematically closing, and theoretically accurate theory for 
electrical network theory, signal processing, information theory and such. I 
mean I know what an FFT will do with a "normal" electrical signal transient, 
what poles and zeros are and do, and what the difference is between a control 
loop and a filter+delay, and that such systems have certain mathematical 
properties.

I don't know about you, but I had absolutely great records and radio to listen 
to as a kid and teenager, and I can't stand the garbage that nowadays is made 
of the materials of good, serious musicians (which I am too), so that's what I 
wanted to discuss, and to an extent offer solutions for, but apparently that is 
more emotional and personal for some people to be able to neutrally communicate 
about.

Your mathematically oriented language doesn't do justice to the basis of EE, 
which is much harder than you appear to think; those different signal paths can 
be followed, I think, by audio engineers with some experience (and have, to an 
extent), and there is such a thing as "loudness" perception and studio 
processing that can work with it. So apart from some "new" boys and girls 
wanting to use music for power instead of enjoyment, there's a whole world of 
studio processing, and some of that I'm quite accurately aware of, so maybe 
that's interesting for some people to hear about.

I suppose if I'd put a schematic of a digital TV before you, you might be 
tempted to think it's a mess, but hey...

T.V.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] basic trouble with signed and unsigned types

2013-05-02 Thread Thomas Young
As someone pointed out what you really want here is c++ templates, those 
compile out completely and can be specialised for different types - which you 
need for speed.
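
A minimal sketch of what I mean (my illustration, assuming C++11's 
<type_traits>) - for the right-shift case, cast to the matching unsigned type 
so the shift is guaranteed zero-filling, and let the compiler specialise per 
type:

#include <type_traits>

// Right shift without sign extension, for any built-in integer type.
// std::make_unsigned picks the unsigned counterpart of T, so the shift
// is a logical (zero-fill) shift regardless of T's signedness.
template <typename T>
T logicalShiftRight(T value, unsigned bits)
{
    typedef typename std::make_unsigned<T>::type U;
    return static_cast<T>(static_cast<U>(value) >> bits);
}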

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Sampo Syreeni
Sent: 01 May 2013 23:08
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] basic trouble with signed and unsigned types

On 2013-05-01, Bjorn Roche wrote:

> Not sure I completely understand what you are after, but I think most 
> audio folks don't use types like int, long and short, but rather types 
> like int32_t from types.h (or sys/types.h). There are various 
> guarantees made about the sizes of those types that you can look up.

Correct, and for now I'm doing something similar by calculating all of my 
widths, maximum values and whatnot beforehand into #defines. In the end I'd 
like an implementation that is fully parametrizable, though, and with as few 
adjustable parameters as at all possible. That's because while the idea I'm 
trying to put into code is primarily useful wrt audio, the basic framework of 
dithering I'm working within admits plenty of environments beyond *just* that. 
So, ideally, I'd want to make the code "just work", even within certain highly 
unusual environments.

(Say, you receive a single bit integer type from the outside which you have to 
use to process your samples; there's a way to go towards that but it might 
eventually happen. And, by the way, if you've ever done anything with DSD/SACD, 
treating the 1-bit channel as a modulo-linear one actually seems to make it too 
perfectly ditherable, breaking the Lipshitz and Vanderkooy critique. Thus, 
audio relevance even here. :)

What I'm after is ways of reimplementing at least right shifts for unsigned 
quantities within standard C99, so that my new operator is fully portable and 
guaranteedly implements right shifts without sign extension, regardless of what 
integer type you use them on, and without any parametric polymorphism or 
reference to built-in constants describing your types at the meta level.

There are bound to be other problems I have to worry about in the future, like 
how to deal with the possibility of one's complement arithmetic where I'm now 
taking parities. And whatnot. But at least the right shift would help me a lot 
right now.

> Also, I assume you've given C++ templates and operator overloading 
> consideration.

Yes. Immediately after I wanted to die. ;)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-50-5756111, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] My latest computer DSP signal path for audioimprovements

2013-05-01 Thread Thomas Young
There is quite an audible loss of high end; it's especially noticeable on the 
'double processed' example, which sounds very low-passed. Is that intentional?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 01 May 2013 17:57
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] My latest computer DSP signal path for 
audioimprovements

Rob Belcham wrote:
> Hi Theo,
>
> Some wet / dry example audio would be interesting to hear.
>
> Cheers
> Rob
> ..

OK, that should be reasonable.

For this limited club of people (a little over a hundred worldwide?) I suppose 
there's no real problem with a few very short (7 secs) examples from varied 
musical sources:

http://www.theover.org/Musicdspexas/dsptst1.mp3

BEWARE: it's a nice few examples, but it isn't produced very nicely because I 
forgot to turn off a 10 dB boost + perfect limiter (1.2 sec) in the recording...

I didn't want to start the whole startup and mic mixing again, so beware: it 
sounds a bit radio-compressed, but the idea is still ok-ish.

For people who are interested in playing/learning/??? with my actual signal 
path, by now I have a complete set of tcl scripts to start the particular chain 
settings up, completely automatically, with settings from the last example, and 
all the jack-rack and other files, which I can make downloadable per 
individual. You'll need a strong Linux machine with a 192kHz soundcard and the 
ladspa plugins, jamin and jack installed.

Greetings

  Theo V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] My latest computer DSP signal path for audioimprovements

2013-05-01 Thread Thomas Young
Link doesn't appear to be working

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 01 May 2013 17:57
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] My latest computer DSP signal path for 
audioimprovements

Rob Belcham wrote:
> Hi Theo,
>
> Some wet / dry example audio would be interesting to hear.
>
> Cheers
> Rob
> ..

OK, that should be reasonable.

For this limited club of people (a little over a hundred worldwide?) I suppose 
there's no real problem with a few very short (7 secs) examples from varied 
musical sources:

http://www.theover.org/Musicdspexas/dsptst1.mp3

BEWARE: it's a nice few examples, but it isn't produced very nicely because I 
forgot to turn off a 10 dB boost + perfect limiter (1.2 sec) in the recording...

I didn't want to start the whole startup and mic mixing again, so beware: it 
sounds a bit radio-compressed, but the idea is still ok-ish.

For people who are interested in playing/learning/??? with my actual signal 
path, by now I have a complete set of tcl scripts to start the particular chain 
settings up, completely automatically, with settings from the last example, and 
all the jack-rack and other files, which I can make downloadable per 
individual. You'll need a strong Linux machine with a 192kHz soundcard and the 
ladspa plugins, jamin and jack installed.

Greetings

  Theo V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] measuring the difference

2013-03-07 Thread Thomas Young
Your mean square error procedure is slightly incorrect. You should take the 
final signals from both processes, say A[1..n] and B[1..n], and subtract them 
to get your error signal E[1..n]; the mean square error is then the sum of the 
squared errors divided by n:

Sum( E[1..n]^2 ) / n

This (MSE) is a statistical approach though and isn't necessarily a great way 
of measuring perceived acoustical differences.

It depends on the nature of your signal but you may want to check the error in 
the frequency domain (weighted to specific frequency band if appropriate) 
rather than the time domain.
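
For concreteness, a minimal sketch of the above in C++ (my illustration, with 
hypothetical equal-length buffers a and b):

#include <cmath>
#include <cstddef>
#include <vector>

// mean square error:  Sum( (A[i] - B[i])^2 ) / n
double meanSquaredError(const std::vector<double>& a,
                        const std::vector<double>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        const double e = a[i] - b[i];
        sum += e * e;
    }
    return sum / static_cast<double>(a.size());
}

// the same figure on a dB scale: 10*log10 (not 20*) because the error
// values are already squared
double meanSquaredErrorDb(const std::vector<double>& a,
                          const std::vector<double>& b)
{
    return 10.0 * std::log10(meanSquaredError(a, b));
}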

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of volker böhm
Sent: 07 March 2013 15:10
To: A discussion list for music-related DSP
Subject: [music-dsp] measuring the difference

dear all,

i'm trying to measure the difference between two equivalent but not identical 
processes.
right now i'm feeding some test signals to both algorithms at the same time and 
subtracting the output signals.

now i'm looking for something to quantify the error signal.
from statistics i know there is something like the "mean squared error".
so i'm squaring the error signal and taking the (running) average.

mostly i'm getting some numbers very close to zero and a gut feeling tells me i 
want to see those on a dB scale.
so i'm taking the logarithm and multiplying by 10, as i have already squared 
the values before.
(as far as i can see, this is equivalent to an RMS measurement).

is there a correct/better/preferred way of doing this?

next to a listening test, in the end i want to have a simple measure of the 
difference between the two processes which is close to our perception of the 
difference. does that make sense?

thanks for any comments,
volker.



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Thesis topic on procedural-audio in video games?

2013-03-05 Thread Thomas Young
Excellent candidates for procedural audio in games:

 - Rain

That is all.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Alan Wolfe
Sent: 05 March 2013 16:24
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Thesis topic on procedural-audio in video games?

Howdy!

I think kkrieger, the 96KB first person shooter uses procedural audio:
http://www.youtube.com/watch?v=KgNfqYf_C_Q

i work in games myself and was recently talking about procedural sound effects 
with an audio engineer (the non-programming type of engineer) who has a couple 
decades of experience.  He was saying that he hasn't seen anything that 
great in this respect other than footstep sounds and sometimes explosions.  If 
you think about it, that makes a lot of sense because even in synthesized music 
(or MIDI let's say), the only stuff that really sounds that realistic is 
percussion.

That being said, FM synthesis is kind of magical and can make some interesting 
and even realistic sounds :P

my 2 cents!

On Tue, Mar 5, 2013 at 1:35 AM, David Olofson  wrote:
> Well, I've been doing a bit of that (pretty basic stuff so far, aiming 
> at a few kB per song) - but all I have is code; two Free/Open Source 
> engines; that are used in two games (all music and sound effects) I'm 
> working on. No papers, and unfortunately, not much documentation yet 
> either.
>
>
> Audiality
> (The latest release is not currently online, though the unnamed 
> "official" version is part of Kobo Deluxe: http://kobodeluxe.com/)
>
> The old Audiality is all off-line modular synthesis and a simple 
> realtime sampleplayer driven by a MIDI sequencer. No "samples" - it's 
> all rendered at load time from a few kB of scripts.
>
> Gameplay video of Kobo Deluxe (sound effects + music):
> http://youtu.be/C9wO_T_fOvc
>
> Some ancient Audiality examples (the latter two from Kobo Deluxe):
> http://olofson.net/music/a1-atrance2.mp3
> http://olofson.net/music/a1-trance1.mp3
> http://olofson.net/music/a1-ballad1.mp3
>
>
> ChipSound/Audiality 2
> http://audiality.org/
>
> Audiality 2 is a full realtime synth with subsample accurate 
> scripting. The current version has modular voice structures (resonant 
> filters, effects etc), but the Kobo II songs so far are essentially
> 50-100 voice chip music, using only basic geometric waveforms and 
> noise - mono, no filters, no effects, no samples, nothing 
> pre-rendered.
>
> ChipSound/Audiality 2 (the Kobo II tracks and the A2 jingle):
> https://soundcloud.com/david-olofson
>
>
> David
>
> On Tue, Mar 5, 2013 at 9:08 AM, Danijel Domazet 
>  wrote:
>> Hi mdsp,
>> We need a masters thesis topic on procedural audio in video games. 
>> Does anyone have any good ideas? It would be great if we could 
>> afterwards continue developing this towards a commercial products.
>>
>> Any advice most welcome.
>>
>> Thanks!
>>
>> Danijel Domazet
>> LittleEndian.com
>>
>>
>> --
>> dupswapdrop -- the music-dsp mailing list and website:
>> subscription info, FAQ, source code archive, list archive, book 
>> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
>> http://music.columbia.edu/mailman/listinfo/music-dsp
>>
>
>
>
> --
> //David Olofson - Consultant, Developer, Artist, Open Source Advocate
>
> .--- Games, examples, libraries, scripting, sound, music, graphics ---.
> |   http://consulting.olofson.net  http://olofsonarcade.com   |
> '-'
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-07 Thread Thomas Young
Ah, I wondered what you meant about those parentheses!

The better efficiency comes from avoiding all the divisions (I don't normalise 
out a0 in my implementation) - it's a pretty trivial performance difference in 
reality, but the coefficient equations are a little bit neater as well - less of 
those confusing parentheses ;)
 
-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 04 January 2013 20:59
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

On 1/4/13 1:29 PM, Thomas Young wrote:
> Er.. yes sorry I transcribed it wrong, well spotted
>
> a1: ( - 2 * cos(w0) ) * A^2
>

ooops!  i'm embarrassed!  i was thinking it was -2 + cos(), sorry!

can't believe it.  chalk it up to being a year older and losing one year's 
portion of brain cells.

> So yea it's the same, I just rearranged it a bit for efficiency

dunno what's more efficient before normalizing out a0, but if it's after, you 
change only the numerator feedforward coefs and you would multiply them by the 
reciprocal of the linear peak gain, 1/A^2.

sorry, that parenth thing must be early onset alzheimer's.

bestest,

r b-j

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
> bristow-johnson
> Sent: 04 January 2013 18:25
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
> response
>
> On 1/4/13 1:11 PM, Thomas Young wrote:
>>> someone tell me what it was about
>> In a nutshell...
>>
>> Q: What is the equation for the coefficients of a peaking EQ biquad filter 
>> with constant 0 dB peak gain?
>>
>> A: This (using cookbook variables):
>>
>> b0: 1 + alpha * A
>> b1: -2 * cos(w0)
>> b2: 1 - alpha * A
>> a0: A^2 + alpha * A
>> a1: - 2 * cos(w0) * A^2    <--- are you sure you don't need parenths here?
>> a2: A^2 - alpha * A
>>
>> dragged over a lot of emails ;)
> yeah, i guess that's the same as
>
> b0: (1 + alpha * A)/A^2
b1: (-2 * cos(w0))/A^2
> b2: (1 - alpha * A)/A^2
> a0: 1 + alpha / A
> a1: -2 * cos(w0)
> a2: 1 - alpha / A
>
> if you put in the parenths where you should, i think these are the same.
>
> r b0j
>
>> -Original Message-
>> From: music-dsp-boun...@music.columbia.edu
>> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
>> bristow-johnson
>> Sent: 04 January 2013 17:58
>> To: A discussion list for music-related DSP
>> Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
>> response
>>
>>
>>
>> looks like i came here late.  someone tell me what it was about.
>> admittedly, i didn't completely understand from a cursory reading.
>>
>> the only difference between the two BPFs in the cookbook is that of a 
>> constant gain factor.  in one the peak of the BPF is always at zero dB.
>> in the other, if you were to project the asymptotes of the "skirt" of the 
>> freq response (you know, the +6 dB/oct line and the -6 dB/oct line), they 
>> will intersect at the point that is 0 dB and at the resonant frequency.  
>> otherwise same shape, same filter.
>>
>> the peaking EQ is a BPF with gain A^2 - 1 with the output added to a wire.  
>> and, only on the peaking EQ, the definition of Q is fudged so that it 
>> continues to be related to BW in the same manner and so the cut response 
>> exactly undoes a boost response for the same dB, same f0, same Q.  nothing 
>> more to it than that.  no Orfanidis subtlety.
>>
>> if the resonant frequency is much less than Nyquist, then there is even 
>> symmetry of magnitude about f0 (on a log(f) scale) for BPF, notch, APF, and 
>> peaking EQ.  for the two shelves, it's odd symmetry about f0 (if you adjust 
>> for half of the dB shelf gain).  the only difference between the high shelf 
>> and low shelf is a gain constant and flipping the rest of the transfer 
>> function upside down.  this is the case no matter what the dB boost is or 
>> the Q (or "S").  if f0 approaches Fs/2, then that even or odd symmetry gets 
>> warped from the BLT and ain't so symmetrical anymore.
>>
>> nothing else comes to mind.
>>
>


-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
Er.. yes sorry I transcribed it wrong, well spotted

a1: ( - 2 * cos(w0) ) * A^2

So yea it's the same, I just rearranged it a bit for efficiency

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 04 January 2013 18:25
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

On 1/4/13 1:11 PM, Thomas Young wrote:
>> someone tell me what it was about
> In a nutshell...
>
> Q: What is the equation for the coefficients of a peaking EQ biquad filter 
> with constant 0 dB peak gain?
>
> A: This (using cookbook variables):
>
> b0: 1 + alpha * A
> b1: -2 * cos(w0)
> b2: 1 - alpha * A
> a0: A^2 + alpha * A
> a1: - 2 * cos(w0) * A^2    <--- are you sure you don't need parenths here?
> a2: A^2 - alpha * A
>
> dragged over a lot of emails ;)

yeah, i guess that's the same as

b0: (1 + alpha * A)/A^2
b1: (-2 * cos(w0))/A^2
b2: (1 - alpha * A)/A^2
a0: 1 + alpha / A
a1: -2 * cos(w0)
a2: 1 - alpha / A

if you put in the parenths where you should, i think these are the same.

r b0j

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
> bristow-johnson
> Sent: 04 January 2013 17:58
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
> response
>
>
>
> looks like i came here late.  someone tell me what it was about.
> admittedly, i didn't completely understand from a cursory reading.
>
> the only difference between the two BPFs in the cookbook is that of a 
> constant gain factor.  in one the peak of the BPF is always at zero dB.
> in the other, if you were to project the asymptotes of the "skirt" of the 
> freq response (you know, the +6 dB/oct line and the -6 dB/oct line), they 
> will intersect at the point that is 0 dB and at the resonant frequency.  
> otherwise same shape, same filter.
>
> the peaking EQ is a BPF with gain A^2 - 1 with the output added to a wire.  
> and, only on the peaking EQ, the definition of Q is fudged so that it 
> continues to be related to BW in the same manner and so the cut response 
> exactly undoes a boost response for the same dB, same f0, same Q.  nothing 
> more to it than that.  no Orfanidis subtlety.
>
> if the resonant frequency is much less than Nyquist, then there is even 
> symmetry of magnitude about f0 (on a log(f) scale) for BPF, notch, APF, and 
> peaking EQ.  for the two shelves, it's odd symmetry about f0 (if you adjust 
> for half of the dB shelf gain).  the only difference between the high shelf 
> and low shelf is a gain constant and flipping the rest of the transfer 
> function upside down.  this is the case no matter what the dB boost is or the 
> Q (or "S").  if f0 approaches Fs/2, then that even or odd symmetry gets 
> warped from the BLT and ain't so symmetrical anymore.
>
> nothing else comes to mind.
>


-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
> someone tell me what it was about

In a nutshell...

Q: What is the equation for the coefficients of a peaking EQ biquad filter with 
constant 0 dB peak gain?

A: This (using cookbook variables):

b0: 1 + alpha * A
b1: -2 * cos(w0)
b2: 1 - alpha * A
a0: A^2 + alpha * A
a1: - 2 * cos(w0) * A^2
a2: A^2 - alpha * A

dragged over a lot of emails ;)
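
In code, that boils down to something like this (my transcription of the list 
above, using the usual cookbook intermediates A = 10^(dBgain/40), 
w0 = 2*pi*f0/Fs, alpha = sin(w0)/(2*Q)):

#include <cmath>

struct BiquadCoeffs
{
    double b0, b1, b2;
    double a0, a1, a2;   // unnormalized; divide through by a0 to apply
};

// peaking EQ with the peak pinned at 0 dB (the skirt sits at -dBgain)
BiquadCoeffs zeroDbPeakEq(double f0, double fs, double q, double dBgain)
{
    const double pi    = 3.14159265358979323846;
    const double A     = std::pow(10.0, dBgain / 40.0);
    const double w0    = 2.0 * pi * f0 / fs;
    const double alpha = std::sin(w0) / (2.0 * q);
    const double c     = std::cos(w0);

    BiquadCoeffs k;
    k.b0 = 1.0 + alpha * A;
    k.b1 = -2.0 * c;
    k.b2 = 1.0 - alpha * A;
    k.a0 = A * A + alpha * A;     // cookbook denominator scaled by A^2
    k.a1 = -2.0 * c * A * A;
    k.a2 = A * A - alpha * A;
    return k;
}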

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 04 January 2013 17:58
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response



looks like i came here late.  someone tell me what it was about.  
admittedly, i didn't completely understand from a cursory reading.

the only difference between the two BPFs in the cookbook is that of a constant 
gain factor.  in one the peak of the BPF is always at zero dB.  
in the other, if you were to project the asymptotes of the "skirt" of the freq 
response (you know, the +6 dB/oct line and the -6 dB/oct line), they will 
intersect at the point that is 0 dB and at the resonant frequency.  otherwise 
same shape, same filter.

the peaking EQ is a BPF with gain A^2 - 1 with the output added to a wire.  
and, only on the peaking EQ, the definition of Q is fudged so that it continues 
to be related to BW in the same manner and so the cut response exactly undoes a 
boost response for the same dB, same f0, same Q.  nothing more to it than that. 
 no Orfanidis subtlety.
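
in symbols (my paraphrase, with bpf(x) standing for the output of the 
constant-0-dB-peak-gain BPF):

   peakingEQ(x)  =  x  +  (A^2 - 1) * bpf(x)

at the peak the gain is 1 + (A^2 - 1) = A^2, and away from the peak, where the 
BPF dies out, it falls back to unity.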

if the resonant frequency is much less than Nyquist, then there is even 
symmetry of magnitude about f0 (on a log(f) scale) for BPF, notch, APF, and 
peaking EQ.  for the two shelves, it's odd symmetry about f0 (if you adjust for 
half of the dB shelf gain).  the only difference between the high shelf and low 
shelf is a gain constant and flipping the rest of the transfer function upside 
down.  this is the case no matter what the dB boost is or the Q (or "S").  if 
f0 approaches Fs/2, then that even or odd symmetry gets warped from the BLT and 
ain't so symmetrical anymore.

nothing else comes to mind.

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





On 1/4/13 11:23 AM, Nigel Redmon wrote:
> Great!
>
> On Jan 4, 2013, at 2:40 AM, Thomas Young  wrote:
>
>> Aha, success! Multiplying denominator coefficients of the peaking filter by 
>> A^2 does indeed have the desired effect.
>>
>> Thank you very much for the help
>>
>> -Original Message-
>> From: music-dsp-boun...@music.columbia.edu 
>> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas 
>> Young
>> Sent: 04 January 2013 10:33
>> To: A discussion list for music-related DSP
>> Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
>> response
>>
>> Hi Nigel, which analogue prototype are you referring to when you suggest 
>> multiplying denominator coefficients by the gain factor, the peaking one?
>>
>> -Original Message-
>> From: music-dsp-boun...@music.columbia.edu 
>> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel 
>> Redmon
>> Sent: 04 January 2013 09:26
>> To: A discussion list for music-related DSP
>> Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
>> response
>>
>>> On 4/01/2013 4:34 AM, Thomas Young wrote:
>>>> However I was hoping to avoid scaling the output since if I have to 
>>>> do that then I might as well just change the wet/dry mix with the 
>>>> original signal for essentially the same effect and less messing 
>>>> about.
>> I read quickly this morning and missed this...with something like a lowpass, 
>> you do get irregularities, but with something like a peaking filter it stays 
>> pretty smooth when summing with the original signal (I guess because the 
>> phase change is smoother, with the second order split up between two 
>> halves). So that part isn't a problem, BUT...
>>
>> Consider a wet/dry mix...say you have a 6 dB peak that you want to move down 
>> to 0 dB (skirts down to -6 dB). OK, at 0% wet you have a flat line at 0 dB. 
>> At 100% wet you have your 6 dB peak again (skirt at 0 dB). At 50% wet, you 
>> have about a 3 dB peak, skirt still at 0 dB. There is no setting that will 
>> give you anything but the skirt at 0 dB.
>>
>> Again, as Ross said earlier, you could have just an output gain - set it to 
>> 0.5 (-6 dB), and now you have your skirt at -6 dB, peak at 0 dB. But a 
>> wet/dry mix knob is not going to do it.
>>
>> Ross said:
>>> There is only a difference of scale factors between your constraints a

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
Aha, success! Multiplying denominator coefficients of the peaking filter by A^2 
does indeed have the desired effect.

Thank you very much for the help

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas Young
Sent: 04 January 2013 10:33
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

Hi Nigel, which analogue prototype are you referring to when you suggest 
multiplying denominator coefficients by the gain factor, the peaking one?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 04 January 2013 09:26
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

> On 4/01/2013 4:34 AM, Thomas Young wrote:
>> However I was hoping to avoid scaling the output since if I have to 
>> do that then I might as well just change the wet/dry mix with the 
>> original signal for essentially the same effect and less messing 
>> about.

I read quickly this morning and missed this...with something like a lowpass, 
you do get irregularities, but with something like a peaking filter it stays 
pretty smooth when summing with the original signal (I guess because the phase 
change is smoother, with the second order split up between two halves). So that 
part isn't a problem, BUT...

Consider a wet/dry mix...say you have a 6 dB peak that you want to move down to 
0 dB (skirts down to -6 dB). OK, at 0% wet you have a flat line at 0 dB. At 
100% wet you have your 6 dB peak again (skirt at 0 dB). At 50% wet, you have 
about a 3 dB peak, skirt still at 0 dB. There is no setting that will give you 
anything but the skirt at 0 dB.

Again, as Ross said earlier, you could have just an output gain - set it to 0.5 
(-6 dB), and now you have your skirt at -6 dB, peak at 0 dB. But a wet/dry mix 
knob is not going to do it.

Ross said:
> There is only a difference of scale factors between your constraints and the 
> RBJ peaking filter constraints so you should be able to use them with minor 
> modifications (as Nigel suggests, although I didn't take the time to review 
> his result).
> 
> Assuming that you want the gain at DC and nyquist to be equal to your 
> stopband gain then this is pretty much equivalent to the RBJ coefficient 
> formulas except that Robert computed them under the requirement of unity gain 
> at DC and Nyquist, and some specified gain at cf. You want unity gain at cf 
> and specified gain at DC and Nyquist. This seems to me to just be a direct 
> reinterpretation of the gain values. You should be able to propagate the 
> needed gain values through Robert's formulas.

Actually, it's more than a reinterpretation of the gain values (note that no 
matter what gain you give it, you won't get anything like what Thomas is 
after). The poles are peaking the filter and the zeros are holding down the 
"shirt" (at 0 dB in the unmodified filter); obviously the transfer function is 
arranged to keep that relationship at any gain setting. So, you need to change 
it so that the gain is controlling something else-changing the relationship of 
the motion between the poles and zeros (the mod I gave does that).


On Jan 3, 2013, at 10:34 PM, Ross Bencina  wrote:

> Hi Thomas,
> 
> Replying to both of your messages at once...
> 
> On 4/01/2013 4:34 AM, Thomas Young wrote:
>> However I was hoping to avoid scaling the output since if I have to 
>> do that then I might as well just change the wet/dry mix with the 
>> original signal for essentially the same effect and less messing 
>> about.
> 
> Someone else might correct me on this, but I'm not sure that will get you the 
> same effect. Your proposal seems to be based on the assumption that the 
> filter is phase linear and 0 delay (ie that the phases all line up between 
> input and filtered version). That's not the case.
> 
> In reality you'd be mixing the phase-warped and delayed (filtered) signal 
> with the original-phase signal. I couldn't tell you what the frequency 
> response would look like, but probably not as good as just scaling the 
> peaking filter output.
> 
> On 4/01/2013 6:03 AM, Thomas Young wrote:
> > Additional optional mumblings:
> >
> > I think really there are two 'correct' solutions to manipulating 
> > only the coefficients to my ends (that is, generation of 
> > coefficients which produce filters interpolating from bandpass to flat):
> >
> > The first is to go from pole/zero to transfer function, basically as 
> > you (Nigel) described in your first message - stick the zeros in the 
> > centre, poles near 

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-04 Thread Thomas Young
Hi Nigel, which analogue prototype are you referring to when you suggest 
multiplying denominator coefficients by the gain factor, the peaking one?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 04 January 2013 09:26
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

> On 4/01/2013 4:34 AM, Thomas Young wrote:
>> However I was hoping to avoid scaling the output since if I have to 
>> do that then I might as well just change the wet/dry mix with the 
>> original signal for essentially the same effect and less messing 
>> about.

I read quickly this morning and missed this...with something like a lowpass, 
you do get irregularities, but with something like a peaking filter it stays 
pretty smooth when summing with the original signal (I guess because the phase 
change is smoother, with the second order split up between two halves). So that 
part isn't a problem, BUT...

Consider a wet/dry mix...say you have a 6 dB peak that you want to move down to 
0 dB (skirts down to -6 dB). OK, at 0% wet you have a flat line at 0 dB. At 
100% wet you have your 6 dB peak again (skirt at 0 dB). At 50% wet, you have 
about a 3 dB peak, skirt still at 0 dB. There is no setting that will give you 
anything but the skirt at 0 dB.

Again, as Ross said earlier, you could have just an output gain - set it to 0.5 
(-6 dB), and now you have your skirt at -6 dB, peak at 0 dB. But a wet/dry mix 
knob is not going to do it.

Ross said:
> There is only a difference of scale factors between your constraints and the 
> RBJ peaking filter constraints so you should be able to use them with minor 
> modifications (as Nigel suggests, although I didn't take the time to review 
> his result).
> 
> Assuming that you want the gain at DC and nyquist to be equal to your 
> stopband gain then this is pretty much equivalent to the RBJ coefficient 
> formulas except that Robert computed them under the requirement of unity gain 
> at DC and Nyquist, and some specified gain at cf. You want unity gain at cf 
> and specified gain at DC and Nyquist. This seems to me to just be a direct 
> reinterpretation of the gain values. You should be able to propagate the 
> needed gain values through Robert's formulas.

Actually, it's more than a reinterpretation of the gain values (note that no 
matter what gain you give it, you won't get anything like what Thomas is 
after). The poles are peaking the filter and the zeros are holding down the 
"shirt" (at 0 dB in the unmodified filter); obviously the transfer function is 
arranged to keep that relationship at any gain setting. So, you need to change 
it so that the gain is controlling something else-changing the relationship of 
the motion between the poles and zeros (the mod I gave does that).


On Jan 3, 2013, at 10:34 PM, Ross Bencina  wrote:

> Hi Thomas,
> 
> Replying to both of your messages at once...
> 
> On 4/01/2013 4:34 AM, Thomas Young wrote:
>> However I was hoping to avoid scaling the output since if I have to 
>> do that then I might as well just change the wet/dry mix with the 
>> original signal for essentially the same effect and less messing 
>> about.
> 
> Someone else might correct me on this, but I'm not sure that will get you the 
> same effect. Your proposal seems to be based on the assumption that the 
> filter is phase linear and 0 delay (ie that the phases all line up between 
> input and filtered version). That's not the case.
> 
> In reality you'd be mixing the phase-warped and delayed (filtered) signal 
> with the original-phase signal. I couldn't tell you what the frequency 
> response would look like, but probably not as good as just scaling the 
> peaking filter output.
> 
> On 4/01/2013 6:03 AM, Thomas Young wrote:
> > Additional optional mumblings:
> >
> > I think really there are two 'correct' solutions to manipulating 
> > only the coefficients to my ends (that is, generation of 
> > coefficients which produce filters interpolating from bandpass to flat):
> >
> > The first is to go from pole/zero to transfer function, basically as 
> > you (Nigel) described in your first message - stick the zeros in the 
> > centre, poles near the edge of the unit circle and reduce their 
> > radii
> > - doing the maths to convert these into the appropriate biquad 
> > coefficients. This isn't really feasible for me to do in realtime 
> > though. I was trying to do a sort of tricksy workaround by lerping 
> > from one set of coefficients to another but on reflection I don't 
> think there is any mathematical correctness there.

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Thomas Young
Thanks Nigel - I have just been playing around with the pole/zero plotter (very 
helpful app for visualising the problem) and thinking about it. You guys are 
probably right; the simplest approach is just to scale the output and use the 
peaking filter.

Additional optional mumblings:

I think really there are two 'correct' solutions to manipulating only the 
coefficients to my ends (that is, generation of coefficients which produce 
filters interpolating from bandpass to flat):

The first is to go from pole/zero to transfer function, basically as you 
(Nigel) described in your first message - stick the zeros in the centre, poles 
near the edge of the unit circle and reduce their radii - doing the maths to 
convert these into the appropriate biquad coefficients. This isn't really 
feasible for me to do in realtime though. I was trying to do a sort of tricksy 
workaround by lerping from one set of coefficients to another but on reflection 
I don't think there is any mathematical correctness there.
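
For what it's worth, the pole-pair-to-coefficient step itself is small - a 
sketch of my own, not from the thread, with a conjugate pole pair at radius r 
and angle w0 and both zeros at the origin:

#include <cmath>

struct BiquadCoeffs
{
    double b0, b1, b2;
    double a0, a1, a2;
};

// conjugate pole pair at radius r (0 <= r < 1) and angle w0, both zeros
// at z = 0; as r shrinks toward 0 the response flattens out.  the
// (1 - r) factor is a crude gain normalisation (an assumption, not from
// the thread).
BiquadCoeffs resonator(double r, double w0)
{
    BiquadCoeffs k;
    k.b0 = 1.0 - r;     // zeros at the origin: numerator is a constant
    k.b1 = 0.0;
    k.b2 = 0.0;
    k.a0 = 1.0;
    k.a1 = -2.0 * r * std::cos(w0);
    k.a2 = r * r;
    return k;
}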

The second is to have an analogue prototype which somehow includes skirt gain 
and take the bilinear transform to get the equations for the coefficients. I'm 
not really very good with the s domain either so I actually wouldn't know how 
to go about this, but it's what I was originally thinking of.

Thanks for the help

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 03 January 2013 18:48
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

Thomas-it's a matter of manipulating the A and Q relationships in the numerator 
and denominator of the peaking EQ analog prototypes. I'm not as good in 
thinking in the s domain as the z, so I'd have to plot it out and think-too 
busy right now, though it's pretty trivial. But just doing the gain adjustment 
to the existing peaking EQ, as Ross suggested, is trivial. Not much reason to 
go through the fuss unless you're concerned about adding a single multiply. (To 
add to the confusion, my peaking implementation is different for gain and 
boost, so that the EQ remains symmetrical, a la Zolzer).


On Jan 3, 2013, at 9:34 AM, Thomas Young  wrote:

>> I'm pretty sure that the BLT bandpass ends up with zeros at DC and 
>> nyquist
> 
> Yes I think this is essentially my problem, there are no stop bands per se, 
> just zeros which I was basically trying to lerp away - which I guess isn't 
> really the correct approach.
> 
> The solution you are proposing would work I believe; along the same lines 
> there is a different bandpass filter in the RBJCB which has a constant stop 
> band gain (or 'skirt gain' as he calls it) and peak gain for the passband - 
> so a similar technique would work there by scaling the output.
> 
> However I was hoping to avoid scaling the output since if I have to do that 
> then I might as well just change the wet/dry mix with the original signal for 
> essentially the same effect and less messing about. I feel in my gut there 
> must be some way to do it by just manipulating coefficients.
> 
> 
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Ross 
> Bencina
> Sent: 03 January 2013 17:16
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat 
> response
> 
> On 4/01/2013 4:05 AM, Thomas Young wrote:
>> Is there a way to modify the bandpass coefficient equations in the 
>> cookbook (the one from the analogue prototype H(s) = s / (s^2 + s/Q +
>> 1)) such that the gain of the stopband may be specified? I want to be 
>> able
> 
> I'm pretty sure that the BLT bandpass ends up with zeros at DC and 
> nyquist so I'm not sure how you're going to define stopband gain in 
> this case :)
> 
> Maybe start with the peaking filter and scale the output according to your 
> desired stopband gain and then set the peak gain to give 0dB at the peak.
> 
> peakGain_dB = -stopbandGain_dB
> 
> (assuming -ve stopbandGain_dB).
> 
> Does that help?
> 
> Ross.
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:

Re: [music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Thomas Young
> I'm pretty sure that the BLT bandpass ends up with zeros at DC and nyquist

Yes I think this is essentially my problem, there are no stop bands per-se just 
zeros which I was basically trying to lerp away - which I guess isn't really 
the correct approach.

The solution you are proposing would work I believe; along the same lines there 
is a different bandpass filter in the RBJCB which has a constant stop band gain 
(or 'skirt gain' as he calls it) and peak gain for the passband - so a similar 
technique would work there by scaling the output.

However I was hoping to avoid scaling the output since if I have to do that 
then I might as well just change the wet/dry mix with the original signal for 
essentially the same effect and less messing about. I feel in my gut there must 
be some way to do it by just manipulating coefficients.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Ross Bencina
Sent: 03 January 2013 17:16
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Lerping Biquad coefficients to a flat response

On 4/01/2013 4:05 AM, Thomas Young wrote:
> Is there a way to modify the bandpass coefficient equations in the 
> cookbook (the one from the analogue prototype H(s) = s / (s^2 + s/Q +
> 1)) such that the gain of the stopband may be specified? I want to be 
> able

I'm pretty sure that the BLT bandpass ends up with zeros at DC and nyquist so 
I'm not sure how you're going to define stopband gain in this case :)

Maybe start with the peaking filter and scale the output according to your 
desired stopband gain and then set the peak gain to give 0dB at the peak.

peakGain_dB = -stopbandGain_dB

(assuming -ve stopbandGain_dB).
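
In code, something like this sketch (peakingEq() stands in for whatever
peaking filter you already have, so it's an assumption, not a real call):

float stopband_dB = -20.0f;                            /* desired skirt gain   */
float peak_dB     = -stopband_dB;                      /* +20 dB inside filter */
float outScale    = powf(10.0f, stopband_dB / 20.0f);  /* 0.1 linear           */
/* per sample: y = outScale * peakingEq(x, f0, Q, peak_dB);
   the peak ends up at 0 dB and everything far from f0 at -20 dB */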

Does that help?

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Lerping Biquad coefficients to a flat response

2013-01-03 Thread Thomas Young
One for RBJ if he's back from his hols :) or anyone kind enough to answer of 
course...

Is there a way to modify the bandpass coefficient equations in the cookbook 
(the one from the analogue prototype H(s) = s / (s^2 + s/Q + 1)) such that the 
gain of the stopband may be specified? I want to be able to fade the filter 
smoothly towards a flat response.

My thought was to interpolate the coefficients towards a flat response, i.e.

b0=1 b1=0 b2=0
a0=1 a1=0 a2=0

I tried some plots and this does basically work except that the characteristics 
of the filter are affected (namely the peak gain exceeds 0dB).
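
The lerp in question, for reference (t = 0 is the original filter, t = 1 is
flat):

/* naive per-coefficient lerp towards pass-through; note this interpolates
   coefficients, not responses - the poles and zeros move along curved
   paths, which is presumably why the peak can overshoot 0dB on the way */
void lerp_to_flat(const float src[6], float t, float out[6])
{
    const float flat[6] = { 1, 0, 0, 1, 0, 0 };  /* b0 b1 b2 a0 a1 a2 */
    for (int i = 0; i < 6; ++i)
        out[i] = src[i] + t * (flat[i] - src[i]);
}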

Thomas Young
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Ghost tone

2012-12-06 Thread Thomas Young
I guess it may be related to the physics of your ear, but I would be inclined 
towards the simpler explanation of a fortuitous combination of wave fronts 
creating high pressure, rather than the inner ear itself (which I don't know 
much about, but which is complicated enough that I would be wary of 
explanations of this sort of phenomenon in those terms).
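
You can reconstruct the effect easily enough. A toy version of the kind of
signal described (my own construction and my own numbers, not Didier's actual
file): sum a block of high harmonics of ~30 Hz; with aligned phases the
partials pile up once per fundamental period, so the envelope pulses at 30 Hz
even though there is no spectral energy anywhere near 30 Hz:

#include <math.h>
#include <stdlib.h>

#define NHARM 21  /* harmonics 40..60 of a 30 Hz fundamental: 1200..1800 Hz */

void ghost_tone(float *buf, int nsamples, int sr, int aligned)
{
    double phase[NHARM];
    for (int h = 0; h < NHARM; ++h)  /* aligned: zero phases; else random */
        phase[h] = aligned ? 0.0 : 6.283185307 * rand() / (double)RAND_MAX;
    for (int n = 0; n < nsamples; ++n) {
        double s = 0.0;
        for (int h = 0; h < NHARM; ++h) {
            double cycles = fmod(30.0 * (40 + h) * n / (double)sr, 1.0);
            s += sin(6.283185307 * cycles + phase[h]);
        }
        buf[n] = (float)(s / NHARM);  /* aligned version pulses at 30 Hz */
    }
}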


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 06 December 2012 10:28
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

Mmmh, can you explain why it's there (where it's from, I mean - would this mean 
that out of the same harmonics, just with a different phase relationship, very 
low tones could be produced?), & how to see it?

I got another reply suggesting it's due to the ear's compressor, & that seems 
more believable; it also explains why it doesn't happen with the other, more 
continuous version. The gap between peaks would suggest a tone around 30 Hz, 
which could really be it. It would also imply that the ear's compression has a 
very short attack/release time, for the "compression envelope" to pulsate fast 
enough to be in the audible range.




-Message d'origine-
From: Thomas Young
Sent: Thursday, December 06, 2012 11:20 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Ghost tone

1) The low frequencies are audible
2) It's not speaker distortion, the low frequencies are present in the signal

I think the spectrum of the first signal can be a bit misleading, if you are a 
bit more selective about where you take the spectrum (i.e. between the 
asymptotic sections) the low frequency contribution is easier to see.

The unpleasant "pressure" effect is exactly that, sound pressure waves. The 
strength will be dependent on the acoustics of your environment, it will be 
particularly objectionable if your ears happen to be somewhere where a lot of 
the wavefronts collide. The proximity of headphones to your ears is no doubt 
exacerbating the effect, especially in the very low frequencies which would 
otherwise bounce all over the place and diffuse.

Mics generally won't pick up very low frequencies - or more accurately their 
sensitivity to lower frequencies is very low.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 06 December 2012 05:50
To: A discussion list for music-related DSP
Subject: [music-dsp] Ghost tone

Hi,

Here's something to listen to:
http://flstudio.image-line.com/help/publicfiles_gol/GhostTone.wav


It's divided in 2 parts, the same bunch of sine harmonics in the upper 
range, only difference is the phase alignment. (both will appear similar 
through a spectrogram)

Disregarding the difference in sound in the upper range:
1. anyone confirms the very low tone is very audible in the first half?
2. (anyone confirms it's not speaker distortion?)
3. anyone knows of literature about the phenomenon?

While I can understand where the "ghost tone" is from, I don't understand 
why it's audible. I happen to have hyperacusis & can't stand the low traffic 
rumbling around here, and I was wondering why mics weren't picking it up, as I 
perceive it very loud. I hadn't been able to resynthesize a tone as nasty 
until now, mainly because I was trying low tones alone, and I can't hear 
simple sines under 20 Hz.
The question is why do we(?) hear it, and why so much "pressure" is noticeable 
(can anyone stand it through headphones? I find the pressure effect very 
disturbing).
Strangely enough, I find the tone a lot more audible when (through
headphones) it goes to both ears, not if it's only left or right.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Ghost tone

2012-12-06 Thread Thomas Young
1) The low frequencies are audible
2) It's not speaker distortion, the low frequencies are present in the signal

I think the spectrum of the first signal can be a bit misleading, if you are a 
bit more selective about where you take the spectrum (i.e. between the 
asymptotic sections) the low frequency contribution is easier to see.

The unpleasant "pressure" effect is exactly that, sound pressure waves. The 
strength will be dependent on the acoustics of your environment, it will be 
particularly objectionable if your ears happen to be somewhere where a lot of 
the wavefronts collide. The proximity of headphones to your ears is no doubt 
exacerbating the effect, especially in the very low frequencies which would 
otherwise bounce all over the place and diffuse.

Mics generally won't pick up very low frequencies - or more accurately their 
sensitivity to lower frequencies is very low. 


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 06 December 2012 05:50
To: A discussion list for music-related DSP
Subject: [music-dsp] Ghost tone

Hi,

Here's something to listen to:
http://flstudio.image-line.com/help/publicfiles_gol/GhostTone.wav


It's divided in 2 parts, the same bunch of sine harmonics in the upper range, 
only difference is the phase alignment. (both will appear similar through a 
spectrogram)

Disregarding the difference in sound in the upper range:
1. anyone confirms the very low tone is very audible in the first half?
2. (anyone confirms it's not speaker distortion?)
3. anyone knows of literature about the phenomenon?

While I can understand where the "ghost tone" is from, I don't understand why 
it's audible. I happen to have hyperacusis & can't stand the low traffic 
rumbling around here, and I was wondering why mics weren't picking it up, as I 
perceive it very loud. I hadn't been able to resynthesize a tone as nasty until 
now, mainly because I was trying low tones alone, and I can't hear simple sines 
under 20 Hz.
The question is why do we(?) hear it, and why so much "pressure" is noticeable 
(can anyone stand it through headphones? I find the pressure effect very 
disturbing).
Strangely enough, I find the tone a lot more audible when (through
headphones) it goes to both ears, not if it's only left or right. 

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] i need a knee

2012-08-10 Thread Thomas Young
Not getting a response on a mailing list doesn't mean people are disrespecting 
you; we can do without the self-righteousness, thank you.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bastian Schnuerle
Sent: 10 August 2012 11:24
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] i need a knee

ok, i got it by myself, took a while .. but a small hint would have been nice, 
you guys have all those books i can not afford and i am only a ee dipl.ing. and 
they wanted me to build bombs and instead i am coding musical instruments, you 
should respect that .. thanks
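
For anyone finding this thread in the archive later: one standard soft-knee
limiter gain curve (quadratic knee, ratio -> infinity, in the style of the
Giannoulis/Massberg/Reiss compressor tutorial - not necessarily the solution
arrived at here) looks like this:

#include <math.h>

/* gain in dB to apply for input level in_db; kneeW is total knee width (dB) */
float limiter_gain_db(float in_db, float thresh_db, float kneeW)
{
    float over = in_db - thresh_db;
    if (2.0f * over < -kneeW) return 0.0f;    /* below knee: unity gain      */
    if (2.0f * over >  kneeW) return -over;   /* above knee: clamp to thresh */
    float t = over + kneeW * 0.5f;            /* inside knee:                */
    return -(t * t) / (2.0f * kneeW);         /* quadratic joins both ends   */
}
/* linear gain = powf(10.0f, limiter_gain_db(...) / 20.0f) */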



Am 20.07.2012 um 08:17 schrieb robert bristow-johnson:

> On 7/19/12 4:19 PM, Bastian Schnuerle wrote:
>> hey everybody,
>>
>> i am trying to compute a knee value in peak limiting. my code below 
>> works quite nice for very loud singals (which i like), but if not so 
>> loud values are processed this code introduces distortion. in the 
>> mycode there a some way-off alterations, which made the signal being 
>> quite smooth for loud signal, but as soon as it come to low peak, 
>> they/i are/am failing.
>>
>> i would be very happy if you guys could post me some suggestions why 
>> this beaviour for low peaked signals appear and maybe your are 
>> willing to share altererations to my code to make it universal 
>> working ?!
>>
>> .//...
>>
>> static const float log6dB = 6.02059991327962f;
>
>20*log10(2.0)
>
> the dB equivalent to "one octave".
>
>> const float kNee = log6dB-GetKneeDBfromGUI();
>
>GetKneeDBfromGUI() returns the knee in dB relative to what?  the 
> rails?
>
>> mFinalEnv = mProcessSignal;
>
> how is mProcessSignal defined?
>
>> float kneeGain = 1.0;
>> if( mFinalEnv > 1.0f ) gain = 1.0f/mFinalEnv;
>
> and where is gain used?
>
>>
>> const float outLimit = 2.0f*1.0f;
>>
>> float curOutLimit6dBBelow = curOutLimit/2.0f; const float 
>> maxUnequalAtt = -10.0f;
>>
>> if( mFinalEnv > curOutLimit6dBBelow ) //6db below limit {
>> floatinv = curOutLimit6dBBelow/mFinalEnv;
>>
>> // do knee computation
>> floatkneePos1 = 0;
>> floatkneePos2 = 0;
>>
>> if( mFinalEnv >= curOutLimit )
>> {
>> kneePos1 = 1.0f;
>> kneePos2 = 1.0f;
>> }
>> else
>> {
>> // create two knee curves
>> floatxPos = (mFinalEnv - curOutLimit6dBBelow)/ 
>> (curOutLimit-curOutLimit6dBBelow);
>> kneePos1 = xPos*xPos*xPos*xPos;
>>
>> kneePos2 = xPos;
>> }
>>
>> // xfade between the knees
>> float kneeFactor = kNee/log6dB;
>
> is kNee ever anything other than 1?
>
>> kneeFactor = ((1.0f-kneeFactor)*kneePos1 + 
>> kneeFactor*kneePos2)/7;
>
> i see the xfade.  i don't see why you are dividing by 7.
> .
>
>>
>> floatoverallAttDb = FastLinToDb(inv);
>> kneeAtt = FastDbToLin(get_max(gain*(overallAttDb*
>> (kneeFactor)),maxUnequalAtt));
>>
>
> okay, "gain" is used here.
>
>> kneeGain = kneeAtt;
>>
>> }
>>
>> }
>>
>> for( int fc = 0; fc < channels; ++fc ) {
>> float val = mFinalVectors[fc][mFinaTail[fc]] * kneeGain; }
>>
>> ..//..
>
> can you define mathematically you're trying to do?  i can see that you 
> have some sorta mix between linear and x^4.  i realize this is for a 
> soft-knee limiter of some sort.  but i cannot grok the intended math 
> to be accomplished with this code.  can you just state the math (with 
> "if" statements, where needed)?
>
> --
>
> r b-j  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-11 Thread Thomas Young
GA isn't really supposed to mimic the real world as closely as you are 
suggesting; in the real world the fitness criteria for the evolution is/was 
incredibly complicated and evolving itself. I'd be interested to know what 
fitness criteria you used in your experiment, because unless you are quite 
strict with your requirements you will never really tend towards something 
usable (again maybe you think this is cheating to some degree). It relates to 
that other thread which is going on about Timbre classification, unless you are 
able to program some metric by which to judge the quality of the result you 
will never be able to tell it whether it has succeeded or failed.

I personally don't consider these things as cheating, it is just setting the 
domain, starting point and criteria by which to judge success. As you say maybe 
this doesn't meet some people's expectations of what GA's can do.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of douglas repetto
Sent: 11 June 2012 18:36
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Pointers for auto-classification of sounds?


Sure, that's a reasonable perspective from an efficiency standpoint. But I 
think that for many people the "romance" of using something like a GA is the 
promise of something from nothing. After all, that's the deep mystery at the 
heart of biological evolution -- how in the world did it kick off? Biological 
evolution didn't start at a "sensible starting point". Of course, biological 
evolution also didn't start out doing mutations on chromosomes or playing with 
different allele crossover processes, so the whole metaphor is pretty tortured.

That's really why I mostly lost interest in that domain -- the realization that 
in order to efficiently generate audible output I'd have to build a lot of 
musical/perceptual "cheats" into the process. But the whole point for me at the 
time was the promise of alien worlds of sound being evolved, free from my own 
musical perspective. Turns out there's not really any good shortcut for the 
equivalent of several billions of years of evolution.

douglas



On 6/11/12 1:23 PM, Thomas Young wrote:
> I think when you say "'cheat' and seed it with some viable algorithms" 
> you are misrepresenting genetic algorithms somewhat. They all need 
> some starting point from which to evolve and a sensible domain in 
> which to work. If you set that starting point too low and that domain 
> incorrectly then the time they take to find something 'good' and the 
> amount of prodding they need to get there will be dramatically 
> increased.
>
> You have to bear in mind that things like arithmetic operations and 
> stack operations on individual samples or groups of samples is not low 
> level with respect to music or signal processing, it is low level with 
> respect to computers and number crunching. Producing algorithms which 
> operate on low level signal concepts like oscillators and operating in 
> a musical domain (i.e. the frequency domain) would not be 'cheating', 
> just picking a sensible starting point :)
>
> -Original Message- From: music-dsp-boun...@music.columbia.edu
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of douglas 
> repetto Sent: 11 June 2012 18:14 To: A discussion list for 
> music-related DSP Subject: Re: [music-dsp] Pointers for 
> auto-classification of sounds?
>
> On 6/11/12 1:10 PM, Thomas Young wrote:
>>> I was super excited until I realized that the sound was the result 
>>> of a bug in my code
>> Bug, or awesome new feature?
>
> Ha. I guess I could argue that my awesome software did "evolve" that 
> sound, as long as I defined "my awesome software" as the process of 
> writing the software rather than the software itself. It was 
> definitely an unanticipated sound, which is generally what motivates 
> people to work with things like GAs in the first place...
>
> douglas
>
>
> -- ... http://artbots.org 
> .douglas.irving http://dorkbot.org 
> .. http://music.columbia.edu/cmc/music-dsp
> ...repetto. http://music.columbia.edu/organism
> ... http://music.columbia.edu/~douglas
>
> -- dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp -- dupswapdrop
> -- the music-dsp mailing list and website: subscription 

Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-11 Thread Thomas Young
I think when you say "'cheat' and seed it with some viable algorithms" you are 
misrepresenting genetic algorithms somewhat. They all need some starting point 
from which to evolve and a sensible domain in which to work. If you set that 
starting point too low and that domain incorrectly then the time they take to 
find something 'good' and the amount of prodding they need to get there will be 
dramatically increased.

You have to bear in mind that things like arithmetic operations and stack 
operations on individual samples or groups of samples is not low level with 
respect to music or signal processing, it is low level with respect to 
computers and number crunching. Producing algorithms which operate on low level 
signal concepts like oscillators and operating in a musical domain (i.e. the 
frequency domain) would not be 'cheating', just picking a sensible starting 
point :)

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of douglas repetto
Sent: 11 June 2012 18:14
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Pointers for auto-classification of sounds?

On 6/11/12 1:10 PM, Thomas Young wrote:
>> I was super excited until I realized that the sound was the result of 
>> a bug in my code
> Bug, or awesome new feature?

Ha. I guess I could argue that my awesome software did "evolve" that sound, as 
long as I defined "my awesome software" as the process of writing the software 
rather than the software itself. It was definitely an unanticipated sound, 
which is generally what motivates people to work with things like GAs in the 
first place...

douglas


--
... http://artbots.org 
.douglas.irving http://dorkbot.org 
.. http://music.columbia.edu/cmc/music-dsp
...repetto. http://music.columbia.edu/organism
... http://music.columbia.edu/~douglas

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-11 Thread Thomas Young
> I was super excited until I realized that the sound was the result of a bug 
> in my code

Bug, or awesome new feature?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of douglas repetto
Sent: 11 June 2012 18:06
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Pointers for auto-classification of sounds?

On 6/11/12 9:33 AM, Charles Turner wrote:
> Ross pretty much guessed my interest. Trying to see whether it's 
> possible to automate the exploration of the parameter space of a 
> synthesis algorithm. Imperative to something like that would be a 
> procedure to analyze large quantities of sound data. Hence timbre 
> classification. Of course, the process could be really simple:
> eliminating the settings that produced zero output, or immediately 
> went into self-oscillation. But who knows where the line between "too 
> many to audition" and convincing timbre classification lies?


Years ago I worked a lot on some genetic algorithms that used low level 
operations (multiplication, store on a stack, etc) to evolve synthesis 
routines. Or hoped to anyway. The GA code wasn't hard, but evaluating the 
output from a given algorithm was a killer. I ended up with really simple 
things like "is the signal non-zero?" "is there any variation in the 
amplitude?" The range of valid 1 second 44.1k sounds is VAST, and within that 
vastness is a tiny island (or archipelago) of audible sounds.

It got to the point where I realized that either I'd have to "cheat" and seed 
it with some viable algorithms or add in some more sophisticated "primitives", 
like "take the sine". One day I ran the GA and an AMAZING SOUND came out:

http://music.columbia.edu/~douglas/evo.wow.wav (4mb)

I was super excited until I realized that the sound was the result of a bug in 
my code. I promptly gave up.

Now that incredible (ly annoying) sound is my ringtone.


douglas


--
... http://artbots.org 
.douglas.irving http://dorkbot.org 
.. http://music.columbia.edu/cmc/music-dsp
...repetto. http://music.columbia.edu/organism
... http://music.columbia.edu/~douglas

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Pointers for auto-classification of sounds?

2012-06-08 Thread Thomas Young
You haven't really explained which aspect of the timbre you want to use to 
organise the sounds, and timbre is such a catch-all word that you need to 
specify the characteristics you are looking for in more detail to have any 
chance of producing something useful.

Categorising by amplitude envelope would be pretty straightforward, since 
amplitude over time is something you can easily measure and process before 
judging it against some metric or heuristic. You would need to pick similar 
aspects of the timbre, things you can quantitatively measure, in order to 
develop a scheme for categorisation.
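
For example, a coarse amplitude envelope is just frame-by-frame RMS - a sketch
(the frame size is an arbitrary choice of mine):

#include <math.h>

/* writes one RMS value per non-overlapping frame into env[] */
void rms_envelope(const float *x, int n, int frame, float *env, int *nframes)
{
    int f = 0;
    for (int i = 0; i + frame <= n; i += frame, ++f) {
        float acc = 0.0f;
        for (int j = 0; j < frame; ++j)
            acc += x[i + j] * x[i + j];
        env[f] = sqrtf(acc / (float)frame);
    }
    *nframes = f;
}

You can then compare envelopes against a silence threshold, an ADSR template,
and so on.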


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Charles Turner
Sent: 08 June 2012 18:36
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Pointers for auto-classification of sounds?

Hi all-

I was initially hesitant to post to the list as I haven't explored this topic 
very deeply, but after a second thought I said "what the hell," so please 
forgive if my Friday mood is more lazy than inquisitive.

Here's my project: say I have a collective of sound files, all short and the 
same length, say 1 second in length. I want to classify them according to 
timbre via a single characteristic that I can then organize along one axis of a 
visual graph.

The files have these other properties:

  . Amplitude envelope. I don't need to classify by time characteristic, but 
samples could have different characteristics,
ranging from complete silence, to a classical ADSR shape, to values pegged 
at either +-100% or 0% amplitude.

  . Timbre. Samples could range in timbre from noise to
(hypothetically) a pure sine wave.

Any ideas on how to approach this? I've looked at a few papers on the subject, 
and their aims seem somewhat different and more elaborate than mine (instrument 
classification, etc). Also, I've started to play around with Emmanuel 
Jourdain's zsa.descriptors for Max/MSP, mostly because of the eazy-peazy 
environment. But what other technology (irrespective of language) might I 
profit from looking at?

Thanks so much for any interest!

Best,

Charles
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-04 Thread Thomas Young
lol wow

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bram de Jong
Sent: 04 April 2012 16:15
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] maintaining musicdsp.org

On Wed, Apr 4, 2012 at 5:11 PM, Thomas Young  
wrote:
> Maybe submissions should be added to a moderation queue rather than added 
> directly (i.e. they need to be manually whitelisted). I don't think a super 
> quick turnaround on new algorithm submissions is really important for 
> something like musicdsp.org.

they ARE added to a queue.
the queue now contains about 500 spam submissions.
that's the whole (current) problem.

some kind of "report as spam" thing for the comments would be nice too as there 
are SOME (but few) spam comments.

 - bram
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] maintaining musicdsp.org

2012-04-04 Thread Thomas Young
Maybe submissions should be added to a moderation queue rather than added 
directly (i.e. they need to be manually whitelisted). I don't think a super 
quick turnaround on new algorithm submissions is really important for something 
like musicdsp.org.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bram de Jong
Sent: 04 April 2012 16:07
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] maintaining musicdsp.org

On Wed, Apr 4, 2012 at 3:31 PM, Bastian Schnuerle 
 wrote:
> what is exactly the roadmap and tasks to do ? i think i could find 
> some helping hands for you, including mine .. maybe altogether we find 
> a way to get some work away from you ?

oh there is absolutely no roadmap! it just needs some love to stop the spammers 
from submitting DSP algorithms (about well known drugs and handbags I won't 
describe here lest I end up in spam filters). and, if anyone has some new/fresh 
ideas for it and someone else feels like implementing those, always welcome!

 - bram
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] google's non-sine

2012-02-24 Thread Thomas Young
Yes, I'm pretty sure Google programmers are smart enough to render a sine wave 
should they so desire. It was obviously an artistic decision, albeit a somewhat 
misguided one :P

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 24 February 2012 07:56
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] google's non-sine

Eh, I still say they weren't going for a sine wave at all. Look at their other 
doodles. I'm sure that their designers would have felt that a sine wave would 
have missed the point for them.

http://www.zazzle.com/robert_schumanns_200th_birthday_tshirt-235517387819488097


On Feb 23, 2012, at 3:27 PM, douglas repetto wrote:
> 
> But it's Google!!! Surely they have the resources to generate a sinewave 
> animation that features an actual sinewave if they want to.
> 
> I know it's a silly thing to rant about. But the Google front page has a lot 
> of reach (how many millions of hits a day?), and it gives me deep nerd pain 
> to think about something so fundamental and so beautiful -- yes, there's a 
> deep connection between a circle and a sinewave! -- being botched.
> 
> I'll stop ranting now!
> 
> douglas
> 
> On 2/23/12 9:53 AM, Didier Dambrin wrote:
>> There's also the fact that it's not easy to draw a sinewave in existing
>> tools out there.
>> Those who have drawn GUIs here and had to show waveforms know what I
>> mean, I remember I've ended up with google-like non-sines as I was
>> trying to draw a sine using 2 half ellipses. It may be what happened to
>> the guy who drew that.
>> Ask yourself how you'd do it.. the most accurate would be to use a real
>> plot of a sine, but now good luck converting that to vectors in a proper
>> image editing tool, for a nice antialiased display.
>> 
> 
> 
> -- 
> ... http://artbots.org
> .douglas.irving http://dorkbot.org
> .. http://music.columbia.edu/cmc/music-dsp
> ...repetto. http://music.columbia.edu/organism
> ... http://music.columbia.edu/~douglas
> 
> --

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Introducing myself (Alessandro Saccoia)

2012-02-23 Thread Thomas Young
> float pan = sin(2 * PI * frequency * time++ / 44100);

As 'time' increases, changes to 'frequency' will result in larger and larger 
discontinuities. You should accumulate the phase - adding the per-sample phase 
increment each sample - rather than multiplying the current frequency by the 
absolute time.
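
A sketch of the phase-accumulator version:

#include <math.h>

static float phase = 0.0f;  /* persists across samples */

float lfo_sample(float frequency, float sampleRate)
{
    phase += 2.0f * 3.14159265f * frequency / sampleRate;  /* per-sample step */
    if (phase > 6.2831853f) phase -= 6.2831853f;  /* wrap to keep precision  */
    return sinf(phase);
}

Changing 'frequency' now only changes how fast the phase advances, so the
output stays continuous.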

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Bill Moorier
Sent: 23 February 2012 18:05
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Introducing myself (Alessandro Saccoia)

Thanks Alessandro!  Unfortunately I don't think this is the problem though.

I added a simple moving average on the parameter and it didn't make
the nasty artifacts go away.  So I rewrote the whole thing as a VST so
I can post more code without having to reveal my messy in-progress
javascript framework ;)  Here it is:
http://abstractnonsense.com/23-feb-2012-cc.html

So if I sweep the parameter smoothly with this VST loaded in Ableton,
I get nasty artifacts as the sweep is happening.  I can see from the
trace that the parameter isn't changing too quickly, and the resulting
"frequency" variable always changes by less than 0.01Hz between
samples.  But the "pan" variable ends up changing by a lot - often
nearly as much as 0.1 between samples!

Thinking about it some more, I'm getting suspicious of the code that
writes to the "pan" variable - it's supposed to be an LFO.  The thing
that makes me suspicious is, if we change the frequency, even very
slightly, between samples then we're "gluing together" two different
sine waves.  But we're not taking into account phase!  If the "time"
variable gets big (which it will), then there doesn't seem to be any
reason to believe that two slightly-different-frequency sine waves
would have almost the same values at the current "time".  The ends of
the sine waves might not match up when we glue them together!

Am I on the right track with this line of thinking?  Is there a better
way to write a variable-frequency LFO than just:
float pan = sin(2 * PI * frequency * time++ / 44100);

Thanks again,
Bill.



> Hello Bill,
> I take your question as a chance to introduce myself.
> When you sweep the input parameter you are introducing discontinuities in the 
> output signal, and that sounds awful.
> The simplest case to figure that out in your code is imagining that you have 
> the input variable set at 0 (pan = 0), and then abruptly you change the input 
> parameter to a value that will make the value of the pan variable jump to 1. 
> Both of the channels will generate a square wave, that will sound badly 
> because of the aliasing. One solution is to control the slew rate of your 
> parameter lowpass filtering your parameter. A simple moving average filter 
> should do the job correctly.
>
> I have been reading this newsletter for a couple of years now, and I think 
> that it's the best place to learn about the practical applications of musical 
> dsp. I have been working in the digital audio field since 3 years now, even 
> though I have been interested in computer music since my first years at the 
> university.
> Now I am freelancing in this field, and I also get to play music more often. 
> This is really stimulating my imagination, and I hope that in the next months 
> I will have the time to implement some new effect or instruments.
> Thank you for all the nice things that I have learnt here,
>
> Alessandro

-- 
Bill Moorier abstractnonsense.com | @billmoorier
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-23 Thread Thomas Young
Ringing in your ears due to exposure to loud noise is the stereocilia (small 
hair cells) being damaged and falsely reporting to your brain that there is 
still sound vibration present. The frequency of the ringing is not a function 
of the sound that damaged your ears (a super loud bassy sound doesn't cause a 
bassy ringing in your ears).

Tom

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of David Olofson
Sent: 23 February 2012 16:15
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] a little about myself

On Thursday 23 February 2012, at 17.10.52, David Olofson  
wrote:
> On Thursday 23 February 2012, at 16.17.38, Adam Puckett
> 
>  wrote:
> > Interesting. How would you make an ear ringing sound?
[...]
> A more realistic approach might be to FFT the offending audio snippet that
> triggers the effect, remove all bands but the "too loud" peaks and then
> IFFT that to create the "ring" waveform.

Oh, wait. Probably throw in some heavy distortion before the FFT, 
approximating the mechanical distortion in the ear...? Also, the whole thing 
needs to take into account that the "too loud" level is highly frequency 
dependent.

Nothing is ever nearly as simple as it may seem at first. ;-)


-- 
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://consulting.olofson.net  http://olofsonarcade.com   |
'---------------------------------------------------------------------'
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] google's non-sine

2012-02-22 Thread Thomas Young
Haha, great stuff

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Phil Burk
Sent: 22 February 2012 19:26
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] google's non-sine

I couldn't help myself. The Google waveform appears to be made of random 
elliptical segments.  Here is a JSyn Applet that plays the "wave doodle":



Phil Burk
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] a little about myself

2012-02-22 Thread Thomas Young
I've tried that before, basically writing a soft synth and a sequencer. I won't 
pretend I made anything that sounded very good but it was fun. In the demoscene 
there is a whole field of 'programmed music' which is just this, although 
generally they use a sequencer for the composition and just code the 
instruments.

I have to say I've never really seen the appeal of Csound, since it seems as 
complicated as just writing your own code in C/C++; maybe I am missing some 
great use case for it.
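
The "straight C" approach really can be a complete program - a toy sketch (my
own, untested; the aplay invocation is an assumption about your setup):

#include <stdio.h>
#include <math.h>

/* endless A major arpeggio as unsigned 8-bit mono PCM on stdout;
   try something like:  ./a.out | aplay -r 8000 -f U8 */
int main(void)
{
    const int rate = 8000;
    const float notes[4] = { 220.00f, 277.18f, 329.63f, 440.00f };
    float phase = 0.0f;
    for (int t = 0; ; ++t) {
        float f = notes[(t / (rate / 2)) % 4];          /* two notes/second */
        phase += 2.0f * 3.14159265f * f / rate;         /* accumulate phase */
        if (phase > 6.2831853f) phase -= 6.2831853f;    /* wrap             */
        putchar((int)(127.5f + 120.0f * sinf(phase)));  /* centre ~127      */
    }
}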

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Adam Puckett
Sent: 22 February 2012 13:45
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] a little about myself

It's nice to see some familiar names in Csound's defense.

Here's something I've considered since learning C: has anyone
(attempted to) compose music in straight C (or C++) just using the
audio APIs? I think that would be quite a challenge. I can see quite a
bit more algorithmic potential there than probably any of the DSLs
written in it.

On 2/21/12, Michael Gogins  wrote:
> It's very easy to use Csound to solve idle mind puzzles! I think many
> of us, certainly myself, find ourselves becoming distracted by the
> technical work involved in making computer music, as opposed to the
> superficially easier but in reality far more difficult work of
> composing.
>
> Regards,
> Mike
>
> On Tue, Feb 21, 2012 at 7:53 PM, Emanuel Landeholm
>  wrote:
>> Well. I need to start using csound. To actually do things in the real
>> world instead of just solving idle mind puzzles.
>>
>> On Tue, Feb 21, 2012 at 10:02 PM, Victor  wrote:
>>> i have been running csound in realtime since about 1998, which makes it
>>> what? about fourteen years, however i remember seeing code for RT audio
>>> in the version i picked up from cecelia.media.mit.edu back in 94. So,
>>> strictly this capability has been there for the best part of twenty
>>> years.
>>>
>> --
>> dupswapdrop -- the music-dsp mailing list and website:
>> subscription info, FAQ, source code archive, list archive, book reviews,
>> dsp links
>> http://music.columbia.edu/cmc/music-dsp
>> http://music.columbia.edu/mailman/listinfo/music-dsp
>
>
>
> --
> Michael Gogins
> Irreducible Productions
> http://www.michael-gogins.com
> Michael dot Gogins at gmail dot com
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Signal processing and dbFS

2012-01-18 Thread Thomas Young
I'm just highlighting Uli's post which I think contains the answer to her main 
question of how to convert a normalised matlab plot into dbFS. I don't mean to 
denigrate what other people have said.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Nigel Redmon
Sent: 18 January 2012 15:58
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Signal processing and dbFS

If we're "overthinking" it, it's probably because we're not sure what Linda's 
after. I'm sure she knows the formula for voltage conversion to dB.


On Jan 18, 2012, at 2:34 AM, Thomas Young wrote:
> Most people seem to be overthinking this, Uli has posted the important 
> equation here:
> 
> Y [dBFS] = 20*Log10(X/FS)
> 
> That is all you need for converting normalised peak values to dbFS.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Signal processing and dbFS

2012-01-18 Thread Thomas Young
Most people seem to be overthinking this; Uli has posted the important equation 
here:

Y [dBFS] = 20*Log10(X/FS)

That is all you need for converting normalised peak values to dBFS.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Uli Brueggemann
Sent: 18 January 2012 08:20
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Signal processing and dbFS

My simple point of view about dBFS (full scale):
The full scale FS of a 16 bit soundcard is 2^15=32768, of a 24 bit
soundcard it is 2^23 = 8388608.
The dB number Y of a value X represents a relation to the full scale:
Y [dBFS] = 20*Log10(X/FS)
So with X=32768 and a 16 bit soundcard you get Y =
20*Log10(32768/32768) = 20*Log10(1) = 20*0 = 0 dBFS

The result 32768/32768 = 1.0 (or 8388608/8388608) explains why the
DSP maths often use values in the range -1.0 .. +1.0.
Of course with floating point numbers you can also use higher values.
At least until you try to send the result to a soundcard. Converting a
float number to an integer value by multiplication with the resolution
of the soundcard will lead to clipping if you have values exceeding
the full scale.

All this consideration has nothing to do with the output voltage of
the soundcard. Thus 0 dBFS may result in a level of -10 dBV or + 4dBu
or whatever the analog reference level is.

In practical applications it is always necessary to avoid clipping in
the digital domain and in the analog domain. This means to leave some
headroom. E.g if you define a headroom of 12 dB for yourself as
optimal then you may arbitrarily define -12 dBFS as 0 dBLS (Linda S).
That's why there is all the confusion with different definitions
around. Especially if you forget to mention LS and talk about 0 dB
only instead of 0 dBLS :-)
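
In code the basic conversions are one-liners, e.g. for the 16 bit case:

#include <math.h>

float norm_to_dbfs(float x)   /* x already normalised, so FS = 1.0 */
{
    return 20.0f * log10f(fabsf(x));   /* 1.0 -> 0 dBFS, 0.5 -> -6.02 */
}

float int16_to_dbfs(int x)    /* FS = 32768 for a 16 bit card */
{
    return 20.0f * log10f(fabsf((float)x) / 32768.0f);
}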

Uli



On Wed, Jan 18, 2012 at 7:34 AM, Linda Seltzer
 wrote:
> Good evening from snowy Washington State,
>
> Those who know me personally know that my background in music DSP is via
> the AT&T Labs / Bell Labs signal processing culture.  So for the first
> time I have encountered the way recording studio engineers think about
> measurements.  They plot frequency response graphs using dBFS on the Y
> axis.
>
> Recording engineers are accustomed to the idea of dBFS because of the
> analog mixes where the needle looked like it was pegging at 0dB.  The
> problem is what this means in signal processing.  My understanding is that
> different manufacturers have different voltage levels that are assigned
> as 0dB.  In signal processing we are not normally keeping track of what
> the real world voltage level is out there.
>
> When I am looking at a signal in Matlab it is a number in linear sampling
> from the lowest negative number to the highest positive number.  Or it is
> normalized to 1.
>
> I don't know how to translate from my plot of a frequency response in
> Matlab to a graph using dbFS.  Visually a signal processing engineer
> wouldn't think about the peaks as much as a recording engineer sitting at
> a mixing console does.  I can see that from their point of view they are
> worrying about the peaks because of worries about distortion.  A signal
> processing engineer assumes that was already taken care of by the times
> the numbers are in a Matlab data set.  The idea of plotting things based
> on full scale is a bit out of the ordinary in signal processing because
> many of our signals never approach full scale.  Telephone calls are
> different from rock music.
>
> Any comments on this issue would be greatly appreciated.
>
> Linda Seltzer
>
>
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] anyone care to take a look at the Additive synthesis article at Wikipedia?

2012-01-16 Thread Thomas Young
I'd like to say well done to everyone who has edited this so far, it looks 
massively better :)

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 16 January 2012 16:16
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] anyone care to take a look at the Additive synthesis 
article at Wikipedia?

On 1/16/12 1:16 AM, Nigel Redmon wrote:
> Nice improvements.
>
> This may seem like nitpicking, but the "Timeline of additive synthesizers" 
> section seems to choose keeping the instrument name as the start of the 
> sentence over proper grammar. For instance:
>
>Hammond organ, invented in 1934[26], is an electronic organ that uses nine 
> drawbars to mix several harmonics, which are generated by a set of tonewheels.
>
> This should either read
>
>The Hammond organ, invented in 1934[26], is an electronic organ that 
> uses...
>
> or something like
>
>Hammond organ-invented in 1934[26], the Hammond organ is an electronic 
> organ that uses...
>
> or
>
>Hammond organ: Invented in 1934[26], the Hammond organ is an electronic 
> organ that uses...
>
> (Note that one entry, "EMS Digital Oscillator Bank (DOB) and Analysing Filter 
> Bank: According to...", does it this way already.)
>
> You have enough cooks working on that page right now, so I'd rather leave it 
> up to you guys what route you go. But if you use a sentence, it should read 
> like one.
>

well, there is evidence that Clusternote is from Japan.  dunno if these 
sentences were written by him.

i wouldn't discourage you from editing at all.

L8r,

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] anyone care to take a look at the Additivesynthesis > article at Wikipedia?

2012-01-11 Thread Thomas Young
Man I wish I hadn't gone to that wiki page now, it really is a mess and there 
are some pretty glaring errors (missing brackets on the summation in the 
Fourier series equation, and citation needed... wtf?)


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Dave Hoskins
Sent: 11 January 2012 15:36
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] anyone care to take a look at the Additivesynthesis > 
article at Wikipedia?



On 11/01/2012 14:39, Alen Koebel wrote:
> Also, things that are considered correct don't necessarily stay correct.
>   
> That also works in reverse. Things that were correct on Wikipedia can be 
> become incorrect in the blink of an eye. For every contributor that actually 
> knows something about a subject there are a hundred that don't but think they 
> do, and more than a few of the latter want the attention and sense of power 
> that squatting on a topic in Wikipedia can provide. See "Clusternote."

That sense of peer-review power also means that some technical pages 
lose their language-based explanations and fuller meanings, leaving 
mostly bare formulas - out of the reach of many trying to 
understand the concepts and basics of the subjects themselves.

>
>> This is also a major advantage of online encyclopedias vs (paper) books,
>> a book only contains the knowledge at one time.
>   
> The idea of freely available online information is good. But that's not the 
> same thing as saying that Wikipedia is the best way to go about that. 
> Wikipedia is knowledge by mob rule. If you use it be very careful.

Don't forget that when someone says something odd or hard to believe as 
fact, people joke "Did you get that from Wikipedia?"
Well they do here in the UK anyway : )









--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] anyone care to take a look at the Additivesynthesis > article at Wikipedia?

2012-01-11 Thread Thomas Young
Wikipedia isn't always 100% accurate but it's still the most well presented and 
informative source for a huge variety of topics, people should remember that it 
is an encyclopaedia as well, not a reference book for advanced signal 
processing.

On a slightly more philosophical note I think the idea of a free, community 
maintained repository of knowledge about everything human beings have learned 
is a beautiful thing and I can't see how anyone could think it is bad. Some 
child in Africa with a cheap laptop has the entirety of human knowledge at 
their fingertips.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 11 January 2012 11:41
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] anyone care to take a look at the Additivesynthesis > 
article at Wikipedia?

You don't have to convince me, because I have a funny story about a (small) 
mistake about the history of the company here, in a wikipedia article. One 
day I see that same mistake on our own website, and ask the one who set up 
the page where he got that information.. wikipedia of course.  So a mistake 
on wikipedia kinda got relayed & "made true" by.. ourselves :)

But it's still the place I'd trust the most. Yes there are other 
encyclopedias but generally limiting themselves to "important" subjects, 
while wikipedia has something on pretty much anything.

Also, things that are considered correct don't necessarily stay correct. 
This is also a major advantage of online encyclopedias vs (paper) books; a 
book only contains the knowledge at one time. (& books also tend to be 
filled with hot air for the sake of filling them)

Can't comment on the problem of writing at wikipedia, but seeing that it 
isn't filled with hippie/spiritual crap (& we know how much new age crap 
there is around audio), that's a sign that it's somewhat moderated. You 
still can't see any article about "solfeggio frequencies" there, while I'm 
sure that some assholes must have tried already. But there certainly are 
*books* about that crap.






-Message d'origine- 
From: Thomas Strathmann
Sent: Wednesday, January 11, 2012 10:27 AM
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] anyone care to take a look at the Additive synthesis article at Wikipedia?

On 1/11/12 9:39 , Didier Dambrin wrote:
> How is wikipedia a bad idea?  Only because it has mistakes? Sure, but
> it's still a lot more reliable than any other source on the net. All
> encyclopedias can have mistake & any information has to be verified 
> anyway.
> What else would replace it? Certainly not random discussions in random
> forums/newsgroups or random articles on shady websites.

I do not think that Wikipedia is a bad idea. The problem is that
everybody can contribute and people tend to get into arguments and
editing wars like they would in a debate. In some random discussion in a
random forum this would not be such a big problem because it's
transparent by the nature of the communication taking place (i.e. the
thread) that people of different opinion and background are having a
(possibly heated) discussion. On Wikipedia you have to dig a little
deeper before you uncover that exactly this is happening at the moment
for some given article. It's not immediately apparent when looking up
the page on a search term. That combined with the fact that some people
(even scholars) do cite Wikipedia is the real problem. Luckily there are
books and articles you can even get online in many cases that -- while
not necessarily being primary sources of information -- are excellent
secondary sources. For more in-depth information about some topics, a site
like the Stanford Encyclopedia of Philosophy is a better source than
Wikipedia. For Wikipedia, it would be nice to have an indication of the
"level of confidence" of any contributor. If one is a known expert in a
field, one is probably less likely to make fundamental mistakes; if
one is just an amateur who has read up on the topic in some random
discussion or article on a shady website, there will surely be omissions
and mistakes in the article one writes. It should be understood that as
far as citeability or scholarly rigor is concerned, Wikipedia has to be
considered one of those shady websites. Some light, but also some very
dark spots, and it's not always clear to the non-initiated (which, I'd
say, is the intended audience of an encyclopaedia) which is which. I
will leave it at that. Just move along, nothing of (on topic) value to
see here ...

Thomas
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Splitting audio signal into N frequency bands

2011-11-02 Thread Thomas Young
'Biquad' is a filter topology meaning the transfer function is expressed as a 
quadratic divided by another quadratic (biquadratic). They are pretty common in 
the world of digital filters because they are very simple to convert into an 
efficient algorithm (Direct form 1 etc...). See 
http://en.wikipedia.org/wiki/Digital_biquad_filter
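
To illustrate, here is a minimal Direct Form 1 sketch in C++ (my own
illustration, not from the archive; coefficients are assumed already
normalised so that a0 = 1):

struct BiquadDF1 {
    float b0, b1, b2, a1, a2;   // transfer function coefficients
    float x1 = 0, x2 = 0;       // previous two inputs
    float y1 = 0, y2 = 0;       // previous two outputs

    // y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    float process(float x) {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};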

A Butterworth filter has a particular transfer function, see 
http://en.wikipedia.org/wiki/Butterworth_filter. I think there are a few 
implementations in the music DSP archives.

I think David made a good point that you don't really want an aggressive cut 
off with your filter, since sounds which straddle the crossover frequencies can 
suffer quite badly. As he suggests, using simpler, less aggressive filters (e.g. 
pretty much any first or second order iir filter) will probably be acoustically 
a lot nicer.

Thomas

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thilo Köhler
Sent: 02 November 2011 12:10
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Splitting audio signal into N frequency bands

Hello Thomas, Wen!

Thank you for the quick input on this.

1. I found that in the 3-band case, splitting up 
the low and high band from the input and then 
generating the mid band by subtracting them
works much better than the "salami" strategy
(chopping off slices with a LP).
Thanks!

2. 
> Subtracting the LP part makes sense only if the LP filter is zero-phase.
I don't know if my filters are zero phase, I am not deep enough
into the filter math to tell you straight away. It is an IIR taken from
here:
http://www.musicdsp.org/showArchiveComment.php?ArchiveID=259

This one seems to work best for my purposes, but that is just
from subjective listening without any mathematical evidence.

Is this a Butterworth filter like Thomas suggests? (sorry if the question
sounds like a noob...) In the comment they call it a biquad, I don't know
if a biquad can be Butterworth or if these are mutually exclusive.

I have also tried:

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=266
Doesn't work well for low cutoff frequencies, like <150Hz.
I am using single precision.

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=117
Seems to be too flat, not steep enough.

http://www.musicdsp.org/showArchiveComment.php?ArchiveID=237
Seems to be too flat, not steep enough.

I think in the use case of a multi-band compressor, perfect
reconstruction is important. That is why I want to create
the bands by subtracting and not with independent filters.
I assume this is a good strategy, no?

Regards,

Thilo

> I  believe the typical way is to directly construct a series of steep
> band-pass  filters to cover the whole frequency range. This is very
> flexible but  usually means the individual parts do not accurately add up
> to the original  signal. On the other hand, if perfect sum is desirable
> you may wish to take  a look at mirror filters, such as QMF. These are
> pairs of LP and HP filters  designed to guarantee perfect reconstruction.
>
>
> --
> From: "Thilo K?hler" 
> Sent: Monday, October 31, 2011 10:47 AM
> To: 
> Subject: [music-dsp] Splitting audio signal into N frequency bands
>
>> Hello all!
>>
>> I have implemented a multi-band compressor (3 bands).
>> However, I am not really satisfied with the splitting of the bands,
>> they have quite a large overlap.
>>
>> What I do is take the input signal, perform a low pass filter
>> (say 250Hz) and use the result for the low band#1.
>> Then I subtract the LP result from the original input and do
>> a lowpass again with a higher frequency (say 4000Hz).
>> The result is my mid band#2, and after subtracting again the remaining
>> signal is my highest band#3.
>>
>> I assume this procedure is appropriate, please tell me otherwise
>>
>> The question is now the choice of the filter.
>> I have tried various filters from the music-dsp code archive,
>> but I still haven't found a satisfying filter.
>>
>> I need a steep LP filter (12db/oct or more),
>> without resonance and as little ringing as possible.
>> The result subtracted from the input must work as a HP filter.
>>
>> Are there any concrete suggestions how such a LP filter should look
>> like, or is there even a different, better way to split the audio signal
>> into 3 bands (or N bands)?
>>
>> I know I can use FFT, but for speed reasons, I want to avoid FFT.
>>
>> Regards,
>>
>> Thilo Koehler

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Splitting audio signal into N frequency bands

2011-10-31 Thread Thomas Young
I think you will get better results doing a lowpass & highpass, then 
subtracting those from your original signal to get a middle band.

For the filter I would have thought you want something as 'clean' as possible, 
i.e. no ripple, flat as possible in the passband and with a linear falloff (it 
shouldn't need to be particularly steep, but 12db/oct does sound about right). 
For me a Butterworth springs straight to mind, second order will give you 
12db/oct. It's pretty cheap as well if that is a concern.
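
As a rough sketch of that subtractive split (my own illustration; Filter
stands for any single-sample processor with a float process(float) method,
such as a second order Butterworth):

#include <cstddef>

template <typename Filter>
void splitThreeBands(const float* in, float* low, float* mid, float* high,
                     std::size_t n, Filter& lp, Filter& hp)
{
    for (std::size_t i = 0; i < n; ++i) {
        low[i]  = lp.process(in[i]);         // lowpass at the lower crossover
        high[i] = hp.process(in[i]);         // highpass at the upper crossover
        mid[i]  = in[i] - low[i] - high[i];  // bands sum back to the input
    }
}

The mid band inherits whatever phase shifts the two filters introduce, but by
construction the three bands always sum back to the original signal.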

I am interested to know how other people do the splitting into bands as well 
though, I plan on implementing a multiband compressor shortly myself. Multiple 
bandpass filters would seem logical, but there must be a bit of an issue with 
colouring the signal.

Thomas Young

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thilo Köhler
Sent: 31 October 2011 10:47
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Splitting audio signal into N frequency bands

Hello all!

I have implemented a multi-band compressor (3 bands).
However, I am not really satisfied with the splitting of the bands,
they have quite a large overlap.

What I do is take the input signal, perform a low pass filter
(say 250Hz) and use the result for the low band#1.
Then I subtract the LP result from the original input and do
a lowpass again with a higher frequency (say 4000Hz).
The result is my mid band#2, and after subtracting again the remaining
signal is my highest band#3.

I assume this procedure is appropriate, please tell me otherwise

The question is now the choice of the filter.
I have tried various filters from the music-dsp code archive,
but I still haven't found a satisfying filter.

I need a steep LP filter (12db/oct or more),
without resonance and as little ringing as possible.
The result subtracted from the input must work as a HP filter.

Are there any concrete suggestions how such a LP filter should look like,
or is there even a different, better way to split the audio signal
into 3 bands (or N bands)?

I know I can use FFT, but for speed reasons, I want to avoid FFT.

Regards,

Thilo Koehler

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Vectorising IIR Filters

2011-10-11 Thread Thomas Young
> doing so must necessarily increase the filter order to M

Yes I suppose that is the more technically accurate description of the 
'additional complexity' I was referring to. I held back from saying it 
'necessarily' increases the complexity, because I was thinking after expanding 
the equation you could potentially simplify the result, but on reflection I 
think you are correct that it necessarily increases the order of the filter and 
so necessarily increases the computational complexity.

> now, if pole/zero cancellation is somehow accomplished (by including terms of 
> input x[n] all the way back to x[n-M]), then the filter can appear (from the 
> POV of the input and output terminals) to be of smaller order and perhaps 
> equivalent to the original IIR spec

I was coming at it from more of a pure maths direction, in the sense that if 
you simply take the equation of the filter which has been discretised (i.e. not 
your transfer function but the direct form implementation or whatever you have 
with respect to discrete input and output samples) then you can do a literal 
substitution of output samples with their corresponding equation which is 
mathematically correct. 

I'm not sure if this counts as meeting the same specification, the impulse 
response should be identical but delayed by n samples, I guess the phase is not 
the same so you would not get an identical pole/zero plot. Might be interesting 
to look at those things but I don't have matlab here.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of robert 
bristow-johnson
Sent: 11 October 2011 17:32
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Vectorising IIR Filters



Thomas Young [thomas.yo...@rebellion.co.uk] writes:

> Refactoring the filter to be with respect to n outputs behind (when
> using vectors of length n) is an excellent idea. I was a bit skeptical
> that the maths was correct there, but having read it over and
> stuffed some numbers into excel in a highly unscientific manner it
> does seem to work and make sense.

what bothers me a little is that by incorporating y[n-M] (and perhaps some more 
current samples of y[n]) into the recursion for an IIR filter that was 
originally spec'd to have order of less than M, doing so must necessarily 
increase the filter order to M.  now, if pole/zero cancellation is somehow 
accomplished (by including terms of input x[n] all the way back to x[n-M]), 
then the filter can appear (from the POV of the input and output terminals) to 
be of smaller order and perhaps equivalent to the original IIR spec.  is that 
what is happening here?

--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Vectorising IIR Filters

2011-10-11 Thread Thomas Young
Thanks Sebastien.

Refactoring the filter to be with respect to n outputs behind (when using 
vectors of length n) is an excellent idea. I was a bit skeptical that the maths 
was correct there, but having read it over and stuffed some numbers into excel 
in a highly unscientific manner it does seem to work and make sense.

One slight issue I can see is that it increases the complexity of the filter 
equation, so it is necessary for the benefit of parallelisation to outweigh the 
additional complexity of the equation; with a vector of length 4 the benefit 
may be a little dubious. To take your example, if we were to execute it in 
serial:

f(y)   = k * (y0)
f(y+1) = k * (y1)
f(y+2) = k * (y2)
f(y+3) = k * (y3)

 = 4 muls

f4(y) = k^5 * y4 // in parallel for y, y+1 ... y+3

 = 5 muls in parallel

Although I guess there are more load/store's in the serialised version.
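
Sketching the idea for the toy one-pole recursion above (my own illustration,
written scalar for clarity; a real version would keep the four lanes in one
SIMD register, and with the indexing I use here each lane advances by
y[n] = k^4 * y[n-4]):

#include <cstddef>

void onePoleVec4(float* y, std::size_t n, float k, float yPrev)
{
    const float k2 = k * k;
    const float k4 = k2 * k2;
    // Seed the four lanes with y[0]..y[3], derived from the last output.
    float lane[4] = { k * yPrev, k2 * yPrev, k2 * k * yPrev, k4 * yPrev };
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        for (int j = 0; j < 4; ++j) {   // four independent muls -> one SIMD mul
            y[i + j] = lane[j];
            lane[j] *= k4;              // advance each lane by four samples
        }
    }
    for (; i < n; ++i)                  // scalar tail for leftover samples
        y[i] = k * (i == 0 ? yPrev : y[i - 1]);
}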

Anyway I will give it a go and report back :)

Thanks
Tom


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Sebastien Metrot
Sent: 11 October 2011 16:13
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Vectorising IIR Filters

There is one thing that you can do: develop the recursive part so that instead 
of it depending on y0, it depends on y4. For example if your recursive function 
is defined as
f(y) = k * (y0)
(this is very simplified for the sake of exposing the idea, doing the same with 
a biquad is much longer).

you'd calculate
f'(y) = k * f(y0) = k * k * y1
then
f"(y) = k^3 * y2
and 
f3(y) = k^4 * y3

and finally 
f4(y) = k^5 * y4
Which is the function we are interested in.

Once you have your general formula f4(y), you have to start the vector pump by 
initializing the first four floats of your vector register with the 4 initial 
values y4, y3, y2 and y1, then you can implement the f4 function to use your 
registers as they should be (and it becomes straightforward if you have enough 
SIMD registers). I have done a biquad implementation of this technique but 
never took the time to finish it. It was for the 32 bit x86, which had only 
enough registers to implement the recursive part of the biquad and not the 
forward one, so I had to do it in two passes. I tried to find that code in my 
archives some weeks ago but it seems I lost it in the last 5 years or so.

I'm still pretty sure it's an interesting idea to try and I plan to give it 
another shot one of these days if I have some time to dive into assembly code.

Hope this helps!

S.

-- 
Sebastien Metrot
Yasound - CTO - Cofounder
sebast...@yasound.com




On Oct 11, 2011, at 4:18 PM, Thomas Young wrote:

> Does anyone have any thoughts on vectorising IIR filters?
> 
> I can't see how an IIR filter implementation can be broken down into 
> parallelisable pieces when there is feedback involved, and I guess that more 
> broadly applies to any effect where feeding back is required. 
> 
> For example and perhaps to clarify, if we take a transfer function and put it 
> into direct form 1, then we will end up with the coefficients in our 
> denominator relying on previous outputs. Naively a vectorised implementation 
> would process n outputs concurrently, for example using a 4 float vector and 
> vector intrinsics to process the filter 4 samples at a time, however this 
> obviously does not work when each ouput requires the previous output to have 
> been already calculated.
> 
> Tom Young
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Vectorising IIR Filters

2011-10-11 Thread Thomas Young
Does anyone have any thoughts on vectorising IIR filters?

I can't see how an IIR filter implementation can be broken down into 
parallelisable pieces when there is feedback involved, and I guess that more 
broadly applies to any effect where feeding back is required. 

For example and perhaps to clarify, if we take a transfer function and put it 
into direct form 1, then we will end up with the coefficients in our 
denominator relying on previous outputs. Naively a vectorised implementation 
would process n outputs concurrently, for example using a 4 float vector and 
vector intrinsics to process the filter 4 samples at a time, however this 
obviously does not work when each ouput requires the previous output to have 
been already calculated.

Tom Young
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Noise performance of f32 iir filters

2011-10-03 Thread Thomas Young
I'm still waiting to hear the delicious details of Peter's Bacon Lettuce and 
Tomato Transform

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Dave Hoskins
Sent: 03 October 2011 17:46
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Noise performance of f32 iir filters

> Nigel Redmon wrote:
> > (not to mention ATM--geez how many meanings can that one attain?).
>
> thefreedictionary.com lists 106 meanings...
> http://acronyms.thefreedictionary.com/ATM

At The Moment, According To Me, the Advanced Testing Method for Automated 
Theorem Provers is to read the Acceptance Test Manual from the Association 
of Teachers of Mathematics at the Annual Technical Meeting where they all 
Ate Too Much!

I'll get my coat... : )
D.


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Thomas Young
Interesting. I was referring to 'proper' synths really, which I wouldn't 
really group with romplers and black boxes personally, but I guess end users 
don't necessarily make the same distinction. I can definitely 

> Personally I'm not making black boxes, because to do this you can't just be a 
> programmer, you also have to be an artist

That's part of the fun of working with music & DSP though, you need a bit of a 
creative touch, I would have thought most people on this list would agree. 
Obviously making black boxes is taking it to the extreme though.

> "preset designers" [...] Probably not very well paid

Heh, well yea, that's the problem with being a programmer, you can never really 
move into a 'creative' job without suffering major pay decimation :P

Out of curiosity do you program soft or hard synths? I'd be interested to know 
if it is easy to get into professional soft synth/effect industry and if it 
pays well. 

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 14 September 2011 13:49
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] FM Synthesis

It's totally ok to make presets yourself & with a beta team, as long as the 
full editor is given to the user (so that he or others can do presets 
himself), but when it's a rompler or a black box, you really need "artists" 
to be using your private tools.

Also it's pretty hard to keep backwards compatibility during the first 90% 
of the dev of a plugin, so yes you make a lot of sounds during that process, 
but they will be lost. Well that's in my case.

Personally I'm not making black boxes, because to do this you can't just be 
a programmer, you also have to be an artist because you will be making 
hard-coded choices, so you'd better know what you're doing (I mean: a 
parameter that doesn't end up on the GUI is a choice that the programmer had 
to make. If it's a bad choice, the artist using it will have to do with it). 
I prefer to give as much power as possible, even if some features end up not 
being used.  Then for the next synth/effect I can still only implement what 
has been used in the previous stuff. But what's sure is that a lot of 
musicians don't like complex synths.



>I always suspected most of the presets just fell out of the development 
>process, you are always going to need to test that your engine sounds good 
>(synth construction is 50% science and 50% magic after all) and that surely 
>generates a whole load of sounds. Polish those up and you already have a 
>load of presets, let your QA/Interface designers have a bash while they are 
>developing as well and you have yourself a whole suite! I could be totally 
>wrong of course.
>
>> some kind of middleware guys, in-between musicians & engineers, who 
>> program presets for engines
>
> And where can I apply for this job? ;)
>
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
> Sent: 14 September 2011 11:32
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] FM Synthesis
>
> The evolution seems to be going towards some kind of middleware guys,
> in-between musicians & engineers, who program presets for engines put in
> black box "instruments", so that the musican can get access to non-sampled
> FM presets, while not having access to FM at all.
> Well, seems to be what NI is doing as well.
>
> It already happened with samplers, in the past the musician got a sampler,
> that he could either program himself or feed with soundbanks. These days a
> musician buys a rompler, & doesn't have access to the editor (which
> obviously has to exist for the middleware guy who made the soundbank).
>
>
>
>
>>I don't think musicians mind tweaking and poking presets (what's the worst
>>that can happen?), so it's really up to the makers to provide plenty of
>>decent preset sounds. NI FM8 for example has a pretty good selection of
>>presets and is a popular FM synth these days, I rate it pretty highly
>>myself.
>>
>>
>> -Original Message-
>> From: music-dsp-boun...@music.columbia.edu
>> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
>> Sent: 14 September 2011 11:19
>> To: A discussion list for music-related DSP
>> Subject: Re: [music-dsp] FM Synthesis
>>
>> I believe FM sounds got back in fashion at the time everyone had 
>> forgotten
>> that shitty GM FM bank in Windows, when FM meant cheesy MIDI files.
>> ..but the problem with FM is that no one can program presets, it's very
>> unpredictable when you deal with more than 3 operators, and I'm not sure
>> today's musicians wanna deal with such complexity anymore. Actually, if
>> the
>> story that "most DX7s never got their patches tweaked" is true, maybe no
>> musician ever wanted to deal with such complexity.
>>
>>
>>
>>> On 13/09/2011 21:06, Theo Verelst wrote:
 Hi
>>

Re: [music-dsp] FM Synthesis

2011-09-14 Thread Thomas Young
I always suspected most of the presets just fell out of the development 
process, you are always going to need to test that your engine sounds good 
(synth construction is 50% science and 50% magic after all) and that surely 
generates a whole load of sounds. Polish those up and you already have a load 
of presets, let your QA/Interface designers have a bash while they are 
developing as well and you have yourself a whole suite! I could be totally 
wrong of course.

> some kind of middleware guys, in-between musicians & engineers, who program 
> presets for engines

And where can I apply for this job? ;)

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 14 September 2011 11:32
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] FM Synthesis

The evolution seems to be going towards some kind of middleware guys, 
in-between musicians & engineers, who program presets for engines put in 
black box "instruments", so that the musican can get access to non-sampled 
FM presets, while not having access to FM at all.
Well, seems to be what NI is doing as well.

It already happened with samplers, in the past the musician got a sampler, 
that he could either program himself or feed with soundbanks. These days a 
musician buys a rompler, & doesn't have access to the editor (which 
obviously has to exist for the middleware guy who made the soundbank).




>I don't think musicians mind tweaking and poking presets (what's the worst 
>that can happen?), so it's really up to the makers to provide plenty of 
>decent preset sounds. NI FM8 for example has a pretty good selection of 
>presets and is a popular FM synth these days, I rate it pretty highly 
>myself.
>
>
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
> Sent: 14 September 2011 11:19
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] FM Synthesis
>
> I believe FM sounds got back in fashion at the time everyone had forgotten
> that shitty GM FM bank in Windows, when FM meant cheesy MIDI files.
> ..but the problem with FM is that no one can program presets, it's very
> unpredictable when you deal with more than 3 operators, and I'm not sure
> today's musicians wanna deal with such complexity anymore. Actually, if 
> the
> story that "most DX7s never got their patches tweaked" is true, maybe no
> musician ever wanted to deal with such complexity.
>
>
>
>> On 13/09/2011 21:06, Theo Verelst wrote:
>>> Hi
>>>
>> ..
>>> Remember that Frequency Modulation of only two operators already has
>>> theoretically (without counting sampling artifacts!) a Bessel-function
>>> guided spectrum which is in every case infinite, although at low
>>> modulation indexes the higher components are still small. Also think
>>> about the phase accuracy: single precision numbers are not good at
>>> counting more samples than a few seconds for instance.
>>>
>>
>> Not too much of a problem if you use table lookup, which is what I
>> assume the DX 7 did. Phase errors are a problem in single precision if
>> you compute and accumulate.
>>
>>
>> ..
>>> Oh, there are Open Source FM synths maybe worth looking at: a csound
>>> "script" (or what that is called there)
>>
>>  Csound has the "foscil" two-operator self-contained opcode, and of
>> course you can roll your own operator structures ad lib. Somewhere there
>> is a full DX 7 emulation complete with patches (poss in the Csound book;
>> not to hand right now).
>>
>> Have we now reached the point where FM sounds are back in fashion?
>>
>> Richard Dobson
>>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, 
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, 
> dsp links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp







--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FM Synthesis

2011-09-14 Thread Thomas Young
I don't think musicians mind tweaking and poking presets (what's the worst that 
can happen?), so it's really up to the makers to provide plenty of decent 
preset sounds. NI FM8 for example has a pretty good selection of presets and is 
a popular FM synth these days, I rate it pretty highly myself.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Didier Dambrin
Sent: 14 September 2011 11:19
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] FM Synthesis

I believe FM sounds got back in fashion at the time everyone had forgotten 
that shitty GM FM bank in Windows, when FM meant cheesy MIDI files.
..but the problem with FM is that no one can program presets, it's very 
unpredictable when you deal with more than 3 operators, and I'm not sure 
today's musicians wanna deal with such complexity anymore. Actually, if the 
story that "most DX7s never got their patches tweaked" is true, maybe no 
musician ever wanted to deal with such complexity.



> On 13/09/2011 21:06, Theo Verelst wrote:
>> Hi
>>
> ..
>> Remember that Frequency Modulation of only two operators already has
>> theoretically (without counting sampling artifacts!) a Bessel-function
>> guided spectrum which is in every case infinite, although at low
>> modulation indexes the higher components are still small. Also think
>> about the phase accuracy: single precision numbers are not good at
>> counting more samples than a few seconds for instance.
>>
>
> Not too much of a problem if you use table lookup, which is what I
> assume the DX 7 did. Phase errors are a problem in single precision if
> you compute and accumulate.
>
>
> ..
>> Oh, there are Open Source FM synths maybe worth looking at: a csound
>> "script" (or what that is called there)
>
>  Csound has the "foscil" two-operator self-contained opcode, and of
> course you can roll your own operator structures ad lib. Somewhere there
> is a full DX 7 emulation complete with patches (poss in the Csound book;
> not to hand right now).
>
> Have we now reached the point where FM sounds are back in fashion?
>
> Richard Dobson
> 
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] FM Synthesis

2011-09-12 Thread Thomas Young
Regarding:

buffer[i] = Math.sin( ( phase * ratio * frequency + buffer[ i ] ) * PI_TWO ); 
// FM (PM whatever)
phase += 1.0 / 44100.0; // advance phase in normalized space

Your oscillator will be prone to drifting out of phase due to the way you are 
adding the reciprocal of the sample rate (a recurring decimal) to a variable 
each sample.

phase += 1.0 / 44100.0; ( = 0.0000226757369...)

This error will accumulate and become significant after a large number of 
samples, it may not be your problem here but it would still be worth using a 
different mechanism for calculating it (i.e. use (i / SampleRate))
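
As a minimal sketch of that suggestion (my own illustration; it assumes an
integer sample rate so the counter can be wrapped exactly):

#include <cstdint>

// Derive the normalised phase from an integer sample counter instead of
// accumulating 1.0/44100.0, so rounding error cannot build up over time.
double phaseAt(std::uint64_t n, std::uint64_t sampleRate = 44100)
{
    return static_cast<double>(n % sampleRate) / static_cast<double>(sampleRate);
}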

Don't you also want "+ buffer[ i ]" after the "* PI_TWO"? I might just be 
missing something there though.

Tom Young 

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Andre Michelle
Sent: 12 September 2011 14:59
To: music-dsp@music.columbia.edu
Subject: [music-dsp] FM Synthesis

Hi all,


I just started an implementation of an FM synth and I am struggling with a 
certain effect, which I am not sure is caused by a bug or a conceptual 
error.

The structure is straightforward. I have N operators, which can be freely 
routed including feedback. Each Oscillator has one buffer to write its 
processing result. The next operators in the processing chain can use the 
buffer as modulation input. I compare the output with NI FM8 to check if I get 
similar output. So far very close. The issue occurs, when I choose a routing 
with feedback. A > B > A, where B's output will be sent to the sound card. If I 
choose a blocksize (samples each computation) of one, I get reasonable audio at 
my output (similar to sawtooth). However running a blocksize of e.g. 64 samples 
or even just more than 1 sample results in some nasty glitchy sound. Looks to 
me like a phase problem, but I am pretty sure that my code covers all that. My 
question is: Is it even possible to let the operators process entire 
audio-blocks independently? I guess, later processing of LFOs, envelopes and 
everything each sample might not lead to readable code and could not be easily 
optimized.

my Java code of a single Operator:
http://files.andre-michelle.com/temp/OperatorOscillator.txt

For the routing above (A>B>A, out:B) the linear process sequence is A,B,Output 
(while B,A,Output would also be nice)

Any hints? Thanks!

--
André Michelle
http://www.audiotool.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Multichannel Stream Mixer

2011-08-30 Thread Thomas Young
Actually summing then normalising would deal with that situation, it's pre-gain 
adjustments which won't work there.

Signal 1:   0.0  0.1  0.8  0.9  0.5
Signal 2:   0.0  0.0  0.0  0.0  0.0
Signal 3:   0.0  0.0  0.0  0.0  0.0
Signal 4:   0.0  0.0  0.0  0.0  0.0

Pre-gain:   0.0 .025  0.20 .225 .125  ( = Sum( Signal[n] ) / 4 )
Normalised: 0.0 .111  .888 1.0  .555  ( = Sum( Signal[n] ) / Max( 
Signal[0..n][0..m] ) )
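
In code, that sum-then-normalise step looks something like this (my own
sketch, names hypothetical):

#include <algorithm>
#include <cmath>
#include <cstddef>

void mixAndNormalise(const float* const* in, std::size_t numStreams,
                     float* out, std::size_t numFrames)
{
    float peak = 0.0f;
    for (std::size_t i = 0; i < numFrames; ++i) {
        float sum = 0.0f;
        for (std::size_t s = 0; s < numStreams; ++s)
            sum += in[s][i];                 // plain summing mix
        out[i] = sum;
        peak = std::max(peak, std::fabs(sum));
    }
    if (peak > 0.0f) {                       // loudest sample lands at full scale
        const float g = 1.0f / peak;
        for (std::size_t i = 0; i < numFrames; ++i)
            out[i] *= g;
    }
}

Note this normalises offline over a whole buffer; for real-time streams you
would track the peak over time or use a limiter instead.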


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 15:14
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

Thx for clarifying this, I'll look in this direction!
I just wanted to avoid summing; let's say I have 10 signals, 9 are silence
and one has sound, this last one will be totally inaudible if I sum and
normalize.
I'll look into compressor/limiter.

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas Young
Sent: Tuesday, August 30, 2011 3:52 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Multichannel Stream Mixer

If you want to mix your signals 'cleanly', i.e. avoid introducing as many
additional frequencies as possible, then summing is the best approach.
Performing a normalisation step at the end, or a straightforward gain
adjustment before summing (divide by the number of sources you are mixing or
whatnot) does not colour the signal because you are just changing the
amplitude across all samples.

If you don't care about a 'clean' mix you can do dynamic gain adjustments to
make the signals qualitatively more audible, the article you linked
describes a simple form of dynamic range compression, so you might do well
to research that topic:

http://en.wikipedia.org/wiki/Dynamic_range_compression

A compressor/limiter is the traditional approach to this, and there are
varieties based on spectral content, sidechaining etc... which all fall
under the umbrella of dynamic range compression.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 14:35
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

I just wonder if summing the buffers is just enough.
I read an article about the problem, just summing up isn't enough :
http://www.vttoth.com/digimix.htm
I'll have a view on this, thx anyway.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas Young
Sent: Tuesday, August 30, 2011 3:12 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Multichannel Stream Mixer

Is there something wrong with just summing the buffers for each channel?

It's not really clear what you are trying to achieve, do you want to downmix
your channels?

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 14:05
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

FramesRecorder(i,0) is a class pointer with access to "i" the sample number,
"0" is the channel.
Variable vol, is just an example to assign volume scale to the samples.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Wen Xue
Sent: Tuesday, August 30, 2011 2:18 PM
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

Is FramesRecorder(...) a macro or a function call? 
And why multiply vol=1.0 when everything is floating-point already? 

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 12:13
To: 'A discussion list for music-related DSP'
Subject: [music-dsp] Multichannel Stream Mixer

Hi, I am trying to implement a multi channel mixer in c++, very basic. I
have multiple streams and their associated buffer.
I'm using RtAudio and STK to work with the samples and audio devices.
I gather the buffers in a tick method, and fetch all the samples to produce
the sum :

FramesRecorder is the final output buffer. -1.0f to 1.0f
Frames[numplayers] are the stream buffers. -1.0f to 1.0f
int numplayers, the stream index.

the sines are normalized to -1.0 to 1.0 , float32.

for ( unsigned int i=0; i< nBufferFrames ; i++ ) {
float vol = 1.0f;
FramesRecorder(i,0) = ( FramesRecorder(i,0) + (
Frames[numplayers](i,0) * vol ))  ; 
FramesRecorder(i,1) = ( FramesRecorder(i,1) + (
Frames[numplayers](i,1) * vol ))  ;  

Re: [music-dsp] Multichannel Stream Mixer

2011-08-30 Thread Thomas Young
If you want to mix your signals 'cleanly', i.e. avoid introducing as many 
additional frequencies as possible, then summing is the best approach. 
Performing a normalisation step at the end, or a straightforward gain 
adjustment before summing (divide by the number of sources you are mixing or 
whatnot) does not colour the signal because you are just changing the amplitude 
across all samples.

If you don't care about a 'clean' mix you can do dynamic gain adjustments to 
make the signals qualitatively more audible, the article you linked describes a 
simple form of dynamic range compression, so you might do well to research that 
topic:

http://en.wikipedia.org/wiki/Dynamic_range_compression

A compressor/limiter is the traditional approach to this, and there are 
varieties based on spectral content, sidechaining etc... which all fall under 
the umbrella of dynamic range compression.
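
For a flavour of the idea, here is a toy static soft-limiter curve (my own
sketch; a real compressor/limiter applies gain through an envelope follower
with attack and release, not statically per sample like this):

#include <cmath>

// Pass the signal through below the threshold; squash the overshoot
// smoothly with tanh so the output never reaches full scale.
float softLimit(float x, float threshold = 0.8f)
{
    float ax = std::fabs(x);
    if (ax <= threshold)
        return x;
    float headroom = 1.0f - threshold;
    float limited  = threshold + headroom * std::tanh((ax - threshold) / headroom);
    return (x < 0.0f) ? -limited : limited;
}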


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 14:35
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

I just wonder if summing the buffers is just enough.
I read an article about the problem, just summing up isn't enough :
http://www.vttoth.com/digimix.htm
I'll have a view on this, thx anyway.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Thomas Young
Sent: Tuesday, August 30, 2011 3:12 PM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Multichannel Stream Mixer

Is there something wrong with just summing the buffers for each channel?

It's not really clear what you are trying to achieve, do you want to downmix
your channels?

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 14:05
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

FramesRecorder(i,0) is a class pointer with access to "i" the sample number,
"0" is the channel.
Variable vol, is just an example to assign volume scale to the samples.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Wen Xue
Sent: Tuesday, August 30, 2011 2:18 PM
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

Is FramesRecorder(...) a macro or a function call? 
And why multiply vol=1.0 when everything is floating-point already? 

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 12:13
To: 'A discussion list for music-related DSP'
Subject: [music-dsp] Multichannel Stream Mixer

Hi, I am trying to implement a multi channel mixer in c++, very basic. I
have multiple streams and their associated buffer.
I'm using RtAudio and STK to work with the samples and audio devices.
I gather the buffers in a tick method, and fetch all the samples to produce
the sum :

FramesRecorder is the final output buffer. -1.0f to 1.0f
Frames[numplayers] are the stream buffers. -1.0f to 1.0f
int numplayers, the stream index.

the sines are normalized to -1.0 to 1.0 , float32.

for ( unsigned int i=0; i< nBufferFrames ; i++ ) {
float vol = 1.0f;
FramesRecorder(i,0) = ( FramesRecorder(i,0) + (
Frames[numplayers](i,0) * vol ))  ; 
FramesRecorder(i,1) = ( FramesRecorder(i,1) + (
Frames[numplayers](i,1) * vol ))  ; 
}
// and something like giving the sum of streams.
FramesRecorder[0] /= numplayers;
FramesRecorder[1] /= numplayers; 


what would be the best method to mix multi channel buffers together ? I know
it's a newbie question, but I would like to know about your experience with
this. I would like to implement an efficient method as well. Thx !




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Multichannel Stream Mixer

2011-08-30 Thread Thomas Young
Is there something wrong with just summing the buffers for each channel?

It's not really clear what you are trying to achieve, do you want to downmix 
your channels?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 14:05
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

FramesRecorder(i,0) is a class pointer with access to "i" the sample number,
"0" is the channel.
Variable vol, is just an example to assign volume scale to the samples.


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Wen Xue
Sent: Tuesday, August 30, 2011 2:18 PM
To: 'A discussion list for music-related DSP'
Subject: Re: [music-dsp] Multichannel Stream Mixer

Is FramesRecorder(...) a macro or a function call? 
And why multiply vol=1.0 when everything is floating-point already? 

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of StephaneKyles
Sent: 30 August 2011 12:13
To: 'A discussion list for music-related DSP'
Subject: [music-dsp] Multichannel Stream Mixer

Hi, I am trying to implement a multi channel mixer in c++, very basic. I
have multiple streams and their associated buffer.
I'm using RtAudio and STK to work with the samples and audio devices.
I gather the buffers in a tick method, and fetch all the samples to produce
the sum :

FramesRecorder is the final output buffer. -1.0f to 1.0f
Frames[numplayers] are the stream buffers. -1.0f to 1.0f
int numplayers, the stream index.

the sines are normalized to -1.0 to 1.0 , float32.

for ( unsigned int i=0; i< nBufferFrames ; i++ ) {
float vol = 1.0f;
FramesRecorder(i,0) = ( FramesRecorder(i,0) + (
Frames[numplayers](i,0) * vol ))  ; 
FramesRecorder(i,1) = ( FramesRecorder(i,1) + (
Frames[numplayers](i,1) * vol ))  ; 
}
// and something like giving the sum of streams.
FramesRecorder[0] /= numplayers;
FramesRecorder[1] /= numplayers; 


what would be the best method to mix multi channel buffers together ? I know
it's a newbie question, but I would like to know about your experience with
this. I would like to implement an efficient method as well. Thx !




--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Electrical Engineering Foundations

2011-08-30 Thread Thomas Young
If the goal here is education then I would suggest formatting your work as a 
syllabus or structured as lessons with exercises. The raw information is 
already available through books and sites like Wikipedia (where the wording and 
presentation is already carefully reviewed), so I would have thought the focus 
would be on presenting the reader with the information in what you consider to 
be the 'correct' order, from the fundamentals upward.


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 27 August 2011 23:11
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] Electrical Engineering Foundations

I've made a small beginning at  http://www.theover.org/Dsp .

Feel free to comment/request/fire questions, etc, in fact I don't even 
mind making it a Wiki page (to let others contribute), and I don't know 
yet how many linked pages and example materials I'll make.

Theo
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Electrical Engineering Foundations

2011-08-24 Thread Thomas Young
What resources would you recommend Theo?

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Theo Verelst
Sent: 24 August 2011 17:01
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Electrical Engineering Foundations

Just to maybe put some people at ease and hopefully arousing some 
discussions, I'd like to point the attention of a lot of people in the 
DSP corners of recreation and science, hobby and serious research to the 
general foundations for Sampling Theory and Digital Signal Processing 
and possibly information theory and filter knowledge.

Of course there is little against a lot of gents (and possibly gals) 
plowing around in the general subject of "doing something with audio and 
a computer". It appears however that for the reasonably reasons such as 
required IQ and education (or for fun) but also for less reasonable 
reasons such as power pyramids and greed, the theoretical foundations of 
even the simplest thing of all, playing a CD or soundfile through a DA 
converter for many remain rather shaky or plain wrong or simply absent.

For analog electronics filters and such are well known subjects 
applicable from the simple to the quite complex and the electronics 
world generally tends to have a solid foundation for dealing with the 
issues and theories involved, though not necessarily all electronicists 
are on par with the required network theoretical foundations, I've 
understood MNA methods in the US are even taught at high school level.

For EEs (electrical engineers) it will usually be the case they've heard 
about sampling theory somewhere in their education, and idem ditto for 
some filter or circuits and systems type of theory. Many others, like in 
IT (informatics), even physics and other disciplines, or people making 
their hobby a profession, this might all just as well be abracadabra, and 
it seems to me this should change.

Regards,

Theo Verelst
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Heureka or hype?

2011-04-06 Thread Thomas Young
I strongly suspect some impartial listening tests would shoot their analogue 
sampling theory down in flames

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Diemo Schwarz
Sent: 06 April 2011 15:10
To: r...@ircam.fr; A discussion list for music-related DSP
Subject: [music-dsp] Heureka or hype?


"Common digital specifications are 24 bit/96 kHz. 24 bits provide enough 
dynamic 
resolution, but 96 kHz is far from being sufficient when it comes to time 
resolution: our hearing capabilities would require sample rates of around 500 
kHz. Therefore, those small signal structures carrying spatial and location 
information are lost in digital mixing ― analog equipment clearly exceeds these 
demands."

 From marketing announcement of SPL's 120 Volts analog mixer
http://spl.info/index.php?id=1550&L=1

...Diemo


-- 
Diemo Schwarz, PhD -- http://diemo.concatenative.net
Real-Time Music Interaction Team -- http://imtr.ircam.fr
IRCAM - Centre Pompidou -- 1, place Igor-Stravinsky, 75004 Paris, France
Phone +33-1-4478-4879 -- Fax +33-1-4478-1540
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] New patent application on uniformly partitioned convolution

2011-01-28 Thread Thomas Young
In principle a patent protects the investment that a company makes in order to 
develop a new technology; companies are unlikely to invest large amounts on 
research if their ideas are going to be copied and their product beaten to 
market by an opportunistic competitor.

In practice, as we all know, things like software patents are just food for 
patent trolls.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Victor Lazzarini
Sent: 28 January 2011 18:07
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] New patent application on uniformly partitioned 
convolution

I would have thought that the whole point of a patent is to make  
money. A scientific paper, IMHO, is the way to move the field forward.

Victor
On 28 Jan 2011, at 18:02, Dave Hoskins wrote:

> The whole point of a Patent is to help engineers move forward, so  
> it's completely legitimate to take a previous invention and add to  
> it, to make a new Patent.
> It's a shame they are not seen like this at all.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] "Factorization" of filter kernels

2011-01-19 Thread Thomas Young
I see, most people want *more* parallelisation in their algorithms, not less ;)

Perhaps you could make a 'guess' at the first filter and then solve the problem 
of finding a second filter which gives you the desired result. Since:

f1*f2=result

...we know f1 & f2 and so can find 'result', and...

f1*fa*fb=result

so, by guessing at fa (for example, just taking the first n coefficients), the
problem is reduced to finding an fb for which the above equation holds.

Since convolution in the time domain is multiplication in the frequency
domain, and assuming we are using fast Fourier transforms, the coefficients of
fb would be calculated by:

IFFT( FFT('result') / FFT(f1*fa) )

I.e. what we would need to multiply FFT(f1*fa) by in the frequency domain
(equivalently, what we would convolve f1*fa with in the time domain) in order
to achieve 'result'.
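
In numpy, that spectral division would look something like the sketch below
(function and variable names are my own; the eps term is there because the
plain division blows up wherever FFT(f1*fa) has near-zero bins):

import numpy as np

def solve_fb(f1, fa, result, eps=1e-12):
    # Find fb such that f1*fa*fb ~= result (* meaning convolve),
    # by division in the frequency domain.
    known = np.convolve(f1, fa)        # the fixed part, f1*fa
    n = len(result)                    # zero-pad everything to this length
    K = np.fft.rfft(known, n)
    R = np.fft.rfft(result, n)
    # Regularised division: roughly R / K, but safe at near-zero bins
    Fb = R * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.fft.irfft(Fb, n)

Only the first len(result) - len(known) + 1 samples of the returned fb should
be significant; the rest should come out near zero if the guess at fa was a
compatible one.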

Feel free to point out any big holes in my logic :I

Tom

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Uli Brueggemann
Sent: 19 January 2011 16:30
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] "Factorization" of filter kernels

Thomas,

I suppose that a decomposition of an n-tap kernel into n zero-padded kernels
would directly lead to the basics of the convolution algorithm :-)
But your proposal also introduces a parallel computation, where the results
have to be offset and added (incl. overlap treatment).
My question is aiming at a serial computation, like f1*f2 = f1*fa*fb with
f2=fa*fb. Again: f1 is given and fa, fb are sought.

Greetings, Uli

On Wed, Jan 19, 2011 at 5:07 PM, Thomas Young
 wrote:
> Hi Uli
>
> I don't know if this will be useful for your situation, but a simple method 
> for decomposing your kernel is to simply chop it in two. So for a kernel:
>
> 1 2 3 4 5 6 7 8
>
> You can decompose it into two zero padded kernels:
>
> 1 2 3 4 0 0 0 0
>
> 0 0 0 0 5 6 7 8
>
> And sum the results of convolving both of these kernels with your signal to 
> achieve the same effect as convolving with the original kernel. You can do 
> this because convolution is distributive over addition, i.e.
>
> f1*(f2+f3) = f1*f2 + f1*f3
>
> For signals f1, f2 & f3 (* meaning convolve rather than multiply).
>
> Obviously all those zeros do not need to be evaluated, meaning the problem
> is changed to one of offsetting your convolution algorithm (which may or may
> not be practical in your situation), but this does allow you to use half the
> number of coefficients.
>
> Thomas Young
>
> Core Technology Programmer
> Rebellion Developments LTD
>
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu 
> [mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Uli Brueggemann
> Sent: 19 January 2011 14:56
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] "Factorization" of filter kernels
>
> Hi,
>
> thanks for the answer so far.
> A polyphase filter is a nice idea but it does not answer the problem.
> The signal has to be demultiplexed (decimated), the different streams
> have to be filtered, the results must be added to get the final output
> signal.
>
> My question has a different target.
> Imagine you have two systems (e.g. some convolution boards with a DSP).
> Each system can just run a 512 tap filter. Now I would like to connect the
> two systems in series to mimic a desired 1024 tap filter. The 1024 tap
> kernel is known and shall be generated by the two 512 tap filters.
> So what's the best way to decompose the known kernel into two parts? Is
> there any method described somewhere?
>
> Uli
>
>
> 2011/1/19 João Felipe Santos :
>> Hello,
>>
>> a technique that allows something similar to what you are suggesting
>> is to use polyphase filters. The difference is that you will not
>> process contiguous vectors, but (for a 2-phase decomposition example)
>> process the even samples with one stage of the filter and the odd
>> samples with another stage. It is generally used for multirate filter
>> design, but it makes sense to use this kind of decomposition if you
>> can process the stages in parallel... or at least that is what I think
>> makes sense.
>>
>> You can search for references to this technique here [1] and here [2].
>> A full section on how to perform the decomposition is presented on
>> "Digital Signal Processing: a Computer-based approach" by Sanjit K.
>> Mitra.
>>
>> [1] 
>> http://www.ws.binghamton.edu/fowler/fowler%20personal%20page/EE521_files/IV-05%20Polyphase%20FIlters%20Revised.pdf
>> [2] https://ccrma.stanford.edu/~jos/sasp/Multirate_Filter_Banks.html
>>
>> --
>> João Felipe Santos
>>
>

Re: [music-dsp] "Factorization" of filter kernels

2011-01-19 Thread Thomas Young
Hi Uli

I don't know if this will be useful for your situation, but a simple method for 
decomposing your kernel is to simply chop it in two. So for a kernel:

1 2 3 4 5 6 7 8

You can decompose it into two zero padded kernels:

1 2 3 4 0 0 0 0 

0 0 0 0 5 6 7 8

And sum the results of convolving both of these kernels with your signal to 
achieve the same effect as convolving with the original kernel. You can do this 
because convolution is distributive over addition, i.e.

f1*(f2+f3) = f1*f2 + f1*f3

For signals f1, f2 & f3 (* meaning convolve rather than multiply).

Obviously all those zeros do not need to be evaluated, meaning the problem is
changed to one of offsetting your convolution algorithm (which may or may not
be practical in your situation), but this does allow you to use half the
number of coefficients.
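
For example, a throwaway numpy check of the split (the test signal and names
are mine):

import numpy as np

x = np.random.randn(64)            # arbitrary test signal
h = np.arange(1.0, 9.0)            # the kernel 1 2 3 4 5 6 7 8
h_lo = np.r_[h[:4], np.zeros(4)]   # 1 2 3 4 0 0 0 0
h_hi = np.r_[np.zeros(4), h[4:]]   # 0 0 0 0 5 6 7 8
full  = np.convolve(x, h)
split = np.convolve(x, h_lo) + np.convolve(x, h_hi)
assert np.allclose(full, split)    # f1*(f2+f3) = f1*f2 + f1*f3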

Thomas Young

Core Technology Programmer
Rebellion Developments LTD

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Uli Brueggemann
Sent: 19 January 2011 14:56
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] "Factorization" of filter kernels

Hi,

thanks for the answer so far.
A polyphase filter is a nice idea but it does not answer the problem.
The signal has to be demultiplexed (decimated), the different streams
have to be filtered, the results must be added to get the final output
signal.

My question has a different target.
Imagine you have two systems (e.g. some convolution boards with a DSP).
Each system can just run a 512 tap filter. Now I would like to connect the
two systems in series to mimic a desired 1024 tap filter. The 1024 tap
kernel is known and shall be generated by the two 512 tap filters.
So what's the best way to decompose the known kernel into two parts? Is
there any method described somewhere?

Uli


2011/1/19 João Felipe Santos :
> Hello,
>
> a technique that allows something similar to what you are suggesting
> is to use polyphase filters. The difference is that you will not
> process contiguous vectors, but (for a 2-phase decomposition example)
> process the even samples with one stage of the filter and the odd
> samples with another stage. It is generally used for multirate filter
> design, but it makes sense to use this kind of decomposition if you
> can process the stages in parallel... or at least that is what I think
> makes sense.
>
> You can search for references to this technique here [1] and here [2].
> A full section on how to perform the decomposition is presented on
> "Digital Signal Processing: a Computer-based approach" by Sanjit K.
> Mitra.
>
> [1] 
> http://www.ws.binghamton.edu/fowler/fowler%20personal%20page/EE521_files/IV-05%20Polyphase%20FIlters%20Revised.pdf
> [2] https://ccrma.stanford.edu/~jos/sasp/Multirate_Filter_Banks.html
>
> --
> João Felipe Santos
>
>
>
> On Tue, Jan 18, 2011 at 5:46 AM, Uli Brueggemann
>  wrote:
>> Hi,
>>
>> a convolution of two vectors of lengths n and m gives a result of length
>> n+m-1. So e.g. two vectors of length 512 result in a vector of length
>> 1023.
>>
>> Now let's assume we have a vector (or signal or filter kernel) of size
>> 1024, where the last tap is 0.
>> How do we decompose it into two vectors of half the length? The smaller
>> vectors can have any arbitrary contents, but their convolution must be
>> equal to the original vector.
>>
>> It would even be interesting to "factorize" a given kernel into n
>> smaller kernels. Again the smaller kernels may have any arbitrary but
>> sensible contents; they can be identical but this is not a must.
>>
>> Is there a good method to carry out the kernel decomposition? (e.g.
>> like calculating n identical factors x of a number y by x =
>> Exp(Log(y)/n) with x^n = x*x*...*x = y)
>>
>> Uli
>> --
>> dupswapdrop -- the music-dsp mailing list and website:
>> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
>> links
>> http://music.columbia.edu/cmc/music-dsp
>> http://music.columbia.edu/mailman/listinfo/music-dsp
>>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews, dsp 
> links
> http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp
>
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] Approaches to multiple band EQ

2011-01-11 Thread Thomas Young
Hi all

I need to develop a real-time multiple band EQ DSP effect, but I am unsure
about how to approach it.

My preferred approach would be FFT -> modify spectrum -> IFFT; however, I
think that will end up being too slow (or at least using far more processing
power than I would like). The only other approach I can think of is a number
of IIR band-stop filters in series; would this be practical? I am concerned
that there would be some negative interaction between the filters, or some
unpredictable results due to the different (non-linear) phase responses of
the filters. It's important that the DSP introduces minimal distortion and is
acoustically transparent when 'flat'.
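
To make the series-of-IIRs idea concrete, here is roughly what I have in mind
(a minimal sketch using the well-known RBJ "Audio EQ Cookbook" peaking
filter; the band layout and function names are my own). One nice property
given the transparency requirement: at 0 dB gain the numerator and
denominator coefficients coincide, so each section is exactly an identity.

import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs):
    # RBJ cookbook peaking EQ; exactly transparent when gain_db == 0
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def run_eq(x, fs, bands):  # bands = [(f0_hz, gain_db, q), ...]
    # Run the peaking sections in series; their responses multiply
    for f0, gain_db, q in bands:
        b, a = peaking_biquad(f0, gain_db, q, fs)
        x = lfilter(b, a, x)
    return x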

Information about any other common approaches to multiple band EQs would be
helpful too.

Thanks

Thomas Young

Core Technology Programmer
Rebellion Developments LTD

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] What's the best way to render waveforms with accuracy

2010-12-21 Thread Thomas Young
Hi Balletrino (and everyone, first post!)

There are a few tricks to waveform rendering which I have used:

1) Thread out the loading

Create at least one thread for each file you are loading, or for each source 
you are sampling. This will make the biggest difference to the speed at which 
you can parse and display the PCM data. This also allows you to perform 
background loading, in the event that you want to load a rough approximation of 
the waveform first and increase the accuracy as you 'zoom in'.

On a modern PC I have found it is possible to parse hundreds of megabytes of 
PCM data in a few seconds, provided you just want something rough for an 
initial render.

2) Keep everything in memory if possible

Raw PCM data can be large, but if you are developing on a PC you should have 
plenty of main memory to use. If you have everything in memory then the only 
problem you have is creating a decent re-sampling algorithm to display your 
large number of samples in a relatively small number of pixels.

> Rendering the waveform of a 5 min song at 600 or 1000 pixels wide is not
> a problem, since the precision doesn't have to be that huge. But the main
> problem is when you come to zoom in; then it has to be really precise.

Assuming you have the whole waveform in memory (see above; if not, just load
it in at the point the zoom happens), it helps to use two rendering
algorithms for this problem: one for when there are more pixels than samples
shown, and one for when there are more samples shown than pixels. When you
have more samples than pixels, each horizontal pixel column represents
multiple samples, so you simply need a resampling algorithm to determine the
max and min for that column. When you have more pixels than samples, you need
to calculate where each sample should fall by interpolating between your min
and max time, and render lines between the samples.
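
The max/min reduction is only a few lines; a sketch (names are mine, and it
assumes a 1-D numpy array with len(samples) >= width, i.e. the
more-samples-than-pixels case):

import numpy as np

def minmax_columns(samples, width):
    # One (min, max) pair per horizontal pixel column
    edges = np.linspace(0, len(samples), width + 1).astype(int)
    return [(samples[a:b].min(), samples[a:b].max())
            for a, b in zip(edges[:-1], edges[1:])]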

Hope that helps

Thomas Young

Rebellion Developments LTD | www.rebellion.co.uk


-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On Behalf Of Valentin Ballestrino
Sent: 21 December 2010 13:09
To: A discussion list for music-related DSP
Subject: [music-dsp] What's the best way to render waveforms with accuracy

Hi everybody,

I'm planning to develop a simple DAW, without much signal processing, but I
came across the problem of waveform rendering.
I know that there are some methods to achieve it efficiently, but I can't
see how it could be done without permanently processing big files containing
the data.

As a beginner in DSP I can't figure out how to preserve the accuracy of the
waveforms without keeping all the samples.
Rendering the waveform of a 5 min song at 600 or 1000 pixels wide is not a
problem, since the precision doesn't have to be that huge. But the main
problem is when you come to zoom in; then it has to be really precise.

When I look at the main DAWs, I can see that the process of generating the
waveform takes quite a lot of time when you've got several tracks.
Maybe the key is in the rendering engine, maybe in the signal
pre-processing; I don't know at all.

I hope that my question doesn't seem too off-topic.

Thanks a lot


V.Balletrino
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp