> imagine it's two-dimensional vector synthesis like a Prophet VS. one
> dimension is some other timbre parameter with a minimum and a maximum
> (no wrap around).
>
> so, in the other dimension, imagine having say, 6 identical wavetables
> except the 2nd harmonic is offset by 60 degrees in phase
Thanks for your friendly comments, Robert!
First of all, I probably need to clarify that I'm not trying to make a
traditional synth application out of these wavetables, but I will use
some of them in my own compositions in one way or another.
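Robert's suggestion is easy to prototype. Here is a minimal numpy sketch (the function name and table sizes are my own, not from the thread) that builds six otherwise identical wavetables whose 2nd harmonic is advanced by 60 degrees per table:

```python
import numpy as np

def make_tables(n_tables=6, table_len=2048, n_harmonics=8):
    """Wavetables that are identical except for the phase of the
    2nd harmonic, which advances 60 degrees from one table to the next."""
    t = np.arange(table_len) / table_len              # one cycle, 0..1
    tables = []
    for k in range(n_tables):
        phase2 = k * np.pi / 3                        # 60-degree steps
        w = np.zeros(table_len)
        for h in range(1, n_harmonics + 1):
            phi = phase2 if h == 2 else 0.0
            w += np.sin(2 * np.pi * h * t + phi) / h  # saw-like 1/h rolloff
        tables.append(w)
    return np.array(tables)

tables = make_tables()
```

Scanning across the table index then changes only relative phase, never the magnitude spectrum, which is the point of the experiment.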
>
> 1. "The wavetables are written to an array expect
On 2018-03-10 at 16:33, Frank Sheeran wrote:
>
> What I notice in so many of the existing tools in this niche is that
> they all let you "draw your own waveform!!" as if that's something
> you'd actually want to do. It always seemed obvious to me that at
> least drawing the harmonic spect
Yes, there are lots of interesting things that can be done with
frequency shifting. Feedback suppression in a PA system by frequency
shifting was suggested by Manfred Schroeder a long time ago. I have
occasionally found it to be useful to broaden a mono signal by feeding
it through a hilbert transf
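The mono-broadening trick can be sketched in numpy (no scipy; the analytic signal is built directly with an FFT, and the function names are mine): the right channel becomes a 90-degree shifted copy of the left.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT: zero the negative frequencies,
    double the positive ones (same recipe as scipy.signal.hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def broaden(x):
    """Mono to pseudo-stereo: left is the input, right is its
    Hilbert transform (a 90-degree phase-shifted copy)."""
    z = analytic(x)
    return np.real(z), np.imag(z)

fs = 8000
t = np.arange(fs) / fs
left, right = broaden(np.sin(2 * np.pi * 440 * t))
```

The same analytic signal, multiplied by exp(i*2*pi*df*t) before taking the real part, gives the single-sideband frequency shifter that Schroeder's feedback-suppression idea relies on.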
On February 22, 2016 at 12:13:39 pm +01:00, Corey K wrote:
> I don't have any links on the use of autocorrelation in this context, and I
> don't even know if it would work. My basic thought, however, was that the
> autocorrelation of white noise should be zero at all time lags other than 0.
zero crossing rate. I can think of applications such as perceptual research
where their differences matter a great deal, and other applications where you
would just pick the descriptor that is most mathematically elegant or easy to
implement.
Risto Holopainen
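Corey's premise is easy to check numerically. A small numpy sketch (names mine): for white noise the normalized autocorrelation is 1 at lag 0 and, for a long enough signal, close to 0 everywhere else.

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized autocorrelation r[k] for lags k = 0..max_lag."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return r / r[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)
r = autocorr(noise, 20)
```

The nonzero lags are not exactly zero, only O(1/sqrt(N)), which is what makes the descriptor noisy for short frames.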
Another simple way, if you do an FFT, would be to accumulate the amplitude of
successive bins, counting from 0 Hz upwards as well as from f_s/2 downwards,
stopping at the bin where the summed amplitudes match.
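The bin-accumulation idea fits in a few lines of numpy (a cumulative sum from 0 Hz up is equivalent to meeting a second sum running down from f_s/2, and the function name is mine):

```python
import numpy as np

def spectral_median(x, fs):
    """Frequency where the amplitude accumulated from 0 Hz upward
    matches the amplitude accumulated from fs/2 downward, i.e. the
    bin that splits the summed |FFT| magnitude in half."""
    mag = np.abs(np.fft.rfft(x))
    c = np.cumsum(mag)
    k = np.searchsorted(c, c[-1] / 2)   # first bin where the sums meet
    return k * fs / len(x)

fs = 8000
t = np.arange(fs) / fs
f_med = spectral_median(np.sin(2 * np.pi * 1000 * t), fs)
```

For a pure 1 kHz sine the crossing point lands on the 1 kHz bin, as expected.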
And welcome to the list!
Risto Holopainen
l entropy whereas white
> noise has maximal entropy, so it's useful for distinguishing pitched and
> noisy signals.
Risto Holopainen
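The entropy measure quoted above can be sketched as a normalized spectral entropy (a common formulation, not necessarily the exact one discussed): near 0 for a single spectral line, near 1 for a flat spectrum.

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, divided by
    log(number of bins) so the result lies in [0, 1]."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(len(p)))

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(1).standard_normal(fs)
```

A sine comes out close to 0 and white noise close to 1, so a simple threshold on this value separates pitched frames from noisy ones.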
___
music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
n a
> computer crash this summer!
> And, as Andy suggests, if music broadly defined is 'organized sound',
> then there might always be a way of converting some area of
> mathematics into something musical, that someone might appreciate as
> art.
>
> M.
>
tected if you add a
little bit of noise, more or less the same way as happens with dither. I
haven't seen this paper by Adams, but maybe it relates to stochastic resonance
too.
Risto Holopainen
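The dither analogy can be shown with a toy model of stochastic resonance (all parameters here are made up for the demo): a sub-threshold sine never trips a hard 1-bit detector on its own, but with added noise the tone becomes visible in the detector's output.

```python
import numpy as np

n = 20_000
t = np.arange(n)
tone = np.sin(2 * np.pi * t / 100)
s = 0.3 * tone                      # sub-threshold signal (threshold is 1.0)

def detect(x, threshold=1.0):
    """Hard 1-bit detector: fires only above the threshold."""
    return (x > threshold).astype(float)

def tone_level(y):
    """How strongly the detector output correlates with the hidden tone."""
    y = y - y.mean()
    return abs(np.dot(y, tone)) / len(y)

rng = np.random.default_rng(0)
dry = tone_level(detect(s))                                  # never fires
dithered = tone_level(detect(s + 0.5 * rng.standard_normal(n)))
```

Here dry is exactly zero while dithered is clearly positive: the noise carries the sub-threshold signal across the detector, the same mechanism dither exploits.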
t too.
Risto Holopainen
On 10 July 2014, Rohit Agarwal wrote:
>
>
>
> If I was to model music in general, it would be a sequence of 2 type
> segments, non-stationary transitions would be the first and quasi
> stationary tones the second type. This would get more involved with
once
complaining about repetition in the noise generator of Csound. I think they
used a random generator with period 2^16 in those days, but it's been improved
now.
Risto Holopainen
://ristoid.net/sndex/softsync_amount_sweep.wav
When it comes to programming hard sync, I would use oversampling. I'm not
saying that you should; I'm just lazy enough to do it the easy way.
Risto Holopainen
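The lazy oversampled approach might look like this in numpy (a naive sketch; the parameters and the crude block-average decimator are mine, and a real implementation would decimate with a proper lowpass filter):

```python
import numpy as np

def hardsync_saw(f_master, f_slave, fs, n, oversample=8):
    """Hard-synced sawtooth: run a naive slave oscillator at
    oversample * fs, reset its phase whenever the master wraps,
    then decimate by block averaging."""
    fso = fs * oversample
    m = n * oversample
    master = (f_master * np.arange(m) / fso) % 1.0
    slave = np.empty(m)
    phase, prev = 0.0, 0.0
    for i in range(m):
        if master[i] < prev:            # master wrapped: sync the slave
            phase = 0.0
        prev = master[i]
        slave[i] = 2.0 * phase - 1.0    # saw in [-1, 1)
        phase = (phase + f_slave / fso) % 1.0
    return slave.reshape(n, oversample).mean(axis=1)

y = hardsync_saw(441.0, 1234.0, 44100, 1000)
```

The output repeats at the master frequency regardless of the slave pitch, which is the hard-sync sound.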
>
> I have an almost embarrassing question on this subject. Having led far
doing what I am used to doing (I'm
using csound and C++).
Have you had a look at other text-based synthesis environments such as ChucK or
SuperCollider? I think the latter at least has a thriving user community.
Risto Holopainen
hear from anyone who is better informed on this.
Risto Holopainen
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http:/
I haven't worked much on this, so just some speculations.
> i'd sorta like to keep this in the time-domain if possible so that it
> might be able to operate real-time, even with a small delay. but the
> delay in doing an STFT seems too long.
A delay the size of a few periods of the signal seems
Hello Marcelo,
One thing that is definitely worth including is an example of the illusion
of being able to distinguish two slightly different sounds (or even
identical ones). I have often fooled myself, and seen
others suffer from this mistake. Although this is not an auditory
i
been fully explained yet.
Risto Holopainen
(*) Legge and Fletcher: Nonlinearity, chaos, and the sound of shallow
gongs. JASA 86(6), 1989.
> Hi Everyone,
>
> I have a question which in a broad sense relates to physical modelling
> and acoustics:
>
> Under what circumst
another.
Risto Holopainen
Department of Musicology
University of Oslo
I must say I'm happy to have learnt the rudiments of Csound in 1996 and
not today. The problem is not so much its shortcomings (I partly agree
with Ross on them) but its "longcoming" list of opcodes and ever-expanding
list of new features. Searching the manual for the correct name for a
one-pole f
t arXiv by Heikkilä that explains it.
Risto Holopainen
Department of Musicology
University of Oslo
> It's nice to see some familiar names in Csound's defense.
>
> Here's something I've considered since learning C: has anyone
> (attempted to) compose music in straight
g to get involved in
that -- good luck to anyone who does.
Risto Holopainen
Department of Musicology
University of Oslo
> I do not think that Wikipedia is a bad idea. The problem is that
> everybody can contribute and people tend to get into arguments and
> editing wars like they
So, what you have mistakenly implemented is probably feedback FM with pure
delay, or something like
x[n] = sin(phi + b * x[n - D])
phi += w,
where D is your buffer size. Although it usually doesn't sound very good,
I nevertheless find it interesting by being a discrete time version of a
delay differe
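That recursion is short enough to try directly; a numpy sketch (frequency, feedback amount and buffer size are arbitrary demo values):

```python
import numpy as np

def feedback_fm(freq, b, D, fs, n):
    """x[n] = sin(phi + b * x[n - D]), phi += w, with the feedback
    taken D samples late (a pure buffer-sized delay)."""
    w = 2 * np.pi * freq / fs
    x = np.zeros(n + D)                 # D zeros of history
    phi = 0.0
    for i in range(D, n + D):
        x[i] = np.sin(phi + b * x[i - D])
        phi += w
    return x[D:]

y = feedback_fm(220.0, 1.5, 64, 44100, 8192)
```

With D = 1 this reduces to ordinary feedback FM; larger D gives the delayed variant described above.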
enjoy participating!
You will find the Song Contest here:
https://nettskjema.uio.no/answer.html?fid=46502&lang=en
It's open until April 30.
Risto Holopainen
Department of Musicology
University of Oslo