I've thought about it too. When you compose notes and then apply instruments,
you get sound rather than notes of sound; or even if it is notes of sound, it
is nice to be able to convert it to sound and reuse it in another instrument.
Here the two-layer model of orchestra + score is restricting: sometimes you
need to convert notes of sound into sound and use it in another instrument.
I've installed Euterpea and there is a function in module
"Euterpea.Audio.Render"
renderSF :: (Clock p, Performance a, AudioSample b)
=> Music a -> InstrMap (Signal p () b) -> (Double, Signal p () b)
which can do it.
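Something along these lines should then let you render a phrase to a signal
and treat that signal as raw material again (only a rough sketch against the
signature above; the sine-table instrument, the name simpleInstr and the use
of outFile are just placeholders, and the exact module layout may differ
between Euterpea versions):

  {-# LANGUAGE Arrows #-}
  import Euterpea
  import Euterpea.Audio.Render (renderSF)
  import Control.Arrow (returnA)

  -- A plain sine table and a trivial instrument built from it.
  sineTable :: Table
  sineTable = tableSinesN 4096 [1]

  simpleInstr :: InstrumentName
  simpleInstr = CustomInstrument "SimpleSine"

  sineInstr :: Instr (Signal AudRate () Double)
  sineInstr _dur ap _vol _params = proc _ -> do
    y <- osc sineTable 0 -< apToHz ap   -- oscillator at the note's pitch
    returnA -< y

  instrMap :: InstrMap (Signal AudRate () Double)
  instrMap = [(simpleInstr, sineInstr)]

  -- Render a phrase once; the resulting signal can then be written to a
  -- file or fed into further signal processing.
  main :: IO ()
  main =
    let phrase        = instrument simpleInstr (c 4 qn :+: e 4 qn :+: g 4 hn)
        (dur, signal) = renderSF phrase instrMap
    in  outFile "phrase.wav" dur signal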
2011/2/18 Evan Laforge
> > Then I tried Modula-3 on Linux. When I later got to know Haskell, I found
> > that I had reinvented lazy evaluation for Assampler. Consequently I moved
> > to the original. I wanted to integrate music composition and signal
> > processing. I wanted programming features for music arrangement, since the
> > many trackers known from Amiga did not offer much structuring and thus
> > required a lot of copy and paste. I wanted programming features also for
> > signal processing, since the interactive graph editing became cumbersome
> > for repetitive signal algorithms like vocoders, although I already added
> > some support for them to Assampler.
>
> Coincidentally, my ambition was originally also to integrate music
> composition and signal processing. That is, I would like to be able to
> write both 'echo phrase' to play phrase with a note-by-note echo and
> 'reverb phrase' which would play phrase but apply a sample level
> reverb to it. With most current systems you have to fiddle around
> with setting up a separate reverb, setting up its control inputs,
> manually hooking your score language's knobs up to the reverb's knobs,
> etc. And then there's 'retrograde phrase' vs. 'reverse phrase' to
> apply music-level and audio-level reverse respectively... most
> existing systems force you to do an awkward two step process where you
> record the output of phrase and then re-input it as a sample, then
> reverse it.
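
Just to make the two levels explicit, a toy model of what you describe might
look like this (all names are made up, the synthesis is a bare sine and the
"reverb" only a feedback delay, so nothing close to a real implementation):

  type Note  = (Double, Double)   -- (frequency in Hz, duration in seconds)
  type Audio = [Double]           -- raw samples

  sr :: Double
  sr = 44100

  -- Score level: transforms on notes.
  retrograde :: [Note] -> [Note]
  retrograde = reverse

  echo :: [Note] -> [Note]
  echo phrase = phrase ++ phrase          -- naive phrase-level echo

  -- The bridge from notes to samples (naive sine synthesis).
  render :: [Note] -> Audio
  render = concatMap note
    where
      note (freq, secs) =
        [ sin (2 * pi * freq * t / sr) | t <- [0 .. secs * sr - 1] ]

  -- Audio level: transforms on samples.
  reverseSig :: Audio -> Audio
  reverseSig = reverse

  reverb :: Audio -> Audio                -- crude feedback delay
  reverb xs = out
    where
      d   = round (0.05 * sr)
      out = zipWith (+) xs (replicate d 0 ++ map (* 0.4) out)

  -- With render as an ordinary function, both levels compose directly:
  --   reverb (render (echo phrase))   vs.   render (retrograde phrase)

The point is that render has to be an ordinary function of the language, not
a separate offline rendering step.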
>
> It's not just a pure academic interest either... it's musically useful
> to e.g. tune a comb filter to a musical pitch, or apply a special kind
> of reverb to a single note, and it's a hassle to manually set up all
> the plumbing to get that to happen.
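
The comb filter case at least is mostly arithmetic: tuning it to a pitch just
means deriving the delay length from the fundamental, something like this
(toy code again; 0.7 is an arbitrary feedback gain):

  -- delay in samples = sample rate / fundamental frequency
  combAt :: Double -> Double -> [Double] -> [Double]
  combAt rate freq xs = out
    where
      d   = max 1 (round (rate / freq))   -- e.g. 44100 / 440 ~ 100 samples for A4
      out = zipWith (+) xs (replicate d 0 ++ map (* 0.7) out)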
>
> To my eyes the problem is in the score vs. orchestra division that
> starts with music-n languages like csound and goes all the way through
> midi sequencers. Nyquist is the only language I know of that tried to
> tackle that.
>
> However, I've basically given up on that for the moment in favor of
> just generating MIDI. Just composition is already really complicated
> without throwing signal processing into the mix. So I wish you the best
> of luck on the signal side; maybe when things on both sides mature I
> can steal^H^H^H integrate some of that work and finally have the
> top-to-bottom solution I dreamed of...
>
> Coincidentally, I also got my start on the Amiga... perhaps early
> exposure to trackers led to my dissatisfaction with MIDI and the
> typical MIDI sequencer :) My current project winds up looking vaguely
> like a programmable tracker.
>
> > liveliness. The typical memory leak works as follows:
> >    let (prefix, suffix) = splitAt largeNumber xs
> >    in  processA prefix ++ processB suffix
> > Although this can be perfectly processed in a streaming manner, sometimes
> > GHC does not manage to release the pointer to the beginning of prefix and
> > thus prefix is kept until the processing of suffix starts. I wonder whether
>
> Just out of curiosity, how do you find out when this is happening?
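
For what it's worth, the pattern is easy to isolate in a small test program
(processA and processB below are just placeholders, and the list sizes are
arbitrary). Compiled with -rtsopts and run with "+RTS -s", the maximum
residency it reports grows with largeNumber when the prefix is retained and
stays small when the processing streams; a heap profile shows the same thing
in more detail.

  -- Placeholders standing in for real consumers of the two halves.
  processA, processB :: [Int] -> [Int]
  processA = map (* 2)
  processB = map (+ 1)

  main :: IO ()
  main =
    let xs               = [1 .. 20000000] :: [Int]
        (prefix, suffix) = splitAt 10000000 xs
        -- Exactly the shape quoted above: consume prefix first, suffix later.
        result           = processA prefix ++ processB suffix
    in  print (length result)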
___
haskell-art mailing list
haskell-art@lurk.org
http://lists.lurk.org/mailman/listinfo/haskell-art