[haskell-art] Live Performing With Haskell
Hi all,

I'd like to get into live coding with Haskell. Can you recommend a good environment? I run Ubuntu Linux. I've looked into Haskore with SuperCollider, but it doesn't seem suited to live performance the way ChucK or Impromptu are. Is there something like this in the Haskell space?

Thanks!
-deech
Re: [haskell-art] data structure for music (Was: Haskell art?)
[ moving this over from the other thread ]

> For other examples a hierarchical structure is exactly the right thing:

Right, I was assuming a hierarchical score for the reasons you give, among others. I think I'm agreeing with you :)

>> I'm not totally convinced the integration is valuable, but seeing as
>> almost all other systems don't have it, it seems interesting to
>> experiment with one that does and see where it leads. Maybe another
>> way of putting it is that different interpretations of abstract
>> instructions like legato are not necessarily always along instrument
>> (piano vs. violin) lines: it may vary from phrase to phrase, or
>> section to section.
>
> I think the hierarchical music structure has its value in being able to
> be converted to a lot of back-ends. The Performance structure is good
> for MIDI and Csound. The hierarchical structure is better for
> SuperCollider and pure Haskell signal processing, because of effects
> such as filter sweeps, speed variation at the signal level, or
> reversing parts of the music. The hierarchical structure can be simply
> converted to Performance. I would have thought that the hierarchical
> structure is also better for music notation, but the actual
> implementations show that it is not.

I'm still making a go at hierarchical; though I imagine music will generally be flatter than most code, there is still structure in there. What I was saying about phrase by phrase or section by section is certainly a hierarchical concept. However, the parameterization works differently. I think it is more likely to be specific and ad hoc, which doesn't work with the programming techniques of making the changing parts into function arguments, or some other "template with holes" kind of setup.

For example, something repeats 3 times, but each time has some differences in a few places. The first time has a transition from the previous section, the middle one has an accent in the middle, the last one has a transition to the next section. Or maybe the differences are textural, with a different figuration, or just in "performance", such as a slower arpeggio. Structurally each repeat is the same, because the differences are textural. But if the texture is through-composed in the same way as the structure, how do you capture the structure?

Separating a Player from the Score is one way, but potentially clumsy unless you have some clever way to communicate with the Player in the score. And it's probably limited to the "arpeggio speed" level of parameterization, but there's really a smooth continuum of variation all the way until the structural similarity is so abstract it may as well be a figment of the composer's imagination (but expressing that is easy: write a comment).
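[ For concreteness, here is roughly what the "template with holes" style being questioned above looks like. The Music type and its constructors are hypothetical placeholders, not from Haskore or any real library; it is only a sketch of the parameterization, not a usable score type. ]

-- Hypothetical, minimal score type used only to illustrate the
-- "template with holes" parameterization discussed above.
data Music
  = Note String       -- a named note or figure
  | Rest
  | Seq [Music]       -- play in sequence
  deriving Show

-- The repeated figure, with holes for the parts that change per repeat.
figure :: Music -> Music -> Music -> Music
figure intro middle ending = Seq [intro, Note "c", middle, Note "d", ending]

-- Three structurally identical repeats, each filling the holes
-- differently: a transition in, an accent in the middle, a transition out.
section :: Music
section = Seq
  [ figure (Note "transition-from-previous") (Note "e")        Rest
  , figure Rest                              (Note "e-accent") Rest
  , figure Rest                              (Note "e")        (Note "transition-to-next")
  ]

This works as long as the differences fit neatly into the holes; as soon as the variation becomes ad hoc, the argument above is that this style stops scaling.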
Re: [haskell-art] data structure for music (Was: Haskell art?)
On Fri, Mar 11, 2011 at 10:56 AM, Anton Kholomiov wrote:
> I think the deep reverse example doesn't break the music-signal barrier.
> For the music structure you can make a function
>
> reverseMusic :: Music a -> Music a
>
> And if you are going to reverse signals you are going to write a
> function for signals
>
> reverseSignal :: Signal -> Signal
>
> and then if 'Music' is a 'Functor'
>
> reverseDeep :: Music Signal -> Music Signal
> reverseDeep = fmap reverseSignal . reverseMusic

Well, it's hard to say without knowing how Music and Signal combine. If I assume there is some "instrument" abstraction that marks the boundary, then you can use this to express a score with reversed instruments. What if you want to reverse a phrase? And how would this work with reverb, which is usually applied to phrases or entire scores?
Re: [haskell-art] data structure for music (Was: Haskell art?)
On Fri, Mar 11, 2011 at 8:47 AM, Henning Thielemann wrote:
>
> On Fri, 11 Mar 2011, Stephen Tetley wrote:
>
>> I still don't understand what Evan's reverse instrument models.
>>
>> Is it reversing the sound of a note so it is some function wrapping a
>> unit generator?
>>
>> Or is it reversing a sequence of notes according to pitch?
>
> I think he means reversing the signal generated by a part of the music,
> since this is a classical example of breaking the music-signal barrier.

Oh, sorry, I assumed your question was answered. In Nyquist, which doesn't have a score/orchestra division, you can write something like this:

(defun inst (pitch) (osc ... pitch)) ; osc is an oscillator unit generator
(defun inst2 (pitch) (reverse (inst pitch)))
(defun phrase () (seq (inst a) (inst2 b) (inst c)))
(defun score () (seq (phrase) (reverb (phrase))))
(play (transpose 5 (score)))

Notice that you can interleave signal operations (reverse and reverb) with musical operations (e.g. transpose, which does not modify the signal but winds up asking the oscillators to produce a different pitch).

In a divided world, you typically build a static signal graph and then control it afterwards, so signal-level operations must be strictly divided. This means there is a fixed order for the signal operations (e.g. many instruments -> flange -> reverb), and if you want to do it differently (e.g. reverb only on certain notes, or reverb -> flange for a certain phrase), then you are out of luck unless you make a whole new signal graph, one for every possible permutation in the score, and then use a different instrument for that section. Since polyphonic instruments must have a dynamic signal graph for new notes, there is usually a special case: some kind of poly-mix ugen that will duplicate the graph above it for each incoming note, but that really is a special-case hack and otherwise the graph is static. And of course acausal operations like reverse are available at the score level (reverse notes) but not on signals, since the orchestra has a single global implicit "now".

In contrast, programming languages have a dynamically changing call graph. In Nyquist there is no difference between a unit generator, a note, an instrument, a phrase, or whatever you choose to define, since they are all functions that generate sound. This means that synthesized signals can be used freely in the score: not just reversed notes, but instrument envelopes need not be hardcoded into the instrument, or a low pass filter can be used to smooth a tempo or pitch curve.

Separate score and orchestra is how acoustic music works (a roomful of musicians is very much a static call graph) and computer music has adopted that metaphor. Back when computers were slow it made a lot of sense to put the expensive signal computation on a hard-to-program but fast DSP, and the sequencer on an easier-to-program but slow CPU, but I think those days have been mostly over for a while now. Yet the metaphor persists, perhaps because that's what people are used to. Well, it's also true that there is significant value in a score as a data structure, which can be transformed at that high level. There's definitely a tradeoff between code and data.
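[ For comparison, here is a rough Haskell rendering of the same idea, under the toy assumption that a signal is just a list of samples and that notes, phrases and effects are all plain functions on signals. None of these names come from an existing library; it only shows how signal-level and score-level operations interleave when everything is a function. ]

-- Toy model: Signal is a list of samples; every musical object is a
-- function that eventually produces one, so reverse/reverb mix freely
-- with score-level ideas like transposition.
type Signal = [Double]

osc :: Double -> Signal                 -- stand-in oscillator, not physically meaningful
osc pitch = [sin (pitch * t) | t <- [0, 0.01 .. 1]]

inst, inst2 :: Double -> Signal
inst  pitch = osc pitch
inst2 pitch = reverse (inst pitch)      -- the "reversed note" from the Nyquist example

seqS :: [Signal] -> Signal              -- play one after another
seqS = concat

reverb :: Signal -> Signal              -- fake reverb: mix in a scaled, delayed copy
reverb xs = zipWith (+) xs (0 : map (* 0.5) xs)

phrase :: Double -> Signal              -- transposition reaches the oscillators as a pitch offset
phrase trans = seqS [inst (440 + trans), inst2 (660 + trans), inst (550 + trans)]

score :: Double -> Signal
score trans = seqS [phrase trans, reverb (phrase trans)]

main :: IO ()
main = print (take 10 (score 5))        -- "play" the transposed score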
Re: [haskell-art] data structure for music (Was: Haskell art?)
I think the deep reverse example doesn't break the music-signal barrier. For the music structure you can make a function

reverseMusic :: Music a -> Music a

And if you are going to reverse signals you are going to write a function for signals

reverseSignal :: Signal -> Signal

and then if 'Music' is a 'Functor'

reverseDeep :: Music Signal -> Music Signal
reverseDeep = fmap reverseSignal . reverseMusic

Anton

2011/3/11 Henning Thielemann
>
> On Fri, 11 Mar 2011, Stephen Tetley wrote:
>
>> I still don't understand what Evan's reverse instrument models.
>>
>> Is it reversing the sound of a note so it is some function wrapping a
>> unit generator?
>>
>> Or is it reversing a sequence of notes according to pitch?
>
> I think he means reversing the signal generated by a part of the music,
> since this is a classical example of breaking the music-signal barrier.
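[ A self-contained version of the sketch above, filling in a hypothetical Music type (a simple tree of sequential and parallel parts, not Haskore's type) and the toy Signal from earlier in the thread, so the fmap in reverseDeep has something concrete to work on. ]

-- Hypothetical minimal types; only the shape matters.
type Signal = [Double]

data Music a
  = Atom a                -- a single note/sound carrying a value of type a
  | Music a :+: Music a   -- sequential composition
  | Music a :=: Music a   -- parallel composition
  deriving Show

instance Functor Music where
  fmap f (Atom a)  = Atom (f a)
  fmap f (m :+: n) = fmap f m :+: fmap f n
  fmap f (m :=: n) = fmap f m :=: fmap f n

-- Reverse at the music level: play sequences backwards.
reverseMusic :: Music a -> Music a
reverseMusic (Atom a)  = Atom a
reverseMusic (m :+: n) = reverseMusic n :+: reverseMusic m
reverseMusic (m :=: n) = reverseMusic m :=: reverseMusic n

-- Reverse at the signal level: reverse each sample buffer.
reverseSignal :: Signal -> Signal
reverseSignal = reverse

-- "Deep" reverse: reverse the structure, then every signal inside it.
reverseDeep :: Music Signal -> Music Signal
reverseDeep = fmap reverseSignal . reverseMusic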
[haskell-art] data structure for music (Was: Haskell art?)
On Fri, 11 Mar 2011, Stephen Tetley wrote:

> I still don't understand what Evan's reverse instrument models.
>
> Is it reversing the sound of a note so it is some function wrapping a
> unit generator?
>
> Or is it reversing a sequence of notes according to pitch?

I think he means reversing the signal generated by a part of the music, since this is a classical example of breaking the music-signal barrier.
Re: [haskell-art] Haskell art?
On 11 March 2011 08:58, Henning Thielemann wrote:
> [SNIP] I would have thought that the hierarchical
> structure is also better for music notation, but the actual
> implementations show that it is not.

Haskore's structure unfortunately maps badly to LilyPond or ABC in a few ways:

Systems really need to be separate from the beginning - so with Haskore you would have to do a traversal extracting each instrument from the tree. As you can change instrument and then change back, this is not so simple.

Chords have discrete syntax - you can extract them from Haskore, but again it's simply easier to have them from the beginning.

Overlays - the same instrument playing simultaneous notes of different durations (e.g. on piano, holding the thumb down on a note while moving the fingers through an arpeggio) - need synching to bar lines. If an overlay isn't sounding for the full bar it needs "spacer" rests - rests that are not printed - to synchronize with the other overlay lines. This is hard to program "initially", i.e. to find nice data types that accommodate it; trying to synthesize it from Haskore adds more woe. A small sketch of the padding step follows below.

...

I still don't understand what Evan's reverse instrument models.

Is it reversing the sound of a note so it is some function wrapping a unit generator?

Or is it reversing a sequence of notes according to pitch?

The second would seem easier to implement if you have a distinction between score and orchestra. I'd guess a score and orchestra distinction makes the first easier as well, though I haven't got very far with synthesis.
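[ To make the overlay problem concrete, here is a small sketch of the spacer-rest padding step, with hypothetical types - nothing here comes from Haskore or a LilyPond/ABC library. Given the notes an overlay actually sounds within a bar, it pads the overlay with an unprinted spacer rest so every overlay line fills the same bar. ]

-- Hypothetical notation glyphs; durations in whole-note units, so 1.0
-- is a full 4/4 bar.  Spacer is the unprinted rest mentioned above.
data Glyph = Note Double | Rest Double | Spacer Double
  deriving Show

duration :: Glyph -> Double
duration (Note d)   = d
duration (Rest d)   = d
duration (Spacer d) = d

-- Pad one overlay with a trailing spacer so it fills the whole bar.
padToBar :: Double -> [Glyph] -> [Glyph]
padToBar barLen glyphs
  | gap > 0   = glyphs ++ [Spacer gap]
  | otherwise = glyphs
  where gap = barLen - sum (map duration glyphs)

-- Example: a held whole note against an arpeggio that stops halfway
-- through the bar; the second overlay gets a half-bar spacer.
example :: [[Glyph]]
example = map (padToBar 1.0)
  [ [Note 1.0]
  , [Note 0.125, Note 0.125, Note 0.125, Note 0.125]
  ]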
Re: [haskell-art] Haskell art?
Evan Laforge schrieb:
> This sounds like something I've noticed, and if it's the same thing, I
> agree. But I disagree that you need to separate orchestra and score
> to get it. Namely that notes are described hierarchically (e.g.
> phrase1 `then` phrase2 :=: part2 or whatever), but that many musical
> transformations only make sense on a flat stream of notes. For
> example, decide which string a note would be played on and pick a
> corresponding base note + bend. You can't do this without a memory of
> which notes have been played (to know currently sounding strings) and
> maybe a look a little ways into the future (to pick between
> alternatives). Hierarchical composition has no access to the previous
> and next notes, so it winds up having to be a postprocessing step on
> the eventual note output stream, which means you have to have
> something in between the score and the sound. But why be limited to
> one instance of this player and a static score -> player -> sound
> pipeline?

For other examples a hierarchical structure is exactly the right thing: think of a filter sweep or a reverb that shall be applied during a certain time interval to a certain set of instruments, say all instruments but drums and the melody. You would have to filter those events out of the performance stream, and you have to specify the overall duration of the filter effect, since it cannot be derived from the performance. The performance stores only start times and durations of individual events, but it does not store trailing pauses of music sub-trees.

> I'm not totally convinced the integration is valuable, but seeing as
> almost all other systems don't have it, it seems interesting to
> experiment with one that does and see where it leads. Maybe another
> way of putting it is that different interpretations of abstract
> instructions like legato are not necessarily always along instrument
> (piano vs. violin) lines: it may vary from phrase to phrase, or
> section to section.

I think the hierarchical music structure has its value in being able to be converted to a lot of back-ends. The Performance structure is good for MIDI and Csound. The hierarchical structure is better for SuperCollider and pure Haskell signal processing, because of effects such as filter sweeps, speed variation at the signal level, or reversing parts of the music. The hierarchical structure can be simply converted to Performance. I would have thought that the hierarchical structure is also better for music notation, but the actual implementations show that it is not.
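[ A sketch of why the hierarchical form helps with the filter-sweep example, using hypothetical types rather than Haskore's. An effect node wraps a whole sub-tree, so the span it covers - trailing rests included - can be read straight off the tree, which a flat performance stream does not give you. ]

-- Hypothetical hierarchical score with an effect node over sub-trees.
type Dur = Double

data Music
  = Note Dur String
  | Rest Dur
  | Seq [Music]            -- one after another
  | Par [Music]            -- at the same time
  | Effect String Music    -- e.g. "filter-sweep" or "reverb" over a sub-tree
  deriving Show

-- Total duration of a sub-tree, including trailing rests.
dur :: Music -> Dur
dur (Note d _)   = d
dur (Rest d)     = d
dur (Seq ms)     = sum (map dur ms)
dur (Par ms)     = maximum (0 : map dur ms)
dur (Effect _ m) = dur m

-- Apply a filter sweep to everything except drums and melody by
-- wrapping just the accompaniment sub-tree; the sweep's length is
-- simply 'dur accompaniment'.
piece :: Music
piece = Par
  [ drums
  , melody
  , Effect "filter-sweep" accompaniment
  ]
  where
    drums         = Seq [Note 1 "kick", Note 1 "snare"]
    melody        = Seq [Note 1 "c", Note 1 "e", Note 2 "g"]
    accompaniment = Seq [Note 2 "am-chord", Rest 2]   -- trailing rest still counts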