NESTED DYNAMICS
===============

Looking at the definition of the Music datatype

     data Music = Note Pitch Dur [NoteAttribute]
                | Rest Dur
                | Music :+: Music
                | Music :=: Music
                | Tempo Int Int Music
                | Trans Int Music
                | Instr IName Music
                | Player PName Music
                | Phrase [PhraseAttribute] Music

we can see that phrase attributes can be nested, allowing one to write
something like

     m = Phrase [Dyn (Crescendo 1.2)]
                ( c 5 wn [] :+: Phrase [Dyn (Diminuendo 1.2)]
                                       (e 5 wn []) )

In practice I do not think it is desirable to allow such nesting when
working with dynamics. How should the above music object be interpreted?
Should the inner (or outer) phrase attribute cancel the other? Should they
be combined? How?

To solve this problem I think the data representation for music objects should
be reconsidered.
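
For concreteness, here is a minimal sketch of one possible "combine"
reading, in which nested crescendo/diminuendo markings contribute
multiplicative loudness factors. DynMark is a cut-down stand-in for
Haskore's Dynamic type, and scaleOf/nestedScale are illustrative names,
not part of Haskore:

     -- Sketch only: nested markings multiply their scale factors, so a
     -- Crescendo 1.2 nested inside a Diminuendo 1.2 cancels out to a
     -- net factor of 1.0.
     data DynMark = Crescendo Float | Diminuendo Float

     scaleOf :: DynMark -> Float
     scaleOf (Crescendo x)  = x
     scaleOf (Diminuendo x) = 1 / x

     -- net factor of all the markings enclosing a note, outermost first
     nestedScale :: [DynMark] -> Float
     nestedScale = product . map scaleOf

Whether such a combining interpretation matches musical intent is exactly
the question raised above.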

VOLUME
======

The libraries that constitute Haskore should be high level, offering
abstractions for music objects. When composing in the traditional way (with
music notation) one does not set the volume of each note explicitly; it is
specified with dynamic markings such as pp, fff, crescendo and diminuendo.

The possibility of specifying the volume of each note allows someone to
write a music object like this:

        m1 = Phrase [Dyn PPP] (g 5 qn [Volume 50])

Wouldn't it be better to disallow direct volume specification on each note
and force the use of dynamics to obtain that control?
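
As a minimal sketch of that restriction, reusing the Haskore names already
used above, the same note would carry no Volume attribute and take its
loudness only from the phrase-level marking:

        m2 = Phrase [Dyn PPP] (g 5 qn [])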

The volume of a note may also vary while it is being played. With volume
given as a per-note attribute, the music representation as implemented
cannot express that.

PERFORMANCE EVENT REPRESENTATION
================================

A performance for a music object is made up of a temporally ordered
sequence of music events, but the library limits such events to notes only.
There are other controls that it would be desirable to include in a
performance. For example, I want to implement a player for dynamic
variations (crescendo and diminuendo) that achieves these effects by
continuously altering the volume of the notes that belong to the phrase.
As a note event does not include both its initial and final volume, I have
no direct way of getting the desired effect. I would also like my player to
generate other kinds of events (basically the ones supported by MIDI, as my
performance will be interpreted by a MIDI sequencer), such as expression
control, vibrato control, pedal control, etc.

Would it be necessary to devise a new representation for music events?
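
As a starting point for discussion, here is a hypothetical sketch of such a
representation, assuming the performance remains a time-ordered event list.
PerfEvent, CtrlEvent, Control and the auxiliary type synonyms are
illustrative names only, not an existing Haskore API:

     -- Hypothetical sketch: a performance event type that admits
     -- MIDI-style control changes alongside ordinary note events.
     type Time   = Float
     type DurT   = Float
     type Volume = Int                  -- e.g. MIDI range 0..127
     type IName  = String               -- instrument name, as in Haskore

     data PerfEvent
       = NoteEvent Time IName Int DurT Volume   -- onset, instrument,
                                                -- pitch, duration, velocity
       | CtrlEvent Time IName Control           -- onset, instrument, control

     data Control
       = ExpressionCtrl Int             -- MIDI expression controller
       | ModulationCtrl Int             -- vibrato depth
       | SustainPedal   Bool            -- pedal on/off

A crescendo player could then emit a stream of ExpressionCtrl events over
the span of the phrase instead of depending on per-note volumes.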


I would welcome comments on these topics.

Thanks.

Prof. Jose Romildo Malaquias
[EMAIL PROTECTED]
Departamento de Computacao
Universidade Federal de Ouro Preto
BRASIL

