Re: GSoC: MusicXML export

2016-03-24 Thread John Gourlay
David, et al.,

An interesting test of both MusicXML export and import would be to make the 
round trip and see what errors result. There is already a set of MusicXML files 
for testing in input/regression/musicxml. Each of these could be converted to 
LilyPond using musicxml2ly and then back to XML using the MusicXML export. 
Checking for error messages would be an easy first step; checking for 
significant differences between the two XML files would be harder but still 
useful.
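The harder step, flagging significant differences between the original and round-tripped XML, could start from a structural comparison that ignores whitespace and attribute order. This is only a sketch; which differences count as "significant" is an assumption, and a real harness would need project-specific ignore rules:

```python
# Sketch of a round-trip difference check for MusicXML files.
# Assumption: whitespace and attribute order are insignificant;
# anything else (tags, attributes, text) counts as a real difference.
import xml.etree.ElementTree as ET

def normalize(elem):
    """Recursively reduce an element to a comparable tuple."""
    text = (elem.text or "").strip()
    children = tuple(normalize(child) for child in elem)
    return (elem.tag, tuple(sorted(elem.attrib.items())), text, children)

def musicxml_equivalent(xml_a, xml_b):
    """True if two MusicXML documents are structurally identical."""
    return normalize(ET.fromstring(xml_a)) == normalize(ET.fromstring(xml_b))
```

Run over each file in input/regression/musicxml before and after the round trip, this would at least separate "re-serialized differently" from "content lost".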

Unfortunately, I’m still working on musicxml2ly, incorporating the changes made 
in the Philomelos project. Bit by bit I'm correcting the errors that the 
Philomelos version produces when processing these same XML files. Maybe if 
you tell me when you might be ready to do this kind of testing, it will 
motivate me to complete a version of musicxml2ly that you can use.

John Gourlay
___
lilypond-devel mailing list
lilypond-devel@gnu.org
https://lists.gnu.org/mailman/listinfo/lilypond-devel


Re: midi articulation

2016-03-24 Thread Carl Sorensen
On 3/24/16 8:14 PM, "Daniel Birns"  wrote:

>Thanks for your kind note.
>
>Of course, I completely understand the protectiveness. That's how it must
>be. Before I do anything, I'd like to discuss my thoughts on the matter.
>
>Inside vs. Outside
>__
>
>OUTSIDE: The Unix idea is to have many small, testable apps that are
>designed to be able to work together. In this view, one should have an
>Engraver (aka lilypond) as a single app that doesn't try to do anything
>else. In addition, the lilypond syntax becomes the key interface, and
>lilypond can be the Engraver as well as the Validator. Midi should be a
>separate app that reads the same source and generates midi.
>
>Of course, there are other ways to layer this, in the unix model. Perhaps
>there should be a parser which generates an internal format, and then
>lilypond could read that, and the midi generator could also read the
>internal format. 
>
>Perhaps lilypond already has a post-parsed internal format that would be
>suitable for a midi-generator.


I believe that LilyPond already has the built-in hooks to create midi
output.  There are performers (similar to engravers) whose job it is to
create the midi.  But I think that we have not spent a lot of time making
sure that the midi created is high quality, in terms of respecting
articulations and other expressive indications.  Hence articulate.ly.
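For reference, articulate.ly is applied at the input level rather than inside the performers themselves; a minimal sketch of the documented pattern looks like this:

```lilypond
\include "articulate.ly"

music = \relative c' { c4-. d-- e2\fermata }

\score {
  % \articulate rewrites the music so that the midi performers
  % emit shortened staccatos, stretched fermatas, and so on.
  \unfoldRepeats \articulate \music
  \midi { }
}
```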

>
>INSIDE: Generating midi ought to be a great deal easier in lilypond
>because, having already parsed everything, it ought to know, at any point
>in time, all the information necessary to generate better midi output.
>Currently, it's missing many things, like note-durations, and so on. I
>wonder why that is? I'm suspecting there's a fundamental reason, and that
>it's because the information is split up among all the various engravers?
>
>Midi is a big subject, and keeping midi generation inside lilypond may
>generate fears, probably well-founded, of increasing size and complexity.

I don't think there is any particular concern about generating midi.  I
think any concern that exists would likely be about having poor-quality,
unmaintainable code for the midi generators.  Or using hacks rather than
solid architecture.

>
>Midi Sounds
>__
>
>I'm no midi expert, but I've used it over the years. My impression is
>that tools like Apple Logic, Sibelius, and so on, provide their own
>sounds, which are unrelated, in particular, to midi, which only gives a
>slight clue to the desired sound. To give a reasonable midi result, I
>believe we must go that route: provide a sound library, and a player. A
>user would then be able to write a lilypond score, and get a reasonable
>audio playback of that score. We could generate midi as we have always
>done, but the midi would have much better articulation and dynamics than
>it currently does.

I really don't think this is the right approach.  LilyPond should not be
developing sound libraries or players.  There already exist high-quality
sound libraries, and full-featured midi players.  Why should LilyPond
reinvent the wheel?  What are the weaknesses and/or limitations of, for
example, Qsynth or Timidity++?

I'm quite on board with recommending a particular midi player and/or sound
font.  But I don't see how creating a new synthesizer is necessary (or
desirable) to improve articulation and dynamics.  While I'm not an expert,
I would expect that the appropriate MIDI commands can be embedded in a
MIDI file created by LilyPond, and we just need to make sure a known good
sound library and player are available.
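As a concrete illustration of "appropriate MIDI commands": a crescendo can be rendered as a ramp of Expression (CC 11) control-change messages on the channel instead of per-note velocity increments. A toy sketch of the raw bytes involved, per the MIDI 1.0 spec (the linear ramp shape is an arbitrary assumption):

```python
# Build raw Control Change messages for an expression crescendo.
# Status byte 0xB0 | channel = Control Change; controller 11 = Expression.
def expression_ramp(channel, start, end, steps):
    """Return a list of raw CC 11 messages ramping from start to end."""
    msgs = []
    for i in range(steps):
        # Integer linear interpolation between start and end values
        value = start + (end - start) * i // max(steps - 1, 1)
        msgs.append(bytes([0xB0 | channel, 11, value]))
    return msgs
```

Any general-purpose synth (Qsynth/FluidSynth, Timidity++) already honors these messages, which supports Carl's point that no new player is needed.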

Why do you think we need a new player?

Thanks,

Carl




Re: midi articulation

2016-03-24 Thread Daniel Birns
Thanks for your kind note.

Of course, I completely understand the protectiveness. That’s how it must be. 
Before I do anything, I’d like to discuss my thoughts on the matter.

Inside vs. Outside
__

OUTSIDE: The Unix idea is to have many small, testable apps that are designed 
to be able to work together. In this view, one should have an Engraver (aka 
lilypond) as a single app that doesn’t try to do anything else. In addition, 
the lilypond syntax becomes the key interface, and lilypond can be the Engraver 
as well as the Validator. Midi should be a separate app that reads the same 
source and generates midi. 

Of course, there are other ways to layer this, in the unix model. Perhaps there 
should be a parser which generates an internal format, and then lilypond could 
read that, and the midi generator could also read the internal format. 

Perhaps lilypond already has a post-parsed internal format that would be 
suitable for a midi-generator.

INSIDE: Generating midi ought to be a great deal easier in lilypond because, 
having already parsed everything, it ought to know, at any point in time, all 
the information necessary to generate better midi output. Currently, it's 
missing many things, like note-durations, and so on. I wonder why that is? I'm 
suspecting there's a fundamental reason, and that it's because the information 
is split up among all the various engravers?

Midi is a big subject, and keeping midi generation inside lilypond may generate 
fears, probably well-founded, of increasing size and complexity.

Midi Sounds
__

I’m no midi expert, but I’ve used it over the years. My impression is that 
tools like Apple Logic, Sibelius, and so on, provide their own sounds, which 
are unrelated, in particular, to midi, which only gives a slight clue to the 
desired sound. To give a reasonable midi result, I believe we must go that 
route: provide a sound library, and a player. A user would then be able to 
write a lilypond score, and get a reasonable audio playback of that score. We 
could generate midi as we have always done, but the midi would have much better 
articulation and dynamics than it currently does. 

I’m certain I’m not the first to think about all this…

-d

> On Mar 24, 2016, at 4:37 PM, Carl Sorensen  wrote:
> 
> On 3/23/16 11:06 PM, "lilypond-devel-bounces+c_sorensen=byu@gnu.org on
> behalf of Daniel Birns" <daniel.bi...@icloud.com> wrote:
> 
>> Hi developers,
>> 
>> We've been having a discussion about midi. I could say a lot about this,
>> but I'm sure some developers have thought a great deal about this, and
>> probably for many years, so I don't want to second-guess.
>> 
>> As I report in the following thread, I'm a software developer, mostly
>> experienced in C++, and have thought maybe I could assist in such a
>> project, but naturally I would first want to know if a) there's any
>> interest in outside developers contributing to the midi output quality
>> and b) what has gone before.
> 
> 
> We're always open to having new developers come and contribute to LilyPond.
> 
> The original team was (as has been mentioned) heavily focused on creating
> printed scores, so midi output was just included for "proof-listening",
> and generating high-quality midi was not a concern.
> 
> A few years into the development, articulate.ly was developed to help in
> automatic generation of robotic music, IIRC.
> 
> You're not alone in wishing we had better midi output; users have
> requested it multiple times.
> 
> If you're willing to jump in and develop better output, we'd love to have
> you do so.
> 
> From time to time, you may find us protective of the code base.  But we
> are open to different ways of doing things when a developer is willing to
> invest time and show that the new way is sane.  So if we complain about
> some of your initial attempts, please don't think we're trying to shut you
> out.  I promise you, we're not.
> 
> I think that you could get some history by searching the issues list for
> midi.
> 
> I hope you'll join us!
> 
> Thanks,
> 
> Carl
> 




Re: midi articulation

2016-03-24 Thread Carl Sorensen
On 3/23/16 11:06 PM, "lilypond-devel-bounces+c_sorensen=byu@gnu.org on
behalf of Daniel Birns"  wrote:

>Hi developers,
>
>We've been having a discussion about midi. I could say a lot about this,
>but I'm sure some developers have thought a great deal about this, and
>probably for many years, so I don't want to second-guess.
>
>As I report in the following thread, I'm a software developer, mostly
>experienced in C++, and have thought maybe I could assist in such a
>project, but naturally I would first want to know if a) there's any
>interest in outside developers contributing to the midi output quality
>and b) what has gone before.


We're always open to having new developers come and contribute to LilyPond.

The original team was (as has been mentioned) heavily focused on creating
printed scores, so midi output was just included for "proof-listening",
and generating high-quality midi was not a concern.

A few years into the development, articulate.ly was developed to help in
automatic generation of robotic music, IIRC.

You're not alone in wishing we had better midi output; users have
requested it multiple times.

If you're willing to jump in and develop better output, we'd love to have
you do so.

From time to time, you may find us protective of the code base.  But we
are open to different ways of doing things when a developer is willing to
invest time and show that the new way is sane.  So if we complain about
some of your initial attempts, please don't think we're trying to shut you
out.  I promise you, we're not.

I think that you could get some history by searching the issues list for
midi.

I hope you'll join us!

Thanks,

Carl




Fwd: midi articulation

2016-03-24 Thread Daniel Birns
Hi developers,

We’ve been having a discussion about midi. I could say a lot about this, but 
I’m sure some developers have thought a great deal about this, and probably for 
many years, so I don’t want to second-guess.

As I report in the following thread, I’m a software developer, mostly 
experienced in C++, and have thought maybe I could assist in such a project, 
but naturally I would first want to know if a) there’s any interest in outside 
developers contributing to the midi output quality and b) what has gone before.

Thanks, and I’m a grateful lilypond user.

-d

> Begin forwarded message:
> 
> From: "H. S. Teoh" 
> Subject: Re: midi articulation
> Date: March 23, 2016 at 9:44:18 PM PDT
> To: lilypond-u...@gnu.org
> 
> On Wed, Mar 23, 2016 at 05:21:13PM +, Daniel Birns wrote:
>> Hi all,
>> 
>> Okay, I see what articulate.ly does. However… has anyone tried to do
>> this within the midi generator? I debugged the python_midi.c file, and
>> I can see what it’s doing. Geez, it seems like at the point where the
>> midi file is being generated, lilypond ought to know everything about
>> what’s in the score, and ought to be able to apply articulation easily
>> and much more correctly at that point.
>> 
>> Has anyone tried doing this? My guess is it’s nowhere near as easy as
>> I think it should be. But, honestly, the midi files suck. As a
>> Sibelius user, this is a showstopper — Sibelius does a beautiful job
>> at this, and it seems like it shouldn’t be difficult for lilypond to
>> do the same.
>> 
>> I’m a software developer — not python, nor guile, but just about
>> everything else, but especially C++. I could do guile and python if
>> needed. Perhaps I should try this, but is there interest? Do I have
>> time for this? And why haven’t the others who know a great deal more
>> about this than I do, done this?
>> 
>> Thanks. No criticism at all intended: I think lilypond is amazing. 
> [...]
> 
> As far as I understand it (and I could be wrong), lilypond's original
> authors did not regard midi support as important, as lilypond's original
> mission was to typeset music and do it well.  So the original midi
> support was tacked on as an afterthought and was pretty rudimentary, and
> as far as I can tell, hasn't changed very much since then.  This was
> probably one of the main reasons articulate.ly was invented in the first
> place.
> 
> Be that as it may, I for one would be glad if somebody would take
> the time to improve midi generation so that it's not so atrocious.  My
> ideal conception of it is to have some kind of mapping mechanism for
> translating notation into midi that offers the same configurability and
> expressive power as the notation syntax itself, such that the user would
> be able to, for example, specify how phrases should be rendered, whether
> a particular Staff should emit Expression events for crescendos or
> merely increment the velocity of the notes (as is currently done,
> piano-style), how to translate breathing marks, or any mark, for that
> matter (e.g., trill mark on a timpani Staff should emit tremolos or
> switch to a different program for rolls).  The defaults ought to produce
> pretty decent midi, but should be overridable by the user.
> 
> In any case, midi itself is rather limited, as the built-in limit of 16
> channels is too restrictive for rendering large orchestral scores. For
> this purpose I have written an auxiliary helper program that renders
> individual StaffGroup's into separate midi files, to be separately
> rendered by a software midi synth and then mixed into a single file
> afterwards.  The program also generates suitable MIDI program switching
> macros for implementing different articulations. Such functionality is
> probably beyond the scope of lilypond, though conceivably one could
> implement the necessary infrastructure for the user to be able to do
> such things within lilypond itself, as opposed to resorting to external
> tools.
> 
> 
> T
> 
> -- 
> "How are you doing?" "Doing what?"
> 
> ___
> lilypond-user mailing list
> lilypond-u...@gnu.org 
> https://lists.gnu.org/mailman/listinfo/lilypond-user 
> 
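Teoh's idea of a user-overridable mapping from notation marks to MIDI-level behavior could be sketched, very loosely, like this. Every name below is hypothetical; nothing of the sort exists in LilyPond today, and this only illustrates overridable defaults per context:

```python
# Toy mapping mechanism: per-context rules translate a notation-level
# mark into a MIDI rendering strategy, with user-overridable defaults.
DEFAULT_RULES = {
    "crescendo": "velocity",   # piano-style: scale note-on velocities
    "trill": "trill",          # render trill as alternating notes
}

# A user override for a timpani Staff: trills become tremolo rolls.
TIMPANI_RULES = dict(DEFAULT_RULES, trill="tremolo")

def render_mark(mark, rules=DEFAULT_RULES):
    """Return the MIDI rendering strategy for a notation mark."""
    return rules.get(mark, "ignore")
```

The point is only that defaults should produce decent midi while remaining overridable per Staff, as described above.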


Re: GSoC: MusicXML export

2016-03-24 Thread Carl Sorensen
On 3/24/16 3:23 AM, "lilypond-devel-bounces+c_sorensen=byu@gnu.org on
behalf of David Garfinkle"
 wrote:

>
>*In short I'm asking:* what expressions would
>you prioritize for MusicXML export? That way I can work in the order of
>what everyone wants to see, and get as far as I can. Thanks for your help,


I would prioritize all in-staff elements that are currently missing.

I would de-prioritize chord names, fret diagrams, tablature, and similar
things.  Focus first on classical music elements.

But I'm not an expert in this.

Thanks,

Carl




Re: GSoC: MusicXML export

2016-03-24 Thread Paul Morris
Hi David,

> On Mar 24, 2016, at 5:23 AM, David Garfinkle  
> wrote:
> 
> What do you think are necessary music
> expressions that should be included in the concrete goals of the project?
> Is polyphonic music & multiple parts a priority over other music
> expressions like chord symbols, fingering, glissandos, dynamics, etc.?

My view is that the aspects that are more complex and harder to implement 
should be given priority.  GSoC provides the time needed to tackle those 
things, whereas simpler aspects will more easily fit into the smaller chunks of 
spare time most people have outside of GSoC.  You probably know as well as 
anyone what the more and less complex or difficult aspects are.

That consideration could be weighed alongside use or demand for a given aspect, 
but since we ultimately would like to support everything, or at least as much 
as possible, I would still tend to favor prioritizing what is difficult to 
implement.

And I also think you’re right that getting the basics or “necessities” first 
and then moving on from there is a good idea.  Being able to export basic 
pitches and rhythms is a good milestone.

Best,
-Paul


GSoC: MusicXML export

2016-03-24 Thread David Garfinkle
Hi developers,

I'm preparing an application to continue working on MusicXML export under
GSoC. The whole application is viewable here:
https://docs.google.com/document/d/12zLGWrf6a_0J9H44Coo25s-pYsv-zH_g-oBqw6Shnrc/edit?usp=sharing,
but I am wondering about concrete goals in particular.

I've written: "Currently, export is limited to single-part scores with
key & time signatures, clefs, sequential notes, chords, and articulations.
This project aims to integrate and extend the current functionality,
including necessary musical expressions such as polyphonic music, multiple
parts, rests, tuplets, and ties." What do you think are necessary music
expressions that should be included in the concrete goals of the project?
Is polyphonic music & multiple parts a priority over other music
expressions like chord symbols, fingering, glissandos, dynamics, etc.? Can
you think of other necessary expressions you would like to see implemented
beyond rests, tuplets, ties? I guess what I mean by 'necessary' here is not
expressive or decorative expressions but rather the bare necessity to
define the rhythm and notes. *In short I'm asking:* what expressions would
you prioritize for MusicXML export? That way I can work in the order of
what everyone wants to see, and get as far as I can. Thanks for your help,
David
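For a sense of scale, the "bare necessity to define the rhythm and notes" corresponds to roughly this much MusicXML per note (divisions and measure context omitted; values here are illustrative):

```xml
<!-- A middle-C quarter note: pitch plus duration/type are the
     minimum needed to define "the rhythm and notes". -->
<note>
  <pitch>
    <step>C</step>
    <octave>4</octave>
  </pitch>
  <duration>4</duration>
  <type>quarter</type>
</note>
```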