> ---------- Forwarded message ----------
> From: Urs Liska <li...@openlilylib.org>
> To: Carlo Stemberger <carlo.stember...@gmail.com>
> Cc: lilypond-user <lilypond-user@gnu.org>
> Subject: Re: GSoC applications
Trying to kick off this conversation about the possibility of a GSoC project
around video, starting with Knut Petersen's mkvideo, though it would be good
to compare it with the other video and animation approaches. Please let your
thoughts be known.

To recap what mkvideo is and does, in terms of what happens inside and
outside of LilyPond: the distribution includes the videohelper.ily library
and a diff patching a few scm files. Using this library and the patch, you
compile the book with LilyPond and produce:

1) A PDF of the score that has a page for every distinct event, with
   different note colorations on each page.
2) A file of the events and their timings. This includes both note events
   and page events. This file also contains other configuration information
   used downstream by the mkvideo script.
3) A MIDI file of the score.

(The actual demo contains multiple scores, but I'm simplifying it for this
discussion.)

These artifacts are then used by a bash script, mkvideo, to:

1) Produce the audio from the MIDI (fluidsynth).
2) Create a video that animates the score by:
   - creating single-page PDFs from each page of the PDF produced above
     (pdftk), and
   - creating a video of each page with the appropriate duration (ffmpeg).
3) Concatenate these videos into a single video and sync it with the audio
   (ffmpeg).

There are lots of other things I'm glossing over, like handling the title
page, creating silence and normalizing audio, adding a metronome track, and
managing parallel processing when batch-processing multiple videos. There
are also other shell dependencies beyond fluidsynth, pdftk, and ffmpeg.

Let's look at the LilyPond usage. The videohelper.ily file contains mostly
Scheme:

1) definitions for controls used by the non-LilyPond script,
2) time-formatting functions, and
3) note-coloring functions.

These result in the following context definitions:

  \layout {
    \context {
      \Staff
      \consists #(make-engraver
                  (listeners (time-signature-event . format-time)))
    }
    \context {
      \Voice
      \consists #(make-engraver
                  (listeners (tempo-change-event . format-tempo)))
    }
  }
  \layout {
    \context {
      \Voice
      \override NoteHead #'after-line-breaking = #mkvideo-dump
      \override Rest #'after-line-breaking = #mkvideo-dump
      \override MultiMeasureRest #'after-line-breaking = #mkvideo-dump
      \override NoteHead.layer = 3
    }
  }
  \paper {
    #(define (page-post-process layout pages)
       (after-pb-processing layout pages))
  }

Using this with LilyPond books amounts to:

  \book {
    ...
    % The title page.  There must be exactly _one_ title page.
    \markup { ... }
    \pageBreak
    % Start score on page 2
    \score { ... }
  }
  \pdfforvideo

  \book {
    \score {
      << ... >>
      \midi {}
    }
  }
  \midiforvideo

The diffs to the scm files are mainly there to support coordinating time
with color:

- dump-page in framework-ps.scm,
- a new time-aware function \MKVIDsetrgbcolor in
  music-drawing-routines.ps, and
- tweaks to color handling in output-lib.scm and output-ps.scm.

I guess the first question is whether the library and patches provide
enough of a feature to make this worthwhile. Again, the output is a PDF
with one page per event and a file with event timing information. It seems
like it ought to be useful, given what this enables the shell script to do.
Presumably, other implementations could also start with those artifacts and
do something useful with other toolsets. So the LilyPond part of it would
not actually produce a video, but the precursors to a video.

What are your thoughts?

Thanks,

David Elaine Alt
415 . 341 . 4954
"*Confusion is highly underrated*"
ela...@flaminghakama.com
skype: flaming_hakama
Producer ~ Composer ~ Instrumentalist
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
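For anyone who hasn't read the mkvideo script, the assembly steps above can
be sketched roughly as the shell pipeline below. This is only an
illustration, not the actual script: the file names (score.pdf, score.midi,
soundfont.sf2), the hard-coded per-page durations, and the extra ghostscript
rasterization step are all my assumptions (the real script reads durations
from the event/timing file that LilyPond writes, and handles titles,
silence, normalization, etc.). The run() wrapper just prints each command so
the structure can be inspected without the tools installed.

```shell
#!/bin/sh
# Hypothetical sketch of the mkvideo assembly pipeline (NOT the real script).
set -e

# Dry-run helper: print each external command instead of executing it.
run() { echo "$@"; }

# 1) Render audio from the MIDI file (fluidsynth).
run fluidsynth -ni soundfont.sf2 score.midi -F audio.wav

# 2) Split the per-event PDF into single pages (pdftk),
#    then rasterize them so ffmpeg can consume them (assumed gs step).
run pdftk score.pdf burst output page_%04d.pdf
run gs -sDEVICE=png16m -r150 -o page_%04d.png score.pdf

# 3) Turn each page into a still video with the duration taken from the
#    timing file (durations hard-coded here purely for illustration).
: > concat.txt
i=1
for dur in 2.0 1.5 3.25; do
    page=$(printf 'page_%04d' "$i")
    run ffmpeg -loop 1 -i "$page.png" -t "$dur" -pix_fmt yuv420p "$page.mp4"
    printf "file '%s.mp4'\n" "$page" >> concat.txt
    i=$((i + 1))
done

# 4) Concatenate the page videos and mux in the audio (ffmpeg concat demuxer).
run ffmpeg -f concat -safe 0 -i concat.txt -i audio.wav \
    -c:v copy -c:a aac video.mp4
```

The point of sketching it this way is that steps 1–4 only depend on the two
artifacts LilyPond emits (the per-event PDF and the timing file), so another
toolset could consume the same artifacts.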
_______________________________________________
lilypond-user mailing list
lilypond-user@gnu.org
https://lists.gnu.org/mailman/listinfo/lilypond-user