On Tue, September 17, 2013 1:05 am, Craig Parmerlee wrote:
> I would consider audio-to-notation to be a breakthrough.

I dislike transcribing, and almost always turn down those jobs; they just
don't pay enough to overcome that dislike. This would be really useful for the
clients who want transcription, a tedious job perfect for a computer. All that
would be left is proofreading and cleanup.

> I don't really understand a preference for the pedestrian and an
> antipathy for the big ideas.  I certainly want to see bugs fixed, but
> really, our expectations should be higher than that.  It should not be
> an either-or proposition.

Exactly.

> today's DAWs go far beyond
> what was possible with reel-to-reel recording decks.

Sort of. Most of it -- apart from looping, stretching, and certain non-linear
features -- is still oriented toward the input/mixing board/effects/bus/output
chain. Algorithms and parametric analysis and adjustment live in a mostly
separate part of the program, or in external packages like AudioMulch, Max,
etc.

> the products just don't exemplify the
> vision we have seen in most other technology fields.

Yes.

> 1) In 2013 I shouldn't still have to fiddle with layouts on my parts.

Yes.

> 2) How long have we had spell checkers and grammar checkers in word
> processors?  15-20 years anyway.  Why don't we have the same things in
> music notation by now?  Why isn't there a function that says "In measure
> 14, 2 instruments have B naturals that are in conflict with the B-flats
> in 5 other parts."  Why isn't there a function that evaluates my voice
> leading and suggests better options?

This would be both a proofing and a learning tool. Suggested changes are
always welcome, even if I suspect I'd reject most of them. (And I still don't
understand Lon's and others' aversion to C-flats.)
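To make the "musical spell checker" idea concrete, here is a minimal sketch of one such rule: flag measures where simultaneous parts disagree on the accidental for the same letter name (B natural against B-flat, as in Craig's example). The score representation, the function name, and the tuple layout are all invented for illustration; a real plug-in would read the notation program's own data structures.

```python
from collections import defaultdict

# Toy score model: one tuple per note --
# (measure number, part name, letter name, alteration)
# where alteration is -1 = flat, 0 = natural, +1 = sharp.
ALTER_NAMES = {-1: "flat", 0: "natural", 1: "sharp"}

def accidental_conflicts(score):
    """Return one message per measure/letter with conflicting accidentals."""
    by_measure_letter = defaultdict(lambda: defaultdict(list))
    for measure, part, letter, alter in score:
        by_measure_letter[(measure, letter)][alter].append(part)
    messages = []
    for (measure, letter), alters in sorted(by_measure_letter.items()):
        if len(alters) > 1:  # same letter spelled two different ways
            detail = "; ".join(
                f"{len(parts)} part(s) have {letter} {ALTER_NAMES[alter]}"
                for alter, parts in sorted(alters.items())
            )
            messages.append(f"In measure {measure}: {detail}")
    return messages

score = [
    (14, "Flute", "B", 0), (14, "Oboe", "B", 0),
    (14, "Viola", "B", -1), (14, "Cello", "B", -1),
]
for msg in accidental_conflicts(score):
    print(msg)  # "In measure 14: 2 part(s) have B flat; 2 part(s) have B natural"
```

Voice-leading evaluation would need a far richer model, but the proofing side is mostly bookkeeping like this.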

> 3) Much of harmonization  is more-or-less formulaic.  Finale offers a
> little help with the BIAB Harmonization plug-in, but by now, the state
> of the art should be much farther down the road.  I'd like to point the
> notation program to an audio snippet from an Elgar symphony and tell the
> notation program "Suggest a harmonization of my melody that is like
> that."  OK, I don't expect that in the next year, but people developing
> notation programs should have goals like this -- a much bigger vision
> than just how big to make the margin on the page.

Not something I'd generally need, but for popping out quick arrangements it
would be really fine: a 'dial' to choose style/period/etc., including the
harmonization equivalent of "human playback."

All the chord stuff you've suggested is nothing I'd use.

But what I would use is an algorithmic plug-in system, and something like
Barlow's Autobusk: a program that evaluates existing compositions for some 15
or so parameters (I don't remember exactly how many) and creates music like
them. Varying the parameters moves the generated music gradually from one
style toward another -- say, Mozart toward Schoenberg. Plug-ins for genetic
algorithms, fractals, etc., would be great.

Cross-program, real-time integration (Finale and AudioMulch or Max, Finale
and Vegas, etc.) would be remarkable. I can edit my audio in Vegas, for
example, so why not my notation? I can sync to video frames in Finale, but it
would be great to see my video edits show up in real time and have the music
give me the option to shift/cut/etc. to accommodate. Web integration,
real-time Skype (or similar) integration, etc. Audio plug-ins applied to the
playback system (apply electric-violin effects to the violin line, define the
symbols, or, as you already mentioned, looping features). Integration with a
word processor, so that edited text can be updated or composed and an
algorithm creates the basic note structure.

I don't have to do everything down to the details at first; I think Cage
taught us that much 60 years ago. So why should I be limited to pre-Cage
thinking? Let me pop in structures, algorithms, and loop points, and adjust
to taste later. One of my friends composes by using his own algorithms to
generate masses of material for preview, plucking out bits as he goes, but
then he has to convert it all to notation later, with some effort. Why not in
real time? It's very Mozartean: only the details matter, and the structure
can be a dice game, so to speak.
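The Autobusk-style "dial" can be sketched in a few lines: describe each style by a small parameter vector, interpolate between two vectors, and generate material from the blend. The two parameters and the generation rule here are invented placeholders, not Barlow's actual parameter set.

```python
import random

# Toy parameter vectors: fraction of chromatic notes, and mean leap
# size in semitones. Real Autobusk uses a much larger set.
STYLES = {
    "mozart":     {"chromaticism": 0.1, "mean_leap": 2.0},
    "schoenberg": {"chromaticism": 0.9, "mean_leap": 6.0},
}

def interpolate(a, b, t):
    """Blend two parameter dicts; t=0 gives a, t=1 gives b."""
    return {k: a[k] * (1 - t) + b[k] * t for k in a}

def generate(params, length, seed=0):
    """Emit MIDI pitches whose motion follows the blended parameters."""
    rng = random.Random(seed)
    pitch = 60  # start on middle C
    notes = [pitch]
    diatonic = {0, 2, 4, 5, 7, 9, 11}  # C-major pitch classes
    for _ in range(length - 1):
        step = round(rng.gauss(0, params["mean_leap"]))
        candidate = min(84, max(36, pitch + step))  # keep in range
        # Low chromaticism: usually snap up to the diatonic set.
        if rng.random() > params["chromaticism"]:
            while candidate % 12 not in diatonic:
                candidate += 1
        pitch = candidate
        notes.append(pitch)
    return notes

halfway = interpolate(STYLES["mozart"], STYLES["schoenberg"], 0.5)
print(generate(halfway, 8))
```

Turning the `t` dial from 0 to 1 is the "Mozart toward Schoenberg" slider; a plug-in API exposing hooks like these would let genetic or fractal generators slot in the same way.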

Lon reports what he wouldn't use. There are things I don't use -- chords being
a big one. I want Finale not only configurable to my style sheets but also
easily programmable and linked to other musical software that I use daily. I
think Jari is getting to some of that, at least within the Finale context; his
latest ability to script graphical elements is incredibly important.

Okay, all for now. Just some random (and unproofread) thoughts on the 'big
picture' that Craig is suggesting.

Dennis






_______________________________________________
Finale mailing list
Finale@shsu.edu
http://lists.shsu.edu/mailman/listinfo/finale
