>  > I still don't even know how to download the library.
>
>  The http://www.cs.cmu.edu/~music/portmusic/ page has a link for
>  "source" that takes you to a Wiki. There, by "Help getting started",
>  if you click "mac" or "windows", you should get some useful
>  information. E.g. under "mac", the text begins:

Aha, I knew there was something obvious I was missing.  It even says
"don't look for a download there".  I guess as soon as I saw
sourceforge I automatically went to the downloads tab, since that's
what I'm used to doing.

>  So, yes, you should install an svn client (pointers to the clients we use 
> are provided).

I actually wound up doing that last night anyway, since the old
separate OS X portmidi project didn't work right.

>  > Is there an easier way?
>  As soon as we integrate some changes and test on all platforms, I will start 
> maintaining a zip file for people to download. Right now, SVN is the way to 
> go.

Great.  If it looks like portmidi will serve my needs, I'd be willing
to help out a little, if I can.  Before looking at portmidi I did a
survey of various sequencers out there, and almost all of them have
ALSA or CoreMIDI hardcoded, so there is clearly a need.

>  ABOUT TIMESTAMPS
>
>  Here's the design rationale: If your application can perform consistently 
> with low latency, you should set latency to zero and dispense with any 
> queueing or scheduling in PortMidi. Since you are running with low-latency, 
> you can write your own scheduler and get good performance. If your 
> application CANNOT perform consistently with low latency, then you really 
> don't want to be sending messages that you expect to go out immediately (even 
> if they did, you have no guarantee that your application is not already 
> significantly behind). In that case, you want to use a latency sufficiently 
> high that your application's latency will not significantly impact MIDI 
> timing. E.g. even if your app falls behind by 50ms, if your latency is >50ms, 
> then the MIDI can be delivered with accurate timing.

Right, my app is definitely in the high-latency camp.  It's doing
complicated things to come up with the MIDI.  However, that MIDI has
absolute timestamps.  My plan is that the MIDI generation code comes
up with a [(Port, Timestamp, Msg)] list, which it then feeds to
write_msg.  write_msg queues the msgs internally (in a high-priority,
accurate thread, preferably provided by the OS) and plays them back
when their time has come.  I plan to have the MIDI generation begin as
soon as a change occurs, so by the time the user hits play, the app
should easily be far enough ahead (if it's not, some notes would be
crunched together at the beginning).  When you hit play, it simply
adds the current time to the timestamps and feeds the msg stream to
the output.

>  If you really want to use PortMidi (or the underlying device driver) to do 
> timing for you in some cases, and you want MIDI to go out immediately in 
> others, I'd suggest setting latency to 1ms, which may not be noticeable. If 
> it is, then I believe you can just set the latency to 1ms and subtract 1 from 
> all your timestamps. If you make a distinction between "send now" and "send 
> in the future" then you can use a timestamp of 0 for "send now" and a real 
> timestamp for "send in the future" (The delivery time for "send in the 
> future" will be timestamp+latency).

I'd like the driver to do timing for me when playing (so it should
obey the timestamps), but not when sending MIDI thru; I figured I'd do
the latter by just setting the timestamps to <= now.

So I guess the way to go is +1 latency, -1 on the timestamps.  That
seems convoluted to me, compared to the MIDI layer simply sending the
msgs out when you tell it to, and making it your problem if you tell
it something in the past.  Maybe this comes from different starting
assumptions: portmidi seems to assume that the sequencer generates
MIDI in a way that's synchronized to the output, so a little jitter in
the app has to be smoothed out with latency, whereas I'm assuming that
the app generates MIDI asynchronously (possibly blocking if it gets
more than n seconds ahead), and by the time the MIDI layer sees it,
it's a (Timestamp, Msg) list with constant access time.

The second assumption makes more sense to me (naturally)... why would
you pretend to play back midi synchronously if you can't do so
consistently?

>  ABOUT MIDI THRU
>
>  Good question: why not do MIDI THRU in a high-priority callback? If all you 
> wanted was THRU with no output

Yeah, sysex is usually the fly in the ointment.  CoreMIDI actually
claims to merge MIDI streams if you use its "endpoint" stuff; I don't
know how well that works or what it does with sysex.

I had always assumed I would do simple thru (i.e. static remapping of
channels and ports) in a C callback, and then do more complicated thru
(arbitrary transforms) in the app.  I gave the second a try in a test
app, and performance is acceptable (some smear on a big cluster,
unfortunately), so I think I'll stick with the second option, which is
simpler to implement.

As far as merging streams goes, we're back to the internal buffer
again.  To me it seems like all you have to do is pass the sysex and
realtime msgs through, and buffer the others until you hit EOX, at
which point they all go out.  From a performance perspective you'd
want note data to be higher prio than sysex, but I can't see how you
could do that in MIDI.
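The merge rule I have in mind can be sketched as a small message-level routine (self-contained and hypothetical, operating on whole messages rather than real device I/O; `out` just collects the merged stream):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* While a sysex is open on the output, realtime messages pass
   straight through, and channel messages are held until EOX. */
typedef struct {
    int in_sysex;
    unsigned char held[64]; size_t n_held;   /* parked channel msgs  */
    unsigned char out[64];  size_t n_out;    /* the merged stream    */
} Merger;

static void put(unsigned char *buf, size_t *n,
                const unsigned char *m, size_t len) {
    memcpy(buf + *n, m, len);
    *n += len;
}

static void merge_msg(Merger *m, const unsigned char *msg, size_t len) {
    if (len == 1 && msg[0] >= 0xF8) {        /* realtime: never held */
        put(m->out, &m->n_out, msg, len);
    } else if (msg[0] == 0xF0) {             /* sysex begins         */
        m->in_sysex = 1;
        put(m->out, &m->n_out, msg, len);
        if (msg[len - 1] == 0xF7)            /* ...and maybe ends    */
            m->in_sysex = 0;
    } else if (m->in_sysex) {
        if (msg[len - 1] == 0xF7) {          /* EOX: close and flush */
            put(m->out, &m->n_out, msg, len);
            m->in_sysex = 0;
            put(m->out, &m->n_out, m->held, m->n_held);
            m->n_held = 0;
        } else if (msg[0] < 0x80) {          /* sysex continuation   */
            put(m->out, &m->n_out, msg, len);
        } else {
            put(m->held, &m->n_held, msg, len);  /* hold until EOX   */
        }
    } else {
        put(m->out, &m->n_out, msg, len);
    }
}
```

Buffer sizes and overflow handling are elided; the point is just the routing decision per message.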

>  The solution with PortMidi is polling: wake up every millisecond or so, 
> check for input and forward it to the output. You should use the same thread 
> to generate the application's midi stream so that THRU data is merged 
> properly with the generated midi output.

Yeah, this is what I'm doing, since Pm_Read doesn't block anyway.  The
thing is, that just pushes the sysex merge problem to me, forcing me
to either merge realtime sync msgs into the sysex myself, or use
Pm_WriteSysEx, which effectively makes sysex writes atomic like you
said, and locks up all MIDI output on that stream until it's done;
that means I couldn't both use sysex and have someone else sync to my
MIDI.  I'll probably wind up implementing the merge algorithm I
described above in my app.

Speaking of blocking and polling, I loop on Pm_Read, sleeping for 1ms
if none were read.  Why not block on a read that would return 0
events?

>  > What's so bad about portmidi having a big sysex buffer?
>  I think SysEx is really a basic problem of the MIDI spec. Whatever you 
> decide, there are arguments pro and con (some indicated above). If you want 
> SysEx without real-time messages embedded, you might be able to filter the 
> real-time messages using PortMidi's filtering capabilities. Otherwise, you're 
> going to have to take the sysex data out of PortMidi's buffer, strip off the 
> timestamps, and check for the EOX byte somewhere in your application, so it's 
> a very

Right, this is what I'm doing.

> small additional step to strip out real-time messages (maybe one line of
> code). This seemed (to me) to be a better approach than to buffer things
> internally in PortMidi. How big should a sysex buffer be? 100K? Wrong --
> not big enough for sample dumps. There's no upper bound. Is it OK to
> call malloc on the fly? Remember that you're only doing the buffering
> because there are real-time messages that need to get through. Can you
> afford to wait for malloc (an unbounded time) before delivering a
> real-time MIDI message? (Most of the time, I would agree the answer is
> yes, but I really don't want to tell users that all real-time bets are
> off if you are receiving SysEx data; some users actually use SysEx for
> short application-specific messages in real-time.)

Right, but the fact is that, if you want both sysex and realtime msgs,
*someone* has to do the buffering.  If it's not portmidi, it's the
app.  The app is going to have to allocate a fixed buffer to avoid
malloc(), which means that sysex+realtime requires a static buffer
size regardless.

Since people doing sysex+realtime are probably sending small sysexes,
I think you could do well by letting the user give a size for a fixed
buffer, and then allocating additional chunks dynamically after that.
That way, sysexes below the provided buffer size are guaranteed not to
call malloc(), but you'll still buffer those 10MB SDS msgs that take
the better part of an hour to send (does anyone really still use
those?), just without guaranteed timely realtime delivery in that
case.
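A sketch of that hybrid scheme (hypothetical structure; allocation failure handling elided): writes that fit in the preallocated region never touch the allocator, and only the overflow path calls realloc():

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hybrid sysex buffer: a fixed region of the user-chosen guaranteed
   size, plus a heap-backed overflow used only when it fills. */
typedef struct {
    unsigned char fixed[128];   /* stand-in for the user's chosen size */
    size_t used;
    unsigned char *overflow;    /* allocated only when needed          */
    size_t ov_used, ov_cap;
} SyxBuf;

/* Append data; returns 1 if this write had to allocate. */
static int syx_write(SyxBuf *b, const unsigned char *data, size_t len) {
    size_t room = sizeof b->fixed - b->used;
    size_t take = len < room ? len : room;
    memcpy(b->fixed + b->used, data, take);
    b->used += take;
    if (take == len)
        return 0;                         /* fit in the fixed part   */
    size_t rest = len - take;
    if (b->ov_used + rest > b->ov_cap) {  /* grow the overflow       */
        b->ov_cap = b->ov_cap ? b->ov_cap * 2 : 256;
        while (b->ov_cap < b->ov_used + rest)
            b->ov_cap *= 2;
        b->overflow = realloc(b->overflow, b->ov_cap);
    }
    memcpy(b->overflow + b->ov_used, data + take, rest);
    b->ov_used += rest;
    return 1;
}
```

Realtime delivery guarantees then degrade only on the return-1 path, which small sysexes never hit.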

MIDI is so slow compared to modern memory sizes that I don't think
sticking in a 1MB buffer is a big deal (that's about a 5 minute sysex
dump), and it will be enough for just about all users.  A theoretical
someone writing a MIDI player for a handheld (I bet even those have
GBs of memory now though) can just set the buffer low or filter sysex.

>  I changed "In some cases -- see below -- PortMidi ... " to "In some cases, 
> PortMidi ..." -- I don't know why "see below" was in there. Thanks.

Cool.  I think it would be nicer to say "see the implementation docs"
and then put it in the platform-specific readme.  If I can figure out
how to send an svn patch, I'd be happy to do it myself.

>  ABOUT TIMESTAMPS
...
>  To avoid more confusion, perhaps an example will help: if I provide a 
> timestamp of 5000, latency is 1, and time_proc returns 4990, then the desired 
> output time will be when time_proc returns timestamp+latency = 5001. This 
> will be 5001-4990 = 11 ms from now.

Much clearer!  That example in the comment would have been nice.

> > invalid assumption the os x code made about how CoreMIDI split up
>  > sysex msgs
>  That sounds familiar -- the CoreMIDI spec is ambiguous and I remember we 
> guessed wrong about some packet structure conventions. At least several 
> things have been fixed in the CoreMIDI implementation.

I just hooked up the sysex part and tried some dumps, and they seem to
agree with the synth when I send them back, so it looks like this bug
is fixed.

>  Thanks again for your input. I hope I can encourage you to use PortMidi 
> rather than rolling your own API. We've

I agree entirely.


BTW, I remember reading about nyquist in an issue of CMJ a long time
ago when I was in junior high school.  At the time, I thought it was
the only elegant music language I'd ever seen (having just suffered
through learning csound and getting started on common music), but was
discouraged by the xlisp implementation (at the time, crashing, in
addition to generally primitive error reporting and lack of all the
nice lisp features), lack of referential transparency (assign your
sound to a variable and the behaviour changes dramatically), a general
paucity of unit generators, and that whole no realtime thing
(actually, on my 486 I wasn't thinking about that yet).  They're
mostly just maturity issues, and I still haven't seen any systems that
I like as much from a theoretical point of view as nyquist.  Everyone
else (supercollider, kyma, reaktor, csound, ...) limits you to first
order programming (e.g. you can compose sounds, but you can't compose
compositions), and has a strict orchestra / score separation.

Reading about nyquist, even if I didn't use it much at the time,
started a long chain of thoughts which has finally resulted in me
wanting to write a sequencer today.  Even though the main plan is now
to render to midi and osc as well as csound and nyquist and whatever
else, and the main editing is graphical, part of the plan is to
incorporate composition and nyquist style behavioural abstraction.  So
thank you very much for the inspiration from way back then!  I still
have that issue of CMJ and have been looking at the articles again
lately as I think about how to implement behavioural abstraction,
especially wrt absolute and relative versions of tempo and my
flavor of graphical editing.

Since nyquist looks like it's still alive, and has even seen
development since then, I'll experiment with it some and see what
things look like now.  Is nyquist the end of the line for you for
music oriented languages, or are you working on something new
nowadays?
_______________________________________________
media_api mailing list
[email protected]
http://lists.create.ucsb.edu/mailman/listinfo/media_api
