so, agreeing pretty much with everyone else, here are my $0.02:




---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync ?
From: "David Olofson" <da...@olofson.net>
Date: Sat, August 13, 2016 6:16 pm
To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
--------------------------------------------------------------------------

> As to MIDI, my instinctive reaction is just, why even bother? 31250
> bps. (Unless we're talking "high speed" MIDI-over-USB or something.)
> No timestamps. You're not going to get better than ms level accuracy
> with that, no matter what. All you can hope to there, even with custom
> hardware, is avoid to make it even worse.
>
> BTW, I believe there already are MIDI interfaces with hardware
> timestamping. MOTU Timepiece...?
>
> Finally, how accurate timing does one really need?
>
first of all, the time required for 3 MIDI bytes (1 Status Byte and 2 Data Bytes) is about 1 millisecond, at least for MIDI 1.0 (5-pin DIN MIDI; i dunno what it is for USB MIDI). so there is that minimum delay to start with.
and, say for a NoteOn (or other "Channel Voice Message" in the MIDI standard), when do you want to calculate the future time stamp? based on the time of arrival of the MIDI Status Byte (the first UART byte)? or based on the arrival of the final byte of the completed MIDI message? what if you base it on the former (which should lead to the most constant key-down to note-onset delay) and, for some reason, the latter MIDI Data Bytes don't arrive during that constant delay period? then you will have to put off the note onset anyway, because you don't have all of the information you need to define the note onset.
so i agree with everyone else that a future time stamp is not needed, even if the approximate millisecond delay from key-down to note-onset is not nailed down.
the way i see it, there are 3 or 4 stages to dealing with these MIDI Channel Voice Messages:
1. MIDI Status Byte received, but the MIDI message is not yet complete. this is your MIDI parser working like a state machine. email me if you want C code to demonstrate this.
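a minimal sketch of such a parser state machine (my own illustration, not rbj's code; the type and function names are made up):

```c
#include <stdint.h>

/* minimal MIDI 1.0 Channel Voice parser; all names are illustrative */
typedef struct {
    uint8_t status;    /* last Status Byte seen (enables Running Status) */
    uint8_t data[2];   /* accumulated Data Bytes */
    int     count;     /* data bytes received so far */
    int     needed;    /* data bytes this status requires */
} midi_parser;

/* number of Data Bytes for each Channel Voice status (high nibble) */
static int data_bytes_needed(uint8_t status)
{
    switch (status & 0xF0) {
    case 0xC0:         /* Program Change   */
    case 0xD0:         /* Channel Pressure */
        return 1;
    default:           /* NoteOff, NoteOn, Poly Pressure, CC, Pitch Bend */
        return 2;
    }
}

/* feed one UART byte; returns 1 when a complete message sits in p */
int midi_parse_byte(midi_parser *p, uint8_t byte)
{
    if (byte & 0x80) {                /* Status Byte */
        if (byte >= 0xF0)
            return 0;                 /* System messages: ignored in this sketch */
        p->status = byte;
        p->count  = 0;
        p->needed = data_bytes_needed(byte);
        return 0;
    }
    if (p->status == 0)
        return 0;                     /* stray Data Byte with no status: discard */
    p->data[p->count++] = byte;       /* Running Status reuses p->status */
    if (p->count == p->needed) {
        p->count = 0;                 /* ready for Running Status data */
        return 1;
    }
    return 0;
}
```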
2. MIDI message is complete. now you have all of the information about the MIDI NoteOn (or Control message), and you have to take that information (the MIDI note number and key velocity) and, from that information and other settings or states of your synth, create a new "Note Control Struct" (or change an existing "idle" struct to "active") -- a struct (or object, if you're in C++) that contains all of the parameters and states of your note while it proceeds or evolves in time (ya know, that ADSR thing). once the Note Control Struct is all filled out, then your note can begin at the next sampling instance (or sample block interrupt; if you're buffering your samples in blocks of 8 or 16 or 32 samples, this causes another 0.7 millisecond of jitter on the note onset).
3. while your note is playing, you are expecting to eventually receive a NoteOff MIDI message for that note. when that complete MIDI message is received, you have to find the particular Note Control Struct that corresponds to the MIDI channel and note number, and modify that struct to indicate that the note will begin dying off. perhaps all you will do is apply an exponentially decaying envelope to the final note amplitude, but you *could* have an exit waveform of some sort. you can't just instantly silence the note, because that will click or pop.
4. assuming there's no "note stealing" going on, after the NoteOff message has arrived and when the Note Control Struct indicates that the note has completely died off to an amplitude of zero, then the Note Control Struct can be returned to an "idle" state and be ready for use for the next NoteOn.
but with MIDI mergers and such, there is a jitter of a fraction of a millisecond just in receiving the complete MIDI message anyway. as long as your real-time synth dispatches the NoteOn and NoteOff immediately after the MIDI message is received and before the next sample processing block (like 8 or 16 or 32 sample periods), you may already have a delay jittering between 1 and maybe 2 milliseconds between key-down and note-onset anyway. i think that should be reasonably tolerable. no?
if not, then you really have to timestamp the note onset to be, say, 2 milliseconds (like 88 samples at 44.1 kHz) into the future, past when the MIDI Status Byte is first received. you can do this either processing each sample one at a time, or by blocking samples together into 8 or 16 or 32 sample blocks.
assuming no jitter in receiving the very first MIDI Status Byte (dunno about this with Running Status), would a constant delay of 2 ms be better than a jittery delay of 1 to 2 ms? which is better? which is more "realistic" sounding? i confess that i really do not know.

--
r b-j                          r...@audioimagination.com
"Imagination is more important than knowledge."
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
