> assuming no jitter in receiving the very first MIDI Status Byte (dunno about 
> this with Running Status), would a constant delay of 2 ms be better than a 
> jittery delay of 1 to 2 ms?  which is better?  which is more "realistic" 
> sounding?  i confess that i really do not know.
> 
Just a reminder that we’re pretty used to large jitter in hardware synths. I 
can’t speak to the latest synths, but a Roland D50 was a staple of my keyboard 
rack for years, and I would only play pads with it—I found the jitter too 
distracting to play percussive, piano-like sounds. I can withstand crazy 
latency if I have to—I used to play a pipe organ with pipes located high and 
wide in an auditorium, and my fingers would easily be three or four notes ahead 
of what I was hearing when playing Bach. But the D50 latency was highly 
variable, probably dependent on where a service loop was relative to the 
keypress, out to at least the 20 ms range, I'm pretty sure.

The OB-8 round trip was about 17 ms when the DSX sequencer was attached (it 
connected directly to the CPU bus and extended the loop time).

I replaced my D50 with a Korg Trinity, which gave me great relief on the 
latency issue. However, it became too spongy for me when switched to Combo 
mode, so I avoided that mode. It was in turn replaced by a Korg Triton Extreme 
after a burglary, which was better on the Combos. That was my last synth with a 
keyboard.

A piano has latency, but it’s less the faster you play. I can’t think of an 
acoustic instrument that jitters, offhand. But there’s still a lot of jitter 
between exactly when you'd like to play a note, ideally, and when you pluck, 
bow, or blow it.

Anyway, if you’ve played synths over the years, you’re used to substantial 
latency and jitter. You'll still get both when playing back from MIDI. 
Typically, synths poll the incoming MIDI messages in a processing loop.
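
For anyone who hasn't looked inside one, here's a minimal sketch of that kind 
of poll-then-render loop. Every name in it (midi_fifo_pop, handle_midi_byte, 
render_block, BLOCK_SIZE) is a placeholder made up for illustration, not any 
particular synth's firmware:

    /* Rough sketch of the usual poll-then-render structure.  All names are
       placeholders; real firmware differs in the details. */
    #define BLOCK_SIZE 32

    extern int  midi_fifo_pop(unsigned char *byte);   /* 1 if a byte was waiting */
    extern void handle_midi_byte(unsigned char byte); /* parser / voice dispatch */
    extern void render_block(float *out, int n);      /* synthesis for one block */

    void audio_block_callback(float *out)
    {
        unsigned char byte;

        /* Drain whatever MIDI arrived since the last block; a note that lands
           just after this point waits up to one block (about 0.7 ms at 44.1 kHz
           with 32-sample blocks) before it can start. */
        while (midi_fifo_pop(&byte))
            handle_midi_byte(byte);

        render_block(out, BLOCK_SIZE);
    }

That per-block polling is where a good chunk of the jitter comes from.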

So, other than the potential phase issues of coincident sounds in certain 
circumstances, I don’t think it matters in your question—2 ms delay or 1-2 ms 
jitter.


> On Aug 13, 2016, at 8:10 PM, robert bristow-johnson 
> <r...@audioimagination.com> wrote:
> 
>  
> so agreeing pretty much with everyone else, here are my $0.02 :
> 
> 
> 
> ---------------------------- Original Message ----------------------------
> Subject: Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync ?
> From: "David Olofson" <da...@olofson.net>
> Date: Sat, August 13, 2016 6:16 pm
> To: "A discussion list for music-related DSP" <music-dsp@music.columbia.edu>
> --------------------------------------------------------------------------
> 
> >
> > As to MIDI, my instinctive reaction is just, why even bother? 31250
> > bps. (Unless we're talking "high speed" MIDI-over-USB or something.)
> > No timestamps. You're not going to get better than ms level accuracy
> > with that, no matter what. All you can hope to do there, even with custom
> > hardware, is avoid making it even worse.
> >
> > BTW, I believe there already are MIDI interfaces with hardware
> > timestamping. MOTU Timepiece...?
> >
> > Finally, how accurate timing does one really need?
> >
> 
> first of all, the time required for 3 MIDI bytes (1 Status Byte and 2 Data 
> Bytes) is about 1 millisecond.  at least for MIDI 1.0 (5-pin DIN MIDI, i 
> dunno what it is for USB MIDI).  so there is that minimum delay to start with.
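
Side note on the arithmetic behind that figure: a 5-pin DIN MIDI byte is 10 
UART bits (start + 8 data + stop) at 31250 bits/s, so

    10 bits / 31250 bits/s = 320 µs per byte
    3 bytes x 320 µs       = 0.96 ms

i.e. just under a millisecond on the wire for a complete three-byte Channel 
Voice Message.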
> 
> and, say for a NoteOn (or other "Channel Voice Message" in the MIDI 
> standard), when do you want to calculate the future time stamp? based on the 
> time of arrival of the MIDI Status Byte (the first UART byte)? or based on 
> the arrival of the final byte of the completed MIDI message? what if you base 
> it on the former (which should lead to the most constant key-down to 
> note-onset delay) and, for some reason, the remaining MIDI Data Bytes don't 
> arrive within that constant delay period?  then you will have to put off the 
> note onset anyway, because you don't have all of the information you need to 
> define the note onset.
> 
> so i agree with everyone else that a future time stamp is not needed, even if 
> the approximate millisecond delay from key-down to note-onset is not nailed 
> down.
> 
> the way i see it, there are 3 or 4 stages to dealing with these MIDI Channel 
> Voice Messages:
> 
> 1. MIDI Status Byte received, but the MIDI message is not yet complete.  this 
> is your MIDI parser working like a state machine.  email me if you want C 
> code to demonstrate this.
> 
> 2. MIDI message is complete.  now you have all of the information about the 
> MIDI NoteOn (or Control message) and you have to take that information (the 
> MIDI note number and key velocity) and from that information and other 
> settings or states of your synth, you have to create (or change an existing 
> "idle" struct to "active") a new "Note Control Struct" which is a struct (or 
> object, if you're in C++) that contains all of the parameters and states of 
> your note while it proceeds or evolves in time (ya know, that ADSR thing).  
> once the Note Control Struct is all filled out, then your note can begin at 
> the next sampling instant (or at the next sample-block interrupt; if you're 
> buffering your samples in blocks of 8 or 16 or 32 samples, that buffering 
> adds another 0.7 millisecond of jitter to the note onset).
> 
> 3. while your note is playing, you are expecting to eventually receive a 
> NoteOff MIDI message for that note.  when that complete MIDI message is 
> received, you have to find the particular Note Control Struct that 
> corresponds to the MIDI channel and note number and modify that struct to 
> indicate that the note will begin dying off.  perhaps all you will do is 
> apply an exponentially decaying envelope to the final note amplitude, but you 
> *could* have an exit waveform of some sort.  you can't just instantly silence 
> the note because that will click or pop.
> 
> 4. assuming there's no "note stealing" going on, after the NoteOff message 
> arrived and when the Note Control Struct indicates that the note has 
> completely died off to an amplitude of zero, then the Note Control Struct can 
> be returned to an "idle" state and be ready for use for the next NoteOn.
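
To make the four stages above concrete, here is a rough C sketch of a 
three-byte Channel Voice Message parser plus the note-lifecycle bookkeeping 
around it. All of the names (midi_parser_t, note_control_t, NOTE_IDLE, and so 
on) are invented for this sketch; it ignores Running Status, System Real-Time 
bytes, messages with other data-byte counts, and voice stealing:

    #define NUM_VOICES 16

    typedef enum { NOTE_IDLE, NOTE_PLAYING, NOTE_RELEASING } note_state_t;

    typedef struct {            /* the "Note Control Struct" of stage 2         */
        note_state_t  state;
        unsigned char channel, note, velocity;
        float         env;      /* current envelope level                       */
    } note_control_t;

    static note_control_t voices[NUM_VOICES];

    typedef struct {            /* stage 1: the parser as a small state machine */
        unsigned char status;   /* last Status Byte seen                        */
        unsigned char data[2];
        int           count;    /* Data Bytes collected so far                  */
    } midi_parser_t;

    static void note_on(unsigned char ch, unsigned char note, unsigned char vel)
    {
        for (int i = 0; i < NUM_VOICES; i++)
            if (voices[i].state == NOTE_IDLE) {   /* stage 2: claim an idle struct */
                voices[i] = (note_control_t){ NOTE_PLAYING, ch, note, vel, 1.0f };
                return;
            }
        /* all voices busy: a real synth would steal one here */
    }

    static void note_off(unsigned char ch, unsigned char note)
    {
        for (int i = 0; i < NUM_VOICES; i++)      /* stage 3: start the release */
            if (voices[i].state == NOTE_PLAYING &&
                voices[i].channel == ch && voices[i].note == note)
                voices[i].state = NOTE_RELEASING;
    }

    void midi_parse_byte(midi_parser_t *p, unsigned char b)
    {
        if (b & 0x80) {                           /* Status Byte                 */
            p->status = b;
            p->count  = 0;
            return;
        }
        p->data[p->count++] = b;                  /* Data Byte                   */
        if (p->count < 2)
            return;                               /* message not complete yet    */
        p->count = 0;

        switch (p->status & 0xF0) {
        case 0x90:                                /* NoteOn (velocity 0 == NoteOff) */
            if (p->data[1] == 0) note_off(p->status & 0x0F, p->data[0]);
            else                 note_on (p->status & 0x0F, p->data[0], p->data[1]);
            break;
        case 0x80:                                /* NoteOff                     */
            note_off(p->status & 0x0F, p->data[0]);
            break;
        default:                                  /* other messages ignored here */
            break;
        }
    }

    /* stage 4, run once per sample or per block: let releasing notes decay,
       then return them to the idle pool once they are inaudible. */
    void update_envelopes(void)
    {
        for (int i = 0; i < NUM_VOICES; i++)
            if (voices[i].state == NOTE_RELEASING) {
                voices[i].env *= 0.999f;          /* crude exponential release   */
                if (voices[i].env < 1e-4f)
                    voices[i].state = NOTE_IDLE;
            }
    }

Per-sample rendering would read the PLAYING and RELEASING structs and scale 
each voice by its env value; that part is left out to keep the sketch short.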
> 
>  
> but with MIDI mergers and such, there is a jitter of a fraction of a 
> millisecond just receiving the complete MIDI message anyway.  as long as your 
> real-time synth dispatches the NoteOn and NoteOff immediately after the MIDI 
> message is received and before the next sample processing block (like 8 or 16 
> or 32 sample periods), you may already have a delay jittering between 1 and 
> maybe 2 milliseconds between key-down and note-onset anyway.  i think that 
> should be reasonably tolerable.  no?
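
That 1-to-2 ms budget falls straight out of the numbers already mentioned: 
roughly 0.96 ms on the wire for the three bytes, a fraction of a millisecond 
more if a merger is in the path, plus anywhere from zero to one block of 
buffering before the synth acts on the message, e.g.

    32 samples / 44100 Hz = 0.73 ms

so roughly 1 ms in the best case and around 1.7 ms in the worst.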
> 
> if not, then you really have to timestamp the note onset to be, say, 2 
> milliseconds (like 88 samples at 44.1 kHz) into the future from when the MIDI 
> Status Byte is first received.  you can do this whether you're processing each 
> sample one at a time or blocking samples together into 8 or 16 or 32 sample 
> blocks.
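
A minimal sketch of that future-stamping approach under block processing, which 
also covers the earlier worry about Data Bytes arriving late: the timestamp is 
fixed when the Status Byte arrives, and if the message only completes after the 
stamped time, the note simply starts as soon as it can. The names and the 
44.1 kHz / 32-sample / 88-sample numbers are assumptions for illustration only:

    #define SAMPLE_RATE  44100
    #define BLOCK_SIZE   32
    #define ONSET_DELAY  88           /* ~2 ms at 44.1 kHz                        */
    #define MAX_PENDING  64

    typedef struct {
        unsigned long onset_sample;   /* absolute sample index at which to start  */
        unsigned char channel, note, velocity;
        int           complete;       /* set by the parser once both Data Bytes
                                         have arrived                             */
        int           active;
    } pending_note_t;

    static pending_note_t pending[MAX_PENDING];
    static unsigned long  now;        /* running sample counter, stepped per block */

    /* Called when the Status Byte of a NoteOn is received: the timestamp is
       fixed here; channel/note/velocity get filled in as the Data Bytes arrive. */
    void stamp_note_onset(int slot)
    {
        pending[slot].onset_sample = now + ONSET_DELAY;
        pending[slot].complete     = 0;
        pending[slot].active       = 1;
    }

    /* Called once per audio block: start any pending note whose stamped time
       falls inside this block, at the correct offset within the block. */
    void start_due_notes(void)
    {
        for (int i = 0; i < MAX_PENDING; i++) {
            if (!pending[i].active || !pending[i].complete)
                continue;             /* not stamped, or Data Bytes still missing */
            if (pending[i].onset_sample >= now + BLOCK_SIZE)
                continue;             /* not due within this block                */
            unsigned long offset =
                (pending[i].onset_sample > now) ? pending[i].onset_sample - now : 0;
            /* ... begin the voice 'offset' samples into this block ... */
            pending[i].active = 0;
        }
        now += BLOCK_SIZE;
    }

With per-sample processing the same idea collapses to comparing onset_sample 
against the sample counter every sample; either way the 88-sample delay is 
constant, which is the whole point of the exercise.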
> 
> assuming no jitter in receiving the very first MIDI Status Byte (dunno about 
> this with Running Status), would a constant delay of 2 ms be better than a 
> jittery delay of 1 to 2 ms?  which is better?  which is more "realistic" 
> sounding?  i confess that i really do not know.
> 
> 
> --
> 
> r b-j                      r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
> _______________________________________________
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp

_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
