Measuring…as for my D50, it seemed like at least 20 ms, conservatively (I think 
it was more). But that's just a gut feeling from designing microprocessor 
synths on my own in the distant past. Just did a search and saw some other D50 
latency complaints, but no times. The D550 was supposed to be an improvement. 
To me, the D50 was barely playable except for pads, so I'm surprised there 
weren't more complaints. But then, as a pianist, I always felt uncomfortable 
playing piano-like rhythmic stuff on a synth keyboard, so that probably 
heightens my awareness.

As for the OB8, I recall Jim Cooper (JL Cooper Electronics, and designer of 
early Oberheim synths) sent Tom O a note sort of scolding (or raising an 
eyebrow) about the 17 ms loop time for the OB8 when the DSX was added. (Jim 
made various add-ons to Oberheim stuff after leaving the company, so I'm sure 
that's why he was scoping it out.) But I already knew that from my own 
checking; I joined the team mid-OB8 development, after the hardware was 
designed, and worked on a bit of the software, mainly the arpeggiator, so I 
knew the loop details pretty well. I believe the loop time was around 12 ms 
without the DSX. Remember, we're talking a Z80, with software just starting to 
take over some of the control path (the Xpander used dual complementary 6809s, 
which had a hardware multiply, and was the first to have full software 
modulation). A Z80 that reads the keyboard (eight keys at a time on a parallel 
port), scans the front panel, etc. Notes were updated and triggered when the 
loop got around to that point.
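
Just to make that concrete, here's a rough C sketch of that kind of service 
loop (not the actual firmware, which was Z80 assembly; the names and stubbed 
hardware reads here are made up). The point is that a key press only gets 
noticed when the loop comes back around to the keyboard scan, so key-down to 
note-trigger latency jitters between roughly zero and one full loop time, and 
everything added to the loop (panel scan, sequencer service) stretches that 
time:

#include <stdint.h>

#define KEY_COLUMNS 8   /* keyboard read eight keys at a time, one port read */

/* Hypothetical hardware-access stubs, just so this compiles. */
static uint8_t read_key_column(int col) { (void)col; return 0; }
static void    scan_front_panel(void) { }
static void    service_sequencer(void) { }  /* DSX traffic, if attached */
static void    update_voices(const uint8_t keys[KEY_COLUMNS]) { (void)keys; }

int main(void)
{
    uint8_t keys[KEY_COLUMNS];

    for (;;) {                                  /* the main service loop */
        for (int col = 0; col < KEY_COLUMNS; col++)
            keys[col] = read_key_column(col);   /* scan the keyboard     */

        scan_front_panel();      /* knobs and switches add loop time     */
        service_sequencer();     /* adds still more when the DSX is on   */

        update_voices(keys);     /* notes are triggered only HERE, once  */
    }                            /* per trip around the loop             */
}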

Of course, I agree that on modern equipment this is unnecessary. I'm just 
giving an idea of what people have historically dealt with, without too much 
complaining.

I've long figured that my limit is about 6 ms, so it's interesting that you 
gave that number for drummers. When I wanted to play Ivory, the sampled piano 
plugin, it just started to feel OK to me with a 128-sample buffer in Digital 
Performer, and 64 felt much better. But note that DP used double buffers until 
recently, so at 44.1 kHz, 128 samples x 2 yielded about 5.8 ms, not counting 
the MIDI delay.
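
And for anyone who wants the arithmetic behind those numbers, a quick sanity 
check (just the buffer math, assuming double buffering; MIDI transmission, 
converter, and driver delays are ignored):

#include <stdio.h>

int main(void)
{
    const double fs = 44100.0;              /* sample rate                 */
    const int    sizes[] = { 32, 64, 128 }; /* buffer sizes, in samples    */
    const int    num_buffers = 2;           /* DP's double buffering       */

    for (int i = 0; i < 3; i++) {
        double ms = num_buffers * sizes[i] / fs * 1000.0;
        printf("%3d samples x %d buffers @ 44.1 kHz = %.2f ms\n",
               sizes[i], num_buffers, ms);  /* 128 x 2 comes to ~5.8 ms    */
    }
    return 0;
}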


> On Aug 14, 2016, at 9:34 PM, robert bristow-johnson 
> <r...@audioimagination.com> wrote:
> 
>  
> well, i'm not much of a musician (i wish i were).  but i am honestly 
> surprised reading the magnitude of NoteOn delay and possible jitter.  i 
> *just* cannot imagine why *anything* like 10 ms would be needed to get the 
> note going.  how are you measuring this Nigel?
> 
> 1 ms to receive the entire MIDI NoteOn message. perhaps, because of MIDI 
> merge there is an additional (and uncertain) delay of another ms.
> 
> samples should not be buffered any more than 32 or, at the outside, 64 
> samples.  i am *not* one of them process-samples-one-at-a-time partisans and 
> i have had plenty of arguments with them (not buffering samples and not 
> processing them in blocks is very inefficient, unnecessarily inefficient).  a 
> buffer of 32 samples at 44.1 or 48 kHz is plenty long.  64 samples at 96 kHz. 
>  that comes out to be 0.666 millisecond of possible jitter since the actual 
> note onset will begin at a 32 sample (or 64 sample at 96 kHz) block boundary. 
>  but the complete MIDI NoteOn can happen any time in the previous 666 
> microseconds, so there is an uncertainty of delay of that amount.
> 
> for a piece of dedicated hardware with no stupid-ass Windows operating system 
> in the way, i just cannot understand why there would be any other need for 
> delay or uncertainty in delay.  well, if note stealing is necessary, that 
> would add some uncertainty, but let's say that your synth can play 10 
> times more notes than you have fingers to play them.  then note stealing 
> should not be necessary.
> 
> i just can't see, unless the computer is jerking off, why any MIDI synth 
> needs more than, say, 3 ms.  with a jitter of 2 ms (i.e. the key-down to 
> note-onset must be at least 1 ms with a possible 0 to 2 ms further delay due 
> to MIDI merge issues or when the complete MIDI message is received before the 
> following 32-sample block.  and there's no good fucking reason why the sample 
> block should be longer.)
> 
> when i was working on pitch detection and pitch shifting while at Eventide 
> and another employer, the pitch detection had delays of about 15 or 20 ms and 
> the mean pitch shifting delay was about the same.  and we got complaints.  (i 
> couldn't understand a lot of it, would the guitarist complain about walking 
> 20 feet away from his amp and playing? but i know that this and other effects 
> delays add up.)  i was told that drummers can handle at most 6 ms between 
> their mic'd kit and the monitor speakers (or maybe it was the delay from the 
> bass or lead to the drums monitor).  still doesn't explain someone's 
> intolerance of 32-sample double buffering (and that added to ADC and DAC 
> delays) which is a little more than 1 ms.
> 
> but i can't, for the life of me, understand why any hardware product would 
> need a latency of 17 or 20 ms.  that's a factor of 10 longer than the minimum 
> delay i can account for.
> 
>  
> r b-j
> 
> 
> ---------------------------- Original Message ----------------------------
> Subject: Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync ?
> From: "Nigel Redmon" <earle...@earlevel.com>
> Date: Sun, August 14, 2016 1:58 pm
> To: "robert bristow-johnson" <r...@audioimagination.com>
> music-dsp@music.columbia.edu
> --------------------------------------------------------------------------
> 
> >> assuming no jitter in receiving the very first MIDI Status Byte (dunno 
> >> about this with Running Status), would a constant delay of 2 ms be better 
> >> than a jittery delay of 1 to 2 ms? which is better? which is more 
> >> "realistic" sounding? i confess that i really do not know.
> >>
> > Just a reminder that we’re pretty used to large jitter in hardware synths. 
> > I can’t speak to the latest synths, but a Roland D50 was a staple of my 
> > keyboard rack for years, and I would only play pads with it—I found the 
> > jitter too distracting to play percussive, piano-like sounds. I can 
> > withstand crazy latency if I have to—I used to play a pipe organ with pipes 
> > located high and wide in an auditorium, and my fingers would be easily 
> > three or four notes ahead of what I was hearing when playing Bach. But the 
> > D50 latency was highly variable, probably dependent on where a service loop 
> > was relative to the keypress, out to at least the 20 ms range I’m pretty 
> > sure.
> >
> > The OB8 round trip was about 17 ms when the DSX sequencer was attached (it 
> > connected directly to the cpu bus, and extended the loop time).
> >
> > I replaced my D50 with a Korg Trinity, which gave me great relief on the 
> > latency issue. However, it became too spongy for me when switched to Combo 
> > mode, so I avoided it. It was in turn replaced by a Korg Triton Extreme 
> > after a burglary, which was better on the Combos. That was my last synth 
> > with a keyboard.
> >
> > A piano has latency, but it’s less the faster you play. I can’t think of an 
> > acoustic instrument that jitters, offhand. But there’s still a lot of 
> > jitter between exactly when you’d like to play a note, ideally, and when 
> > you pluck, bow, or blow it.
> >
> > Anyway, if you’ve played synths over the years, you’re used to substantial 
> > latency and jitter. You’ll still get it playing back from MIDI. Typically, 
> > they poll the incoming notes in a processing loop.
> >
> > So, other than the potential phase issues of coincident sounds in certain 
> > circumstances, I don’t think it matters in your question—2 ms delay or 1-2 
> > ms jitter.
> >
> >
> >> On Aug 13, 2016, at 8:10 PM, robert bristow-johnson 
> >> <r...@audioimagination.com> wrote:
> >>
> >>
> >> so agreeing pretty much with everyone else, here are my $0.02 :
> >>
> >>
> >>
> >> ---------------------------- Original Message ----------------------------
> >> Subject: Re: [music-dsp] Anyone think about using FPGA for Midi/audio sync 
> >> ?
> >>
> From: "David Olofson" <da...@olofson.net>
> >> Date: Sat, August 13, 2016 6:16 pm
> >> To: "A discussion list for music-related DSP" 
> >> <music-dsp@music.columbia.edu>
> >> --------------------------------------------------------------------------
> >>
> >> >
> >> > As to MIDI, my instinctive reaction is just, why even bother? 31250
> >> > bps. (Unless we're talking "high speed" MIDI-over-USB or something.)
> >> > No timestamps. You're not going to get better than ms level accuracy
> >> > with that, no matter what. All you can hope to do there, even with custom
> >> > hardware, is to avoid making it even worse.
> >> >
> >> > BTW, I believe there already are MIDI interfaces with hardware
> >> > timestamping. MOTU Timepiece...?
> >> >
> >> > Finally, how accurate timing does one really need?
> >> >
> >>
> >> first of all, the time required for 3 MIDI bytes (1 Status Byte and 2 Data 
> >> Bytes) is about 1 millisecond. at least for MIDI 1.0 (5-pin DIN MIDI, i 
> >> dunno what it is for USB MIDI). so there is that minimum delay to start 
> >> with.
> >>
> >> and, say for a NoteOn (or other "Channel Voice Message" in the MIDI 
> >> standard), when do you want to calculate the future time stamp? based on 
> >> the time of arrival of the MIDI Status Byte (the first UART byte)? or 
> >> based on the arrival of the final byte of the completed MIDI message? what 
> >> if you base it on the former (which should lead to the most constant 
> >> key-down to note-onset delay) and, for some reason, the latter MIDI Data 
> >> Bytes don't arrive during that constant delay period? then you will have 
> >> to put off the note onset anyway, because you don't have all of the 
> >> information you need to define the note onset.
> >>
> >> so i agree with everyone else that a future time stamp is not needed, even 
> >> if the approximate millisecond delay from key-down to note-onset is not 
> >> nailed down.
> >>
> >> the way i see it, there are 3 or 4 stages to dealing with these MIDI 
> >> Channel Voice Messages:
> >>
> >> 1. MIDI status byte received, but the MIDI message is not yet complete. 
> >> this is your MIDI parser working like a state machine. email me if you 
> >> want C code to demonstrate this.
> >>
> >> 2. MIDI message is complete. now you have all of the information about the 
> >> MIDI NoteOn (or Control message) and you have to take that information 
> >> (the MIDI note number and key velocity) and from that information and 
> >> other settings or states of your synth, you have to create (or change an 
> >> existing "idle" struct to "active") a new "Note Control Struct" which is a 
> >> struct (or object, if you're using C++) that contains all of the parameters 
> >> and 
> >> states of your note while it proceeds or evolves in time (ya know, that 
> >> ADSR thing). once the Note Control Struct is all filled out, then your 
> >> note can begin at the next sampling instance (or sample block interrupt, 
> >> if you're buffering your samples in blocks of 8 or 16 or 32 samples, this 
> >> causes another 0.7 millisecond of jitter on the note onset).
> >>
> >> 3. while your note is playing, you are expecting to eventually receive a 
> >> NoteOff MIDI message for that note. when that complete MIDI message is 
> >> received, you have to find the particular Note Control Struct that 
> >> corresponds to the MIDI channel and note number and modify that struct to 
> >> indicate that the note will begin dying off. perhaps all you will do is 
> >> apply an exponentially decaying envelope to the final note amplitude, but 
> >> you *could* have an exit waveform of some sort. you can't just instantly 
> >> silence the note because that will click or pop.
> >>
> >> 4. assuming there's no "note stealing" going on, after the NoteOff message 
> >> arrived and when the Note Control Struct indicates that the note has 
> >> completely died off to an amplitude of zero, then the Note Control Struct 
> >> can be returned to an "idle" state and be ready for use for the next 
> >> NoteOn.
> >>
> >>
> >> but with MIDI mergers and such, there is a jitter of a fraction of a 
> >> millisecond just receiving the complete MIDI message anyway. as long as 
> >> your real-time synth dispatches the NoteOn and NoteOff immediately after 
> >> the MIDI message is received and before the next sample processing block 
> >> (like 8 or 16 or 32 sample periods), you may already have a delay 
> >> jittering between 1 and maybe 2 milliseconds between key-down and 
> >> note-onset anyway. i think that should be reasonably tolerable. no?
> >>
> >> if not, then you really have to timestamp the note onset to be, say, 2 
> >> milliseconds (like 88 samples) into the future past when the MIDI Status 
> >> Byte is first received. you can do this either processing each sample, one 
> >> at a time, or if blocking samples together in 8 or 16 or 32 sample blocks.
> >>
> >> assuming no jitter in receiving the very first MIDI Status Byte (dunno 
> >> about this with Running Status), would a constant delay of 2 ms be better 
> >> than a jittery delay of 1 to 2 ms? which is better? which is more 
> >> "realistic" sounding? i confess that i really do not know.
> >>
> >>
> >> --
> >>
> >> r b-j r...@audioimagination.com
> >>
> >> "Imagination is more important than knowledge."
> >>
> >
> >
> 
> 
> --
> 
> r b-j                      r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 

_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
