Hey, Scott --

You may run into unpleasant surprises if you assume a PC processor runs in
deterministic time under any circumstances.  Common knowledge
<http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing>
dictates that synchronization primitives, disk access and memory
allocations will give you a world of hurt -- I will assume you're familiar
with those principles.  The more insidious danger is caching -- even
without the OS troubling you, if anything in the system is liable to have
you working outside of a *very* small area of memory (often in the range
of 256B - 4KB, depending on the CPU), you're going to get a mix of L1 and
L2 cache hits and misses, which will cause dramatic fluctuations in time
usage.
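
To make "small working set" concrete: if everything one voice touches per
sample lives in a single compact struct, a whole voice bank fits in a few
cache lines and the hot loop rarely leaves L1.  A rough C sketch, with
sizes and names that are purely illustrative:

    /* Illustration only: pack the per-sample state of one voice into a
       small contiguous struct so the hot loop stays cache-resident. */
    typedef struct {
        float phase;      /* oscillator phase in [0,1)               */
        float phase_inc;  /* frequency as phase increment per sample */
        float env;        /* envelope level                          */
        float env_coef;   /* one-pole envelope decay coefficient     */
        float lp_z1;      /* low-pass filter state                   */
        float lp_coef;    /* low-pass filter coefficient             */
    } voice_t;            /* 24 bytes; a whole voice bank fits
                             comfortably in L1 */

    static void render(voice_t *v, int nvoices, float *out, int nframes)
    {
        for (int i = 0; i < nframes; ++i)
            out[i] = 0.0f;
        for (int k = 0; k < nvoices; ++k) {
            voice_t s = v[k];            /* local copy for the hot loop */
            for (int i = 0; i < nframes; ++i) {
                s.phase += s.phase_inc;
                if (s.phase >= 1.0f) s.phase -= 1.0f;
                float osc = 2.0f * s.phase - 1.0f;  /* naive saw, for brevity */
                s.env *= s.env_coef;
                s.lp_z1 += s.lp_coef * (osc * s.env - s.lp_z1);
                out[i] += s.lp_z1;
            }
            v[k] = s;                    /* write state back once per block */
        }
    }

Scatter that same state across the heap (per-voice allocations, linked
lists, cold code paths) and the working set balloons, and the timing stops
being anything like deterministic.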

Generally speaking, writing high-performance DSP for modern CPUs is a
matter of optimizing for the worst case.  Buffering improves cache
performance quite a lot, which is why CPU usage rises dramatically as the
buffer size falls -- and also why you almost never see buffers smaller than
about 16 samples.
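
In code terms, "buffering" means the per-block overhead -- draining the
MIDI/control queue, updating parameters, warming the caches for the hot
loop -- is paid once per buffer instead of once per sample.  A bare-bones
callback sketch, reusing the render() above (the callback signature is
hypothetical, not any particular API):

    typedef struct {
        voice_t voices[16];
        int     nvoices;
    } synth_state;

    /* Hypothetical host callback: asked for 'nframes' samples at a time. */
    void audio_callback(float *out, int nframes, void *user)
    {
        synth_state *s = (synth_state *)user;

        /* Control-rate work, once per block: drain the MIDI FIFO,
           update smoothed-parameter targets, and so on. */

        /* Audio-rate work: one tight loop over the whole buffer. */
        render(s->voices, s->nvoices, out, nframes);
    }

Halve the buffer and all of that per-block cost happens twice as often for
the same number of samples, which is where the CPU-usage curve comes from.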
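
On Theo's fixed-delay suggestion below: one way to picture it is a tiny
scheduler that stamps each event on arrival and only acts on it a constant
number of samples later, so jitter in the transport turns into a constant,
known latency.  A rough sketch -- the names and the 64-sample figure are
mine, purely illustrative, and a real one would want a proper lock-free
queue between the receiving thread and the audio thread:

    #include <stdint.h>

    #define FIXED_DELAY 64           /* deliberate latency, in samples      */
    #define QUEUE_LEN   256          /* power of two, for cheap wrap-around */

    typedef struct {
        uint64_t due;                /* sample clock at which to act        */
        uint8_t  status, d1, d2;     /* raw MIDI bytes                      */
    } midi_event;

    static midi_event queue[QUEUE_LEN];
    static unsigned   q_head, q_tail;

    /* Called when a time-stamped message arrives from the FPGA/UART. */
    void on_midi_in(uint64_t stamp, uint8_t status, uint8_t d1, uint8_t d2)
    {
        midi_event *e = &queue[q_tail++ & (QUEUE_LEN - 1)];
        e->due    = stamp + FIXED_DELAY;   /* constant offset */
        e->status = status;  e->d1 = d1;  e->d2 = d2;
    }

    /* Called from the audio thread with the current sample clock. */
    void dispatch_due_events(uint64_t now)
    {
        while (q_head != q_tail &&
               queue[q_head & (QUEUE_LEN - 1)].due <= now) {
            midi_event *e = &queue[q_head++ & (QUEUE_LEN - 1)];
            /* Start/stop the voice here; the note always lands exactly
               FIXED_DELAY samples after its time stamp. */
            (void)e;
        }
    }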

Evan Balster
creator of imitone
http://imitone.com

On Mon, Feb 1, 2016 at 9:12 AM, Theo Verelst <theo...@theover.org> wrote:

> Scott Gravenhorst wrote:
>
>> I'm looking for advice regarding the design of a MIDI synthesizer.
>> ...
>>
>> Advice regarding this endeavor would be appreciated.  Questions include
>> which of the transfer methods will come closest to my goal of low latency
>> (from time of MIDI message receipt to ..
>>
>
> Hi Scott,
>
> Good to hear you're working with the new Pi; it seems you've answered a
> lot of your own questions already!
>
> I haven't looked into everything about the time slicer, memory management
> (plus the associated kernel sources on page and segment management), the
> connections of the various hardware controls with the RPi cores, or the
> various memory-bus and other kinds of contention going on, but I do recall
> from the Parallella forums that someone was using a Linux boot-command
> alteration to turn the 2-core Zynq into essentially 1-core Linux plus a
> free core; if you want, I could look that up for you.
>
> My experiments with cores and MIDI (like my long-ago DSO-based design
> http://www.theover.org/Synth ) have been very good in the sense of getting
> low latency going, so I'd have some interest myself in sensibly connecting
> the graphics-accelerated, 4-USB-port RPi to (FPGA-based) synthesis
> modules. It seems to me, given that any response speed up to per-sample
> accurate (i.e. one audio sample of delay between receipt of a message and
> the start of a tone) could be your target, the following setup might be
> interesting to think about (not necessarily to implement).
>
> One or more MIDI messages -- or a message coming from a relatively simple
> piece of FPGA logic that scans a musical keyboard much faster than MIDI
> does (I have some old keyboards lying around that I wouldn't mind
> turbo-charging) -- could be time-stamped by the FPGA, sent relatively fast
> (though it doesn't need to be super fast) to, for instance, the RPi (or a
> Zynq-based Linux process, or, as in my case, a classic version 1 RPi),
> processed along with the time stamp, sent back to the FPGA or on to a
> software module, and then played WITH FIXED DELAY into the chosen audio
> stream. That way, the latency can be small (though not near zero, which
> in a real-time OS is harder), but constant.
>
> Theo V.
>
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
