Hi,

I've done some quick testing of the sustained performance the system
achieves when pushing RTIO switch commands with minimal processing
between them (advancing the timestamp and managing the loop). It turned
out that a new switch command can be programmed every 1.7 us (i.e. a
continuous square wave, which contains both switch-on and switch-off
commands, has a minimum period of 3.4 us). You can run the test yourself
with the examples/pulse_performance.py program in the repository.

On the KC705, the clock frequency rises from 80 MHz to 125 MHz, and I
expect this event-processing time to scale down linearly to about 1.1 us.
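To make the scaling explicit, here is the arithmetic behind the 1.1 us
figure, assuming the event-processing time is dominated by CPU cycles
and therefore scales inversely with the system clock:

```python
# Assumption: event-processing time is CPU-bound, so it scales
# inversely with the system clock frequency.
papilio_clock_mhz = 80    # Papilio Pro system clock
kc705_clock_mhz = 125     # KC705 system clock
event_time_us = 1.7       # measured per-event time on the Papilio Pro

scaled_us = event_time_us * papilio_clock_mhz / kc705_clock_mhz
print(round(scaled_us, 1))  # 1.1
```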

This number applies only to sustained switching, i.e. with the OpenRISC
CPU continuously generating commands and streaming them into the RTIO
core. If the commands are generated in advance and placed into the RTIO
FIFOs, there is no such limit. On the Papilio Pro, each FIFO is 64
entries deep per channel.
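To illustrate the distinction, here is a toy model (not ARTIQ code; the
function name and behavior are mine) of the two regimes: bursts that fit
in a FIFO are not rate-limited by the CPU, while longer streams fall
back to the sustained per-event time:

```python
# Toy model contrasting FIFO-buffered playback with sustained
# CPU-driven generation. Numbers match the text: 1.7 us per event
# sustained, 64-entry FIFO per channel on the Papilio Pro.
SUSTAINED_EVENT_US = 1.7
FIFO_DEPTH = 64

def achievable_event_period_us(n_events, requested_period_us):
    """Per-event period achievable for a burst of n_events.
    Bursts that fit in the FIFO can run at the requested period;
    longer streams are limited by the sustained CPU rate."""
    if n_events <= FIFO_DEPTH:
        return requested_period_us
    return max(requested_period_us, SUSTAINED_EVENT_US)

# A 64-event burst can be pre-programmed and played back fast...
print(achievable_event_period_us(64, 0.05))      # 0.05
# ...but a long stream is limited to one event every 1.7 us.
print(achievable_event_period_us(100000, 0.05))  # 1.7
```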

Should I make it a priority to optimize this (the CPU/RTIO communication
via CSR looks like a good suspect), or is it going to be enough for the
near future?

Sustained input performance should be roughly comparable, though I have
not actually tested it yet. I have noted that the Penning lab needs an
event-processing time better than 0.6 us to 2 us, with up to 30k events,
for PMT pulses (note that one pulse is only one event, since the RTIO
core can filter inputs by edge type). But if only the count is important
(not the timestamps of individual pulses), it is easy to do some
count-specific software optimizations or even put the counter in
gateware.
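For clarity on why one pulse is only one event: the input side can be
configured to record, say, only rising edges, so each pulse contributes
a single count. A minimal software illustration (my own sketch, not the
RTIO gateware):

```python
# Counting only rising edges of a sampled PMT-like digital input,
# so each pulse yields exactly one count regardless of its width.
def count_rising_edges(samples):
    count = 0
    prev = 0
    for s in samples:
        if s and not prev:  # 0 -> 1 transition
            count += 1
        prev = s
    return count

# Three pulses of varying width -> three counts.
print(count_rising_edges([0, 1, 1, 0, 1, 0, 0, 1]))  # 3
```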

Sébastien
_______________________________________________
ARTIQ mailing list
https://ssl.serverraum.org/lists/listinfo/artiq
