Joe,

I think some clarification is badly needed about what DRTIO does and does not do.

DRTIO gives you:
1) time transfer
2) low-latency low-level control of remote RTIO channels
3) an auxiliary low-bandwidth low-priority general-purpose data channel (which can be used for moninj, flashing boards, monitoring temperature, etc.)

It is *not* a general-purpose networking or distributed computing protocol.
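
To illustrate point 2, here is a minimal kernel sketch (the device name ttl_sat0 is a hypothetical device_db entry for a TTL output sitting on a DRTIO satellite): from the kernel's point of view, a remote channel is driven exactly like a local one.

    from artiq.experiment import *

    class RemotePulse(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            # Hypothetical device_db entry: a TTL output located on a DRTIO
            # satellite rather than in the master crate.
            self.setattr_device("ttl_sat0")

        @kernel
        def run(self):
            self.core.reset()
            delay(1*ms)
            # DRTIO carries the timestamped output event to the satellite's
            # RTIO FIFO; the call looks identical to driving a local channel.
            self.ttl_sat0.pulse(1*us)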

On Tuesday, November 08, 2016 11:13 PM, Joe Britton wrote:
>> Crossing each switch will incur 100ns-200ns of latency
>
> This has implications for some experiments. 10 m (10 km) fiber
> propagation is 48 ns (48 µs). Demonstration experiments involving
> heralded entanglement of a pair of nodes (2 crates) have a low
> probability of success (~1e-6) and are repeated continuously (~1 MHz).

Why does it have to be 2 crates? Are the hundreds of channels of a single crate not enough to drive a few ion traps? You'll have slow entanglement in your system at some point anyway, since you plan to go to long distances.
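
For reference, the arithmetic behind those numbers, as a rough sketch in plain Python (the per-metre and per-hop figures are the usual rule-of-thumb values, not measurements):

    FIBER_NS_PER_M = 4.9   # ~c/1.47 group velocity in silica fiber
    SWITCH_NS = 200        # per-hop DRTIO switch latency (upper figure above)

    def one_way_ns(length_m, hops=0):
        return length_m * FIBER_NS_PER_M + hops * SWITCH_NS

    for length_m in (10, 10_000):
        print(length_m, "m:",
              one_way_ns(length_m) / 1e3, "us one-way,",
              2 * one_way_ns(length_m) / 1e3, "us round-trip")
    # 10 m -> ~0.05 us one-way; 10 km -> ~49 us one-way, ~98 us round-trip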

>> 1) slower response times.
>> 2) blocking the kernel CPU by twice the latency (round-trip) when it needs to
>> enquire about the space available in a remote RTIO FIFO.

> Any implementation that requires round-trip communication to complete a
> DRTIO operation is very bad due to fiber/free-space propagation delays. To
> first order, all DRTIO should assume receiving devices are ready to receive,
> and handle errors by a) reporting to the master crate, b) logging for
> post-processing. To second order, it's fine for low-traffic advisory
> signaling like "FIFO 80% full." Plan for future deployments where
> communication propagation delays are hundreds of µs.

I advise against running DRTIO over such high-latency links. Even if we find all sorts of clever tricks to hide the latency in the "write to a remote FIFO" case, any branching would unavoidably require a round trip. Even toggling an output TTL in response to an input TTL edge would take 2x hundreds of µs.
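
That branch is easy to see in kernel code. A minimal sketch against a recent ARTIQ API (the device names ttl_in/ttl_out are hypothetical): the timestamp read-back before the if-statement is exactly the round trip in question when the input channel sits behind a DRTIO link.

    from artiq.experiment import *

    class EdgeResponder(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl_in")    # hypothetical input channel
            self.setattr_device("ttl_out")   # hypothetical output channel

        @kernel
        def run(self):
            self.core.reset()
            # Open a 10 ms detection window on the input channel.
            gate_end = self.ttl_in.gate_rising(10*ms)
            # Blocks until an edge arrives (or the window closes). If ttl_in
            # lives on a remote node, this read-back alone costs one full
            # link round trip before the branch below can be taken.
            t_edge = self.ttl_in.timestamp_mu(gate_end)
            if t_edge >= 0:
                # Respond a fixed latency budget after the observed edge.
                at_mu(t_edge + self.core.seconds_to_mu(100*us))
                self.ttl_out.pulse(1*us)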

Instead, the nodes should have more autonomy (e.g. contain their own CPUs) and the links should be just time transfer + general-purpose networking, i.e. White Rabbit. (The reasons we don't do DRTIO over White Rabbit are latency, Ethernet overhead for small packets, and difficulties in prioritizing traffic.)
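
To make the autonomous-node idea concrete, a rough host-side sketch; none of this is ARTIQ or White Rabbit API, and the node addresses and the arm/start protocol are purely hypothetical. The point is that the host only coordinates at low rate, while time-critical sequences run locally on each node against the shared timebase, so hundreds of µs of network latency do not matter.

    import json
    import socket

    # Hypothetical node controllers, one CPU per crate, reachable over
    # ordinary Ethernet.
    NODES = {
        "node_a": ("192.0.2.10", 7000),
        "node_b": ("192.0.2.11", 7000),
    }

    def send_command(node, command):
        host, port = NODES[node]
        with socket.create_connection((host, port), timeout=1.0) as sock:
            sock.sendall((json.dumps(command) + "\n").encode())
            return json.loads(sock.makefile().readline())

    # Arm both nodes to start their locally stored entanglement-attempt
    # sequence at an agreed absolute timestamp; no per-event round trips.
    for node in NODES:
        send_command(node, {"op": "arm",
                            "sequence": "herald_attempt",
                            "start_at_ns": 1_500_000_000})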

> A current implementation using soft-core switching seems an adequate
> compromise provided the system is designed in such a way that a future
> gateware implementation is straightforward.

> In anticipation of a future all-gateware implementation of DRTIO routing,
> is the use of a dedicated soft-core CPU helpful?

Not at all.

Sébastien

_______________________________________________
ARTIQ mailing list
https://ssl.serverraum.org/lists/listinfo/artiq
