> Once the full, end-to-end oob chain between the app and the wire including
> the driver is enabled, we should get interesting figures (bypassing the
> softirq context entirely). Forward looking statement, I agree. Working
> on it.

Looking forward to this. If I can be of any help with this, I’d be glad to.

Thanks for the recommendations, by the way. I will try to make my current setup
a little bit better with what’s available to me via EVL.
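
As a rough starting point, below is the kind of measurement loop I have in
mind, attached to the EVL core. It is only a sketch: exchange_one_packet()
is a hypothetical placeholder for the actual raw-packet round trip, and I'm
assuming the evl_attach_self()/evl_read_clock() calls as documented in
libevl.

#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <evl/thread.h>
#include <evl/clock.h>

/* Hypothetical helper standing in for one raw-packet round trip
 * over the EVL-enabled NIC; not an actual libevl call. */
extern int exchange_one_packet(void);

static long diff_ns(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1000000000L +
                (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
        struct timespec t0, t1;
        long d, worst_ns = 0;
        int efd, n;

        /* Attach this thread to the EVL core so the loop runs out-of-band. */
        efd = evl_attach_self("latency-probe:%d", getpid());
        if (efd < 0)
                return 1;

        for (n = 0; n < 100000; n++) {
                evl_read_clock(EVL_CLOCK_MONOTONIC, &t0);
                if (exchange_one_packet())      /* placeholder round trip */
                        break;
                evl_read_clock(EVL_CLOCK_MONOTONIC, &t1);
                d = diff_ns(&t0, &t1);
                if (d > worst_ns)
                        worst_ns = d;
        }

        printf("worst-case round trip: %ld us\n", worst_ns / 1000);

        return 0;
}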

> On 10 Nov 2021, at 16:57, Philippe Gerum <r...@xenomai.org> wrote:
> 
> 
> Deniz Uğur <deniz...@gmail.com> writes:
> 
>> 200-300 microseconds worst-case latency is demanding; this is in the
>> same ballpark as the figures I obtained with RTnet/Xenomai3.1 between
>> two mid-range SoCs (x86_64 and i.MX6Q) attached to a dedicated switch,
>> over a 100Mbit link.
>> 
>> I assume this is only the communication latency and nothing else. As I have
>> to read from SPI, which takes 50-60 us, these measurements would be higher, I
>> guess.
> 
> Correct.
> 
>> 
>> Going for a kernel bypass on the rpi4 by coupling DPDK and EVL would at
>> the very least require a DPDK-enabled GENET driver, which does not seem
>> to exist.
>> 
>> That would’ve been splendid, to be honest. I didn’t know about DPDK until
>> now, but the premise sounds quite good.
>> 
> 
> I'm not a DPDK expert, but there are folks on this list with significant
> knowledge about DPDK who might want to discuss this.
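
For reference, this is roughly what a DPDK poll-mode receive loop looks
like. It is a sketch only: it assumes a port with a working PMD already
bound and configured, which is exactly what is missing for GENET, and the
EAL/port/queue setup is omitted.

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PORT_ID  0      /* assumes a DPDK-capable NIC bound to port 0 */
#define BURST_SZ 32

int main(int argc, char **argv)
{
        struct rte_mbuf *bufs[BURST_SZ];
        uint16_t nb, i;

        if (rte_eal_init(argc, argv) < 0)
                return 1;

        /* rte_eth_dev_configure()/queue setup omitted for brevity. */

        for (;;) {
                /* Poll the NIC directly from userland: no interrupts,
                 * no softirq, no kernel socket layer in the path. */
                nb = rte_eth_rx_burst(PORT_ID, 0, bufs, BURST_SZ);
                for (i = 0; i < nb; i++) {
                        /* ... process bufs[i] ... */
                        rte_pktmbuf_free(bufs[i]);
                }
        }

        return 0;
}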
> 
>> Another option would be to check whether you could work directly at
>> packet level for now on the rpi4, based on the EVL networking
>> layer. This is experimental WIP, but this is readily capable of
>> exchanging raw packets between peers. See [1].
>> 
>> If I understand correctly, EVL’s networking layer doesn’t support TCP/UDP
>> at the moment and I would have to implement the sliding window myself
>> with raw packets. Correct?
>> 
> 
> Correct. In userland, which would make things easier.
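
To make that concrete: a stop-and-wait acknowledgment loop is the
degenerate case of a sliding window (window size 1), and a userland sketch
could look like the following. send_raw() and recv_raw() are hypothetical
placeholders for whatever raw-packet primitives end up being used; the
framing and retry policy here are assumptions, not EVL API.

#include <stdint.h>
#include <string.h>

/* Hypothetical raw-packet primitives; stand-ins for the actual
 * raw socket calls, which are not spelled out here. */
extern int send_raw(const void *frame, size_t len);
extern int recv_raw(void *frame, size_t len, int timeout_ms);

struct hdr {
        uint32_t seq;   /* sequence number of this frame */
        uint32_t ack;   /* last sequence number seen from the peer */
};

/* Send one payload reliably: retransmit until the peer acknowledges
 * the sequence number. Window size is 1 here; a real sliding window
 * keeps several unacknowledged frames in flight. Assumes the payload
 * fits in a single frame. */
static int send_reliable(uint32_t seq, const void *data, size_t len)
{
        unsigned char frame[1500], reply[1500];
        struct hdr h = { .seq = seq, .ack = 0 }, rh;
        int retries;

        memcpy(frame, &h, sizeof(h));
        memcpy(frame + sizeof(h), data, len);

        for (retries = 0; retries < 8; retries++) {
                if (send_raw(frame, sizeof(h) + len) < 0)
                        return -1;
                if (recv_raw(reply, sizeof(reply), 1 /* ms */) <= 0)
                        continue;       /* timeout: retransmit */
                memcpy(&rh, reply, sizeof(rh));
                if (rh.ack == seq)
                        return 0;       /* acknowledged */
        }

        return -1;      /* peer unreachable */
}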
> 
>> —
>> 
>> To sum up, we don’t have many options for kernel bypassing on the RPi 4.
>> 
>> I don't think the common Linux stack is an option with such a requirement
>> under load, especially since you would have to channel the network
>> traffic
>> 
>> Along with that, EVL’s net stack wouldn’t create any improvement for this
>> kind of load.
> 
> Mm, not sure. Even in the case where EVL reuses the regular NIC drivers, I can
> already see latency improvements here (pi2 and bbb -> x86 SoC), because
> the traffic is injected into the driver directly from the TX softirq
> context, bypassing the delays induced by the regular net stack. Once the
> full, end-to-end oob chain between the app and the wire including the
> driver is enabled, we should get interesting figures (bypassing the
> softirq context entirely). Forward looking statement, I agree. Working
> on it.
> 
>> Considering all of this, 200-300 us is demanding, but it’s as high as it gets
>> at the moment.
>> 
> 
> -- 
> Philippe.

