Philippe Gerum wrote:
> I'm afraid that you have already been proven wrong by the latency figures obtained with the kernel-based schedulers of RTAI/Adeos since
>> Does the kernel-only scheduler only use the virtual cli/sti variants?
> Comparing durations of cli/sti wrt primary handlers is looking at the wrong side of the problem; our problem is determinism. As long as we can stay inside defined and acceptable time bounds, order priorities across multiple activity levels, and not go up in flames when our code happens to run on different hardware than we developed on, I'll be fairly happy with this approach to real-time.
Mmh, then I still don't get it. As far as I understood, Stodolsky's core idea is that a critical section protected by an IRQ lock is /normally/ not interrupted by any IRQ. Thus, the lock is only enforced when an interrupt actually occurs within the critical section. To me, this sounds like optimising the average case, not the worst case.
Ok, this approach is as deterministic as the classic cli/sti: the worst case is now critical section length + deferred IRQ call(s). But this variant is also in no way MORE deterministic than the classic one.
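To make sure we are talking about the same thing, here is a minimal sketch of the optimistic scheme as I understand it. All names here are made up for illustration; this is not the actual Adeos/I-pipe or RTAI code, just the flag-plus-deferred-replay idea:

```c
#include <stdbool.h>

/* Hypothetical sketch of Stodolsky-style optimistic interrupt
 * protection. "Masking" is a cheap software flag; the hardware IRQ
 * line stays enabled, and an interrupt arriving inside the critical
 * section is merely logged and replayed on unlock. */

static volatile bool irq_virtually_masked; /* software mask flag        */
static volatile bool irq_pending;          /* IRQ hit the locked region */
static int handled;                        /* count of dispatched IRQs  */

static void handle_irq(void)
{
    handled++;                  /* stand-in for the real handler */
}

static void virt_cli(void)
{
    irq_virtually_masked = true;   /* no hardware access, just a store */
}

static void virt_sti(void)
{
    irq_virtually_masked = false;
    if (irq_pending) {             /* pay the cost only if an IRQ
                                    * actually arrived meanwhile */
        irq_pending = false;
        handle_irq();              /* replay the deferred interrupt */
    }
}

/* Low-level IRQ entry point. */
static void irq_entry(void)
{
    if (irq_virtually_masked)
        irq_pending = true;        /* defer: worst case grows by the
                                    * replay on top of the section */
    else
        handle_irq();              /* dispatch immediately */
}
```

So the common path through virt_cli/virt_sti is just two stores, which is where the average-case win comes from; the worst case still includes the full critical section plus the replay, as noted above.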
I just did a quick experiment on vesuvio with Marc's skin on a PI 266-MMX (text-only, no disk access, only ping -f and some user-mode load): replacing rtai_local_irq_save/rtai_local_irq_restore with rtai_hw_lock/rtai_hw_unlock improved the situation a bit; the maximum jitter decreased from about 85 to 75 us. So this mechanism does have an effect, but it is not the dominating one. As you said, the cache locality of code and data is likely somewhat worse with xenomai+skin compared to LXRT. I hope this is not too much due to the layer concept.
Jan
