On Wed, 2004-07-28 at 13:47, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > 
> > I'm afraid that you have already been proven wrong by the latency
> > figures obtained with the kernel-based schedulers of RTAI/Adeos since
> 
> Does the kernel-only scheduler only use the virtual cli/sti variants?
> 

Of course.

> > 24.1.11. Comparing durations of cli/sti wrt primary handlers is looking
> > at the wrong side of the problem; our problem is determinism, and as
> > long as we can stay inside defined and acceptable time bounds, order
> > priorities for multiple activity levels and not explode into flames when
> > one happens to run our code on different hw than we used to develop
> > on, I'll be fairly happy with this approach to real-time.
> > 
> 
> Mmh, then I still don't get it. As far as I understood, Stodolsky's core 
> idea is that a critical section protected by an irq lock is /normally/ 
> not interrupted by any irq. Thus, the lock is only enforced when there 
> is actually an interrupt occurring within the critical section. To me, 
> this sounds like optimising the average and not the worst case.
> 

So, basically, in your view, any attempt to virtualize the IRQ handling
is doomed for real-time? How do RTLinux, RTAI over RTHAL and a bunch
of other Windows-based stuff work, then? ;o)

The fact is that Stodolsky's proposal can be put to the same use for
different purposes, that's all. If it optimizes the average case
without wrecking the worst case, and additionally allows deferring
interrupts for whatever purpose, that's fine. Indeed.

Adeos virtualizes the IRQ flow too; what's new with the Adeos model
is that it uses this feature to prioritize incoming events among any
number of domains according to a pipeline abstraction, and not just
for a single highest-priority domain. So the additional cost compared
to the old-fashioned way is basically defined by the cost of
transitioning between multiple domains.

> Ok, this approach is as deterministic as the classic cli/sti: The worst 
> case scenario is now critical section length + deferred irq call(s???). 
> But this variant is also in no way MORE deterministic than the classic one.
> 

Who--said--that??? The purpose of Adeos has never been, is not and will
never be to pretend to work faster than the hardware does! :o))

Come on... what's important is that it brings a common low-level
architecture for supporting event prioritization, which improves
portability, provides a uniform interface and _behaviour_ across
different archs, and delivers performance comparable to that of the
old-fashioned approach, where you are basically fed immediately
from the interrupt vector.

Everything has a cost: if it's acceptable performance-wise, as you seem
to have found out yourself with your test on a P1, then you will likely
accept this cost to get back a much larger benefit.
Keep in mind that you could not have Marc's stuff work on Xenomai
without Adeos; the 50us more you pay now should be reducible to
something around 20us compared to LXRT with a careful investigation and
proper optimization; but even if you had to live with 20us more
_bounded_ latency, I don't think this would prevent you from having a
properly working application, unless your constraints are so tight that
this figure would not fit. But in the latter case, x86, with its
terminally ill architecture wrt very high determinism, is definitely
not the arch you would have chosen in the first place, I guess.

> I just did a quick experiment on vesuvio with Marc's skin on a PI 
> 266-MMX (text-only, no disk access, only ping -f and some user mode 
> load): replacing rtai_local_irq_save/rtai_local_irq_restore with 
> rtai_hw_lock/rtai_hw_unlock improved the situation a bit. The maximum 
> jitter decreased from about 85 to 75 us. So, it seems that this 
> mechanism has an effect, but it is also not the dominating one. As you 
> said, the cache locality of code and data is likely a bit worse with 
> xenomai+skin compared to lxrt. I hope this is not too much due to the 
> layer concept.
> 

A layered approach is not bad because of the layering per se, but
because the abstraction levels are not properly defined. So the real
question is: does Xenomai have those right? I intuitively think so, but
the only thing that can settle the matter here is facts, and I intend
to bring them when I can pour more time into fusion, i.e. when there is
nothing more I can help with on vesuvio.

> Jan

> _______________________________________________
> Rtai-dev mailing list
> [EMAIL PROTECTED]
> https://mail.gna.org/listinfo/rtai-dev
-- 

Philippe.

