> >
> >     BEFORE
> >
> >     static void openpic_end_irq(unsigned int irq_nr)
> >     {
> >     if (!(irq_desc[irq_nr].status & (IRQ_DISABLED|IRQ_INPROGRESS))
> >         && irq_desc[irq_nr].action)
> >         openpic_enable_irq(irq_nr);
> >     }
> >
> >
> >     AFTER
> >
> >     static void openpic_end_irq(unsigned int irq_nr)
> >     {
> >     if (!ipipe_root_domain_p()
> >         &&
> > !test_bit(IPIPE_DISABLED_FLAG,&ipipe_current_domain->irqs[irq_nr].control))
> >         return;
> >
>
> - !test_bit(IPIPE_DISABLED_FLAG,&ipipe_current_domain->irqs[irq_nr].control))
> + test_bit(IPIPE_DISABLED_FLAG,&ipipe_current_domain->irqs[irq_nr].control))
>
> ?

Yep.
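With the `!` dropped from the test_bit() call, the early-return condition can be checked in isolation. Below is a tiny user-space model of just that truth table (the helper name and booleans are illustrative, not the real I-pipe API):

```c
#include <stdbool.h>
#include <assert.h>

/* Sketch only: models the corrected early-return condition, i.e.
 * skip re-enabling the line when we are NOT in the root domain AND
 * the current domain has IPIPE_DISABLED_FLAG set for this irq. */
static bool should_skip_enable(bool root_domain_p, bool ipipe_disabled)
{
    return !root_domain_p && ipipe_disabled;
}
```

So a non-root domain with the flag set returns early, while the root domain (or a non-root domain with the flag clear) falls through to the enable path.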


> Additionally, there is another issue we discussed once with Anders, which is
> related to not sending EOI twice after the shared IRQ already ended by a RT domain
> has been fully propagated down the pipeline to Linux;
> some kind of test_and_clear_temporary_disable flag, would do, I guess. The other way would be
> to test_and_set some "ended" flag for the outstanding IRQ when the ->end() routine
> is entered, clearing this flag before pipelining the IRQ in __ipipe_walk_pipeline().
>
> Actually, I'm now starting to wonder why we would want to permanently disable an
> IRQ line from a RT domain, which is known to be used by Linux.
> Is this what IPIPE_DISABLED_FLAG is expected to be used for, or is it only there to handle the
> transient disabled state discussed above?

Why "permanently"? I would see the following scenario: an ISR wants to _temporarily_ defer re-enabling an IRQ line
until some later stage (e.g. an rt_task acting as a bottom half).
This is the only reason why xnarch_end_irq(), or some later step in it (in this case ->end()), must be aware
of IPIPE_DISABLED_FLAG.
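The deferral scenario can be sketched as follows; the names here are purely illustrative (not the real Xenomai/I-pipe API), just to show the ISR leaving the line masked and the bottom half doing the deferred enable:

```c
#include <stdbool.h>

/* Conceptual sketch: the ISR masks the line and defers re-enabling
 * to a bottom-half task (e.g. an rt_task woken by the ISR). */
static bool line_enabled = true;
static bool work_pending = false;

static void isr(void)
{
    line_enabled = false;   /* mask the line: no further IRQs for now */
    work_pending = true;    /* wake the bottom half */
}

static void rt_task_bottom_half(void)
{
    if (work_pending) {
        work_pending = false;
        /* ... lengthy processing happens here ... */
        line_enabled = true; /* the deferred end/enable step */
    }
}
```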

Why is the currently used approach (NOENABLE) not that good for this?

1)   it actually defers (for some PICs) not only the "enabling" but the sending of the EOI too.
    As a consequence:

2)   rthal_end_irq() (on PPC, and not just xnintr_enable() or rthal_enable_irq()) must be called in the bottom half
    to re-enable the IRQ line;

3)   it does not co-exist well with the shared interrupt support (I don't mean sharing between RT and non-RT domains here),
    although it's not a common case for several ISRs on the same shared line to want to defer enabling, especially in a
    real-time domain;

4)   it's a bit flag, so if we would like to use only scalar return values one day, then something
    like HANDLED and HANDLED_NOENABLE would be needed.

Support for nested irq enable/disable calls would resolve all of the "restrictions" above, but the question is
whether we really need to resolve them.

In the same vein, I'd like to know your vision of the "nested irq enable/disable calls" support. Any use cases?
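To illustrate point 4: if ISR return codes became scalar values instead of OR-able bits, NOENABLE could no longer be combined freely, so a dedicated combined code would be needed. A minimal sketch (hypothetical names, not the actual Xenomai constants):

```c
/* Hypothetical scalar ISR return codes, per point 4 above: with
 * scalars, NOENABLE cannot be OR'ed in as a separate bit, so a
 * combined HANDLED_NOENABLE value becomes necessary. */
enum isr_ret {
    ISR_NONE,              /* IRQ not handled by this ISR */
    ISR_HANDLED,           /* handled; re-enable the line at ->end() */
    ISR_HANDLED_NOENABLE,  /* handled; defer re-enabling to a bottom half */
};

static int line_should_reenable(enum isr_ret r)
{
    return r == ISR_HANDLED;
}
```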

>
> >
> >     if (!ipipe_root_domain_p() || !(irq_desc[irq_nr].status &
> > (IRQ_DISABLED|IRQ_INPROGRESS))
> >         && irq_desc[irq_nr].action)
> >         openpic_enable_irq(irq_nr);
> >     }
> >
>
> There is another way for most archs, which is to add such code to the ->end()
> routine override in ipipe-root.c; this would be simpler and safer than fixing such
> routine for each and every kind of interrupt controller. x86 directly pokes into
> the PIC code and does not override IRQ control routines, though.

I didn't know about those as I mostly looked at the x86 implementation.

This gives us control over per-domain irq locking/unlocking (ipipe_irq_lock/unlock());
__ipipe_std_irq_dtype[irq].end() is always called unconditionally (and, as a result, .enable() for some PICs).
That is, the IRQ line actually stays on; the interrupts are just not handled but accumulated in the pipe-log.

Actually, why is ipipe_irq_unlock(irq) necessary in __ipipe_override_irq_end()? ipipe_irq_lock() is not
called in __ipipe_ack_irq(). Is it locked somewhere else? At least, I haven't found explicit ipipe_irq_lock()
or __ipipe_lock_irq() calls anywhere else.
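The accumulate-while-locked behaviour described above (line stays enabled at the PIC, hits are only logged until the domain unlocks the IRQ) might be modeled roughly like this; this is a conceptual sketch only, not the real __ipipe_* implementation:

```c
#include <stdbool.h>

/* Rough model of per-domain IRQ locking: while locked, incoming hits
 * are accumulated in a pipe-log instead of being handled; unlocking
 * replays the deferred hits. Starts locked for demonstration. */
static bool irq_locked = true;
static int pipe_log = 0;   /* hits accumulated while locked */
static int handled = 0;

static void irq_hit(void)
{
    if (irq_locked)
        pipe_log++;        /* defer: log the hit, do not handle it */
    else
        handled++;
}

static void irq_unlock_and_replay(void)
{
    irq_locked = false;
    while (pipe_log > 0) { /* replay everything that was deferred */
        pipe_log--;
        handled++;
    }
}
```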
 

> [skip-skip-skip]
>
> >
> > From another point of view, this new feature seems not to be too
> > intrusive and not something really affecting the "fast path" so it could
> > be used by default, I guess. Or maybe we don't need it at all?
> >
>
> The disable nesting count at Xenomai level is needed to let ISRs act independently
> from each others wrt interrupt masking - or at the very least, to let them think
> they do. This is strictly a Xenomai issue, not an Adeos one. If we keep it at this
> level, then using the xnintr struct to store the nesting counter becomes an
> option, I guess.

Let's start by defining possible use cases for nested irq enable/disable calls, then.
Maybe we just don't need them at all (at least not in the same way Linux deals with them)?
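For reference, the disable nesting counter mentioned above (kept at Xenomai level, e.g. in the xnintr struct) would presumably boil down to something like the following; the function and variable names are hypothetical:

```c
#include <assert.h>

/* Minimal sketch of a per-IRQ disable nesting counter: each ISR can
 * pair its own disable/enable calls independently, and the line is
 * only physically masked/unmasked on the outermost transition. */
static int disable_depth = 0;
static int line_masked = 0;

static void nested_irq_disable(void)
{
    if (disable_depth++ == 0)
        line_masked = 1;   /* first disable actually masks the line */
}

static void nested_irq_enable(void)
{
    assert(disable_depth > 0);  /* unbalanced enable is a bug */
    if (--disable_depth == 0)
        line_masked = 0;   /* last enable actually unmasks it */
}
```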

>
> --
>
> Philippe.


--
Best regards,
Dmitry Adamushko
_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core