On 15-11-08 09:45 AM, Gilles Chanteperdrix wrote:
On Sun, Nov 08, 2015 at 09:36:16AM -0800, Mathieu Rondonneau wrote:
On 15-11-08 09:26 AM, Gilles Chanteperdrix wrote:
On Sun, Nov 08, 2015 at 09:20:47AM -0800, Mathieu Rondonneau wrote:
On 15-11-08 09:10 AM, Gilles Chanteperdrix wrote:
On Sun, Nov 08, 2015 at 09:04:38AM -0800, Mathieu Rondonneau wrote:
On 15-11-08 12:42 AM, Gilles Chanteperdrix wrote:
On Sat, Nov 07, 2015 at 09:55:48PM -0800, Mathieu Rondonneau wrote:
On 15-11-04 10:08 PM, Mathieu Rondonneau wrote:
On 15-11-04 04:51 PM, Mathieu Rondonneau wrote:
On 15-11-03 11:04 PM, Gilles Chanteperdrix wrote:
On Tue, Nov 03, 2015 at 07:18:37PM -0800, Mathieu Rondonneau wrote:
On 15-11-02 07:15 AM, Philippe Gerum wrote:
On 11/02/2015 06:23 AM, Mathieu Rondonneau wrote:
On 15-11-01 07:21 PM, Mathieu Rondonneau wrote:
On 15-11-01 06:58 PM, Mathieu Rondonneau wrote:
On 15-10-31 08:58 PM, Mathieu Rondonneau wrote:
Hi,

IRQ handlers registered with devm_request_threaded_irq do not get
triggered when the interrupt fires.

The MMC driver uses this (so the rootfs cannot be loaded).
Only the I-pipe patch is enabled.
The armctrl irqchip is triggering the .ack handler, so the
interrupt is firing.

Any suggestion on where I should look? How is this supported by the
I-pipe layer?

Thanks,
-Mathieu

I think I might have answered my own question.
Looks like I need to use ipipe_request_irq() instead.

Regards,
-Mathieu
Hmm, it is not true; it seems we still need an
ipipe_request_threaded_irq() to call the ack function, put the handler
in the queue, and wake up the thread once the handler has executed.
Otherwise users will have to move this functionality into their
driver's IRQ handler.

Strangely, the idea behind ipipe_request_irq looks similar to what
request_threaded_irq is already doing (deferring the IRQ processing).
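For context, the I-pipe entry point being discussed has roughly the shape below. This is a hedged sketch only: `my_domain`, `my_rt_handler`, `my_cookie`, and `MY_IRQ` are made-up names, and the exact `ipipe_request_irq` signature should be checked against the I-pipe patch in use.

```c
/*
 * Hypothetical example of hooking an IRQ directly into a pipeline
 * domain with ipipe_request_irq(). All my_* names are made up;
 * verify the prototype against your I-pipe tree.
 */
static void my_rt_handler(unsigned int irq, void *cookie)
{
	/* Runs from the pipeline head, ahead of the root (Linux) domain. */
}

static int my_attach_irq(struct ipipe_domain *my_domain, unsigned int my_irq)
{
	void *my_cookie = NULL;	/* made-up per-handler context */

	/* NULL ack function means the default acknowledge path is used. */
	return ipipe_request_irq(my_domain, my_irq, my_rt_handler,
				 my_cookie, NULL);
}
```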

-Mathieu
Looks like my bigger problem is that handle_level_irq is not
called. Only the timer handler is called (handle_percpu_devid_irq).

Which may mean that the regular IRQ top half for that interrupt source
is not connected to the pipeline. You may want to check whether the
irqchip handling that device's IRQs has been made aware of the
interrupt pipeline.


Thanks for replying Philippe,
This is the thing: I did change the arch_irq_default_handler to call
__ipipe_grab_irq and __ipipe_grab_ipi.
Is that what you are referring to when you say the "interrupt source is
not connected to the pipeline"? That's what __ipipe_grab_xxx does,
isn't it?

Obviously I am still missing something.
If I use irq_set_chained_handler in the driver, then the handler gets
called, but that just replaces handle_irq with the driver's handler.
When I use request_irq, it does not call my handler (the
action.handler one).
I will keep looking.

This is explained here:
http://xenomai.org/2014/09/porting-xenomai-dual-kernel-to-a-new-arm-soc/#GPIOs_as_interrupt_sources


and here:
http://xenomai.org/2014/09/porting-xenomai-dual-kernel-to-a-new-arm-soc/#CONFIG_MULTI_IRQ_HANDLER



Chained handlers have to call ipipe_handle_demuxed_irq instead of
generic_handle_irq, and the arch irq handler has to call
ipipe_handle_multi_irq or ipipe_handle_multi_ipi instead of
handle_IRQ.
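A minimal sketch of those two substitutions, assuming a hypothetical SoC port: the `my_*` names and the register-decoding helpers are made up, only the `ipipe_*` calls come from the I-pipe patch.

```c
/*
 * Hedged sketch of the two substitutions described above.
 * decode_pending_gpio_irq() and read_pending_irq_from_intc() are
 * hypothetical placeholders for SoC-specific register reads.
 */

/* Chained (demux) handler: was generic_handle_irq(cascaded_irq). */
static void my_gpio_chained_handler(unsigned int irq, struct irq_desc *desc)
{
	unsigned int cascaded_irq = decode_pending_gpio_irq(); /* hypothetical */

	ipipe_handle_demuxed_irq(cascaded_irq);
}

/* Arch-level IRQ entry point: was handle_IRQ(irq, regs). */
asmlinkage void __exception_irq_entry my_soc_handle_irq(struct pt_regs *regs)
{
	unsigned int irq = read_pending_irq_from_intc(); /* hypothetical */

	/* Use ipipe_handle_multi_ipi instead for inter-processor interrupts. */
	ipipe_handle_multi_irq(irq, regs);
}
```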


Thanks Gilles for the pointers,
MULTI_IRQ_HANDLER is not enabled for the RPi2 (so I did not look at the
multi_irq_handler section); is it a requirement for the I-pipe patch to
work?

For GPIOs I got it to work, but that is easier for me because they have
their own irqchip.
It is the arch irq handler that is causing me trouble.

Maybe I just have to enable MULTI_IRQ_HANDLER.
I will play around with it.

Thanks again for your feedback, it helps,
Regards,
-Mathieu


Also confirming that ipipe_handle_multi_irq and
ipipe_handle_multi_ipi are called properly,
so it looks like the arch irq handler is working as expected.

It is handle_level_irq that is not called properly when I use
request_irq in the driver:
desc->irq_data.chip->irq_ack is called properly, but not
irqaction->handler (my driver's handler), which should have
been triggered by handle_level_irq.

So I think my problem is that handle_level_irq() is not called.
If I register my driver handler with irq_set_chained_handler, then
ipipe_handle_multi_irq is still called, but
desc->irq_data.chip->irq_ack does not get called, and my driver handler
gets called properly.
In this case the I-pipe seems to be bypassed, since, if I understand
correctly, irq_ack is still expected to trigger.

Maybe I am missing an ipipe_handle_demuxed_irq call in an interrupt
handler that runs before my driver's interrupt handler gets a
chance to run.

I have to look at what interrupt happens before mine.

-Mathieu

More info:
1) interrupt (let's say #I) triggers; ipipe_dispatch_irq calls
irqchip->mask(), which calls ipipe_lock_irq (IPIPE_LOCK_FLAG is now
set).

It should not call irqchip->mask but irqchip->ipipe_ack, which
should not call ipipe_lock_irq. See for instance the documentation
about fasteoi irqs:
http://xenomai.org/2014/09/porting-xenomai-dual-kernel-to-a-new-arm-soc/#flow_handler


Gilles, thanks again for your response,
The code (and the execution path I observed) does not seem to match
what you are saying (please don't take this the wrong way :)); please
help me clarify. See the explanation below:

    From the doc:
"the irq_mask handler should call ipipe_lock_irq before accessing the
interrupt controller registers"
So I did put an ipipe_lock_irq call in irq_mask.

The documentation also says:
"If the flow handler is “handle_fasteoi_irq” the implementation of
the “struct irq_chip” members should be modified:"
and
    an irq_hold handler should be added (when CONFIG_IPIPE is enabled)
    having the same effect as the irq_mask handler (but without the
    call to ipipe_lock_irq), and the irq_eoi handler.

In which case the irq_hold handler is called upon handling the
interrupt.
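For reference, the fasteoi modifications described by that passage might look like the sketch below. All register accessors and `my_*` names are hypothetical; only the `ipipe_lock_irq`/`ipipe_unlock_irq` calls, the `irq_hold`/`irq_release` members, and the `CONFIG_IPIPE` guard come from the I-pipe porting guide.

```c
/*
 * Hedged sketch of a fasteoi-type irq_chip modified per the porting
 * guide. intc_write_*() are hypothetical register accessors.
 */
static void my_irq_eoi(struct irq_data *d)
{
	intc_write_eoi(d);		/* hypothetical end-of-interrupt write */
}

static void my_irq_mask(struct irq_data *d)
{
	ipipe_lock_irq(d->irq);		/* before touching controller registers */
	intc_write_mask_bit(d);
}

static void my_irq_unmask(struct irq_data *d)
{
	intc_write_unmask_bit(d);
	ipipe_unlock_irq(d->irq);
}

#ifdef CONFIG_IPIPE
/* Same effect as irq_mask, minus ipipe_lock_irq, plus the EOI. */
static void my_irq_hold(struct irq_data *d)
{
	intc_write_mask_bit(d);
	my_irq_eoi(d);
}

/* Same effect as irq_unmask, minus ipipe_unlock_irq. */
static void my_irq_release(struct irq_data *d)
{
	intc_write_unmask_bit(d);
}
#endif

static struct irq_chip my_fasteoi_chip = {
	.name		= "my-intc",
	.irq_mask	= my_irq_mask,
	.irq_unmask	= my_irq_unmask,
	.irq_eoi	= my_irq_eoi,
#ifdef CONFIG_IPIPE
	.irq_hold	= my_irq_hold,
	.irq_release	= my_irq_release,
#endif
};
```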

You cannot just apply half of what the documentation says and hope
to have a working system.


But I don't use handle_fasteoi_irq, I use handle_level_irq.

Exactly my point.


OK, so you are saying that for handle_level_irq, I should not add
ipipe_lock_irq in irq_mask?
Can you confirm this?
Does it mean that for handle_level_irq, irq_hold and irq_release are
not needed either?
OK, this is the part I did not understand from the doc then. I think
this could be clarified in the doc (more clarification does not hurt).

The doc looks clear enough to me. It says:

if condition, then
    do 1
    do 2
    do 3

if condition is not true, there is no reason to do 1, 2, or 3.

Perhaps you want me to add

else
    do nothing
?

As a C programmer, do you put
     else
        ;
after all your ifs with no else?


Maybe I was not expecting the doc to be structured like a programming language.
But that's OK; now that I know it, the doc makes sense :)

The doc focuses on handle_fasteoi_irq (and briefly on handle_edge_irq); there are other flow handlers that could use some more porting details:

if (handle == handle_percpu_irq ||
    handle == handle_percpu_devid_irq) {
        if (irq_desc_get_chip(desc) &&
            irq_desc_get_chip(desc)->irq_hold) {
                desc->ipipe_ack = __ipipe_ack_fasteoi_irq;
                desc->ipipe_end = __ipipe_end_fasteoi_irq;
        } else {
                desc->ipipe_ack = __ipipe_ack_percpu_irq;
                desc->ipipe_end = __ipipe_nop_irq;
        }
}

In the RPi2 case, it needs two irqchips: one for handle_percpu_devid_irq and one for handle_level_irq.

Reading the doc, I had assumed (wrongly) that handle_level_irq could use the same irqchip as handle_percpu_devid_irq, since the doc did not say otherwise.

On the RPi2, both handle_level_irq and handle_percpu_devid_irq already use the same irqchip, so when I ported the I-pipe to it, nothing was mentioned for handle_percpu_devid_irq, which I learned about by reading the code (see the copy-paste above).

So the part of the doc that says: "if the flow handler is “handle_edge_irq”, and the system locks up when the first interrupt happens, try replacing “handle_edge_irq” with “handle_level_irq”"

could use some additional clarification, as follows: "..., try replacing “handle_edge_irq” with “handle_level_irq”, since those should not have any call to ipipe_lock_irq/ipipe_unlock_irq in their respective irqchip irq_mask/irq_unmask handlers".

Also, a mention of handle_percpu_devid_irq could be added, as follows:
"If the flow handler is “handle_fasteoi_irq” or "handle_percpu_devid_irq" or "handle_percpu_irq", the implementation of the “struct irq_chip” members should be modified".

Just a suggestion.
I just fell into a crack and am proposing a change so that others don't make the same mistake.



Just to confirm: you were saying "it should not call irqchip->mask, but
irqchip->ipipe_ack".
But for handle_level_irq both the mask and the ack are called.
Is that correct?

The answer is in the implementation of ipipe_ack for level
interrupts. It calls mask_ack_irq.
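That helper can be paraphrased roughly as below. This is a sketch of the level-type ack path as described in this thread, not a verbatim copy of any I-pipe patch; check the exact prototype against your tree.

```c
/*
 * Rough paraphrase of the I-pipe ack path for level-type interrupts:
 * before the IRQ is logged into the pipeline, the line is masked and
 * acknowledged in one go. This is why irqchip->irq_mask is reached
 * even though the flow handler itself has not run yet.
 */
static void __ipipe_ack_level_irq(struct irq_desc *desc)
{
	/*
	 * mask_ack_irq() is the genirq helper that calls
	 * chip->irq_mask_ack if available, otherwise
	 * chip->irq_mask followed by chip->irq_ack.
	 */
	mask_ack_irq(desc);
}
```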

Correct, so it does call mask, though earlier you said it does not.


I think I understand what my mistake was.
I have not tested it yet, but without the ipipe_lock_irq it should call
the handler.
Thank you again, Gilles!

You are welcome. I suggest, however, taking the time to read the
documentation completely from beginning to end before jumping
head-on into the port, only reading parts of it, and wasting time
stupidly because you only read part of a sentence.

It was not a waste of time for me; I got the opportunity to look at the internals of the I-pipe code to understand what my problem was.
I see this as a learning exercise (in my spare time).

The code is well written.
Writing a porting guide is difficult because it cannot cover newer targets' implementations. The document could still evolve with users' input; after all, I think the goal is to encourage users to create more ports.

Side note: with two irqchips (one for handle_level_irq that does not call ipipe_lock_irq in irq_mask, and one for handle_percpu_devid_irq that does call it in irq_mask()) it does work.
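The two-irqchip arrangement in that side note could be sketched as follows. Every `my_*` and `armctrl_*` name here is hypothetical; only `irq_set_chip_and_handler`, `irq_set_percpu_devid`, the flow handlers, and the `ipipe_lock_irq`/`ipipe_unlock_irq` calls are standard genirq/I-pipe names.

```c
/*
 * Hedged sketch of the two-irqchip setup described above.
 * armctrl_*_bit() are hypothetical register accessors.
 */
static void my_level_mask(struct irq_data *d)
{
	armctrl_mask_bit(d);		/* note: no ipipe_lock_irq here */
}

static void my_level_unmask(struct irq_data *d)
{
	armctrl_unmask_bit(d);		/* note: no ipipe_unlock_irq here */
}

static void my_percpu_mask(struct irq_data *d)
{
	ipipe_lock_irq(d->irq);
	armctrl_mask_bit(d);
}

static void my_percpu_unmask(struct irq_data *d)
{
	armctrl_unmask_bit(d);
	ipipe_unlock_irq(d->irq);
}

static struct irq_chip my_level_chip = {
	.name		= "armctrl-level",
	.irq_ack	= my_level_ack,		/* hypothetical ack */
	.irq_mask	= my_level_mask,
	.irq_unmask	= my_level_unmask,
};

static struct irq_chip my_percpu_chip = {
	.name		= "armctrl-percpu",
	.irq_mask	= my_percpu_mask,
	.irq_unmask	= my_percpu_unmask,
};

static void my_intc_init_one(unsigned int device_irq, unsigned int timer_irq)
{
	/* Device lines use the level flow; the timer line is per-CPU. */
	irq_set_chip_and_handler(device_irq, &my_level_chip, handle_level_irq);
	irq_set_percpu_devid(timer_irq);
	irq_set_chip_and_handler(timer_irq, &my_percpu_chip,
				 handle_percpu_devid_irq);
}
```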

Thanks,
-Mathieu

_______________________________________________
Xenomai mailing list
[email protected]
http://xenomai.org/mailman/listinfo/xenomai
