On Mon, Nov 10, 2014 at 07:29:58PM +0100, Jan Kiszka wrote:
> On 2014-11-10 16:56, Gilles Chanteperdrix wrote:
> > On Mon, Nov 10, 2014 at 03:52:41PM +0100, Jan Kiszka wrote:
> >> On 2014-11-10 13:43, Gilles Chanteperdrix wrote:
> >>> On Mon, Nov 10, 2014 at 09:08:47AM +0000, Stoidner, Christoph wrote:
> >>>>
> >>>> Hi Gilles,
> >>>>
> >>>>> Do you have the same message with exactly the same kernel
> >>>>> configuration, only with CONFIG_XENOMAI and CONFIG_IPIPE disabled?
> >>>>
> >>>> When CONFIG_XENOMAI and CONFIG_IPIPE are disabled the message does not 
> >>>> appear on boot-up.
> >>>>
> >>>>> Do you have FCSE enabled? If yes, did you try disabling it? same
> >>>>> with unlocked context switch.
> >>>>
> >>>> FCSE is already disabled anyway.
> >>>>
> >>>> Do you have an idea how to overcome the problem?
> >>>
> >>> I am not sure the lockdep message really is a problem. lockdep could
> >>> be confused by the fact that the hardware interrupts are not off
> >>> when running the I-pipe, or because we are missing some bit in the
> >>> I-pipe arm specific code to get it looking at the virtual mask
> >>> instead of the hardware mask.
> >>>
> >>> As for the scheduling while atomic and random segmentation fault,
> >>> you should use the I-pipe tracer, configure it with enough back
> >>> trace points, something like 1000 or 10000, and trigger a trace
> >>> freeze in the kernel code when the problem happens.
> >>>
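> >>> For instance, assuming the tracer API from
> >>> include/linux/ipipe_trace.h (untested sketch, the freeze value
> >>> below is an arbitrary marker):
> >>>
> >>> #include <linux/ipipe_trace.h>
> >>>
> >>> 	/* at the spot in the kernel code where the problem shows up */
> >>> 	ipipe_trace_freeze(0x1);
> >>>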
> >>> Also, the "scheduling while atomic" may happen if you call some
> >>> Linux service which reschedules from primary mode. You can try
> >>> enabling I-pipe debugging, and in fact all Xenomai debugging, to
> >>> try to catch such mistakes. This is especially important if you
> >>> are running a custom skin.
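> >>>
> >>> As an illustration (a hypothetical fragment, not taken from your
> >>> code), a kernel-side handler running in primary mode must not
> >>> call a Linux service that may sleep, e.g.:
> >>>
> >>> 	/* running over the Xenomai domain, primary mode */
> >>> 	buf = kmalloc(len, GFP_KERNEL); /* may sleep -> "scheduling
> >>> 					   while atomic" */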
> >>
> >> "Scheduling while atomic" may have the same reason why lockdep stumbles:
> >> some changes of I-pipe messe up with IRQ state tracing of Linux. I just
> >> started to look into this issue again. We tried earlier but got distracted.
> > 
> > I doubt that very much. Though I never run with lockdep, I sometimes
> > run with CONFIG_PREEMPT, and I have never seen this message. From
> > what I can see, the "scheduling while atomic" message is based only
> > on the preempt_count and does not use irqs_disabled() (which, by the
> > way, is known to work with I-pipe on ARM as well, so if something is
> > broken, it should be something more obscure).
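> >
> > (For reference, the mainline check in kernel/sched/core.c is roughly:
> >
> > 	if (in_atomic_preempt_off())
> > 		__schedule_bug(prev); /* prints "scheduling while atomic" */
> >
> > i.e. purely preempt_count-based, with no irqs_disabled() test.)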
> 
> Let's see. I think I've identified one wrong path:
> 
> diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
> index d32f8bd..ab911f8 100644
> --- a/arch/arm/kernel/entry-header.S
> +++ b/arch/arm/kernel/entry-header.S
> @@ -198,7 +198,10 @@
>  #ifdef CONFIG_TRACE_IRQFLAGS
>       @ The parent context IRQs must have been enabled to get here in
>       @ the first place, so there's no point checking the PSR I bit.
> -     bl      trace_hardirqs_on
> +     tst     \rpsr, #PSR_I_BIT
> +     bleq    trace_hardirqs_off
> +     tst     \rpsr, #PSR_I_BIT
> +     blne    trace_hardirqs_on
>  #endif
>       .else
>       @ IRQs off again before pulling preserved data off the stack
> 
> This is probably not a fix, but with that change applied, the warning
> is gone. Now the question is what to really test for when returning
> here. I suppose we want the pipeline state of root here - should I
> use __ipipe_check_root_interruptible?

This does not make sense; read the comment above that change: there
is no way an interrupt can be taken, and thus svc_entry entered, with
interrupts off. Besides, this is mainline code, so it would be a
problem for mainline too. We are necessarily returning to a place
where hardware irqs were on.

To me the problem is rather that we enter
trace_hardirqs_on/trace_hardirqs_off while in the Xenomai domain.
We can try and fix that in entry.S, but that would result in a hell
of an entry.S to maintain. I would rather exit early from
trace_hardirqs_on/trace_hardirqs_off if the current domain is not
root.
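
Something along these lines (an untested sketch; upstream,
trace_hardirqs_on() may live in kernel/locking/lockdep.c or
kernel/trace/trace_irqsoff.c depending on the configuration, and
ipipe_root_p tests whether the root domain is current):

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ ... @@
 void trace_hardirqs_on(void)
 {
+	if (!ipipe_root_p)
+		return;	/* called over the head domain, ignore */
+
 	trace_hardirqs_on_caller(CALLER_ADDR0);
 }

The same early return would go into trace_hardirqs_off().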

-- 
                                            Gilles.
