Re: [Xenomai-core] Missing IRQ end function on PowerPC

2006-01-24 Thread Wolfgang Grandegger

Gilles Chanteperdrix wrote:

Wolfgang Grandegger wrote:
  Therefore we need a dedicated function to re-enable interrupts in the 
  ISR. We could name it *_end_irq, but maybe *_enable_isr_irq is more 
  obvious. On non-PPC archs it would translate to *_irq_enable. I 
  realized that *_irq_enable is used in various places/skins, and 
  therefore I have not yet provided a patch.


The function xnarch_irq_enable seems to be called in only two places:
xnintr_enable, and xnintr_irq_handler when the flag XN_ISR_ENABLE is set.

In any case, since I am not sure if this has to be done at the Adeos
level or in Xenomai, we will wait for Philippe to come back and decide.


Attached is a temporary Xenomai patch fixing the IRQ end problem for the 
PowerPC arch. I had a closer look at the various IRQ end functions on 
PowerPC:


  ic_end(unsigned int irq)
  {
          ic_ack(irq);
          if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS))) {
                  ic_enable(irq);
          }
  }

In most cases the end functions do the same as the enable functions, but 
there are exceptions where the end function performs an additional 
ic_ack(), as shown above.
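
To illustrate what is at stake for drivers, here is a minimal sketch of an 
ISR that relies on the nucleus to end the IRQ (xnintr_t and XN_ISR_ENABLE 
as used by the nucleus; my_card_clear_irq is a hypothetical helper 
standing in for real device-specific code):

  static int my_isr(xnintr_t *intr)
  {
          /* Silence the device first (hypothetical helper). */
          my_card_clear_irq(intr->cookie);

          /* Returning XN_ISR_ENABLE asks the nucleus to end the IRQ;
             with the patch below this maps to xnarch_end_irq(), i.e.
             the PIC's end function (ack + conditional enable), instead
             of the plain enable function. */
          return XN_ISR_ENABLE;
  }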

Wolfgang.


+ diff -u xenomai/include/asm-generic/hal.h.IRQEND xenomai/include/asm-generic/hal.h
--- xenomai/include/asm-generic/hal.h.IRQEND	2006-01-11 18:03:34.0 +0100
+++ xenomai/include/asm-generic/hal.h	2006-01-19 20:52:40.0 +0100
@@ -357,6 +357,8 @@
 
 int rthal_irq_disable(unsigned irq);
 
+int rthal_irq_end(unsigned irq);
+
 int rthal_irq_host_request(unsigned irq,
 			   irqreturn_t (*handler)(int irq,
 						  void *dev_id,
+ diff -u xenomai/include/asm-generic/system.h.IRQEND xenomai/include/asm-generic/system.h
--- xenomai/include/asm-generic/system.h.IRQEND	2006-01-11 18:03:34.0 +0100
+++ xenomai/include/asm-generic/system.h	2006-01-19 20:50:17.0 +0100
@@ -496,6 +496,12 @@
     return rthal_irq_disable(irq);
 }
 
+static inline int xnarch_end_irq (unsigned irq)
+
+{
+    return rthal_irq_end(irq);
+}
+
 static inline void xnarch_chain_irq (unsigned irq)
 
 {
+ diff -u xenomai/include/asm-uvm/system.h.IRQEND xenomai/include/asm-uvm/system.h
--- xenomai/include/asm-uvm/system.h.IRQEND	2006-01-11 18:03:34.0 +0100
+++ xenomai/include/asm-uvm/system.h	2006-01-19 20:51:36.0 +0100
@@ -296,6 +296,13 @@
     return -ENOSYS;
 }
 
+static inline int xnarch_end_irq (unsigned irq)
+
+{
+    return -ENOSYS;
+}
+
+
 static inline void xnarch_chain_irq (unsigned irq)
 
 { /* Nop */ }
+ diff -u xenomai/ksrc/arch/generic/hal.c.IRQEND xenomai/ksrc/arch/generic/hal.c
--- xenomai/ksrc/arch/generic/hal.c.IRQEND	2006-01-11 18:03:42.0 +0100
+++ xenomai/ksrc/arch/generic/hal.c	2006-01-19 20:54:06.0 +0100
@@ -1156,6 +1156,7 @@
 EXPORT_SYMBOL(rthal_irq_release);
 EXPORT_SYMBOL(rthal_irq_enable);
 EXPORT_SYMBOL(rthal_irq_disable);
+EXPORT_SYMBOL(rthal_irq_end);
 EXPORT_SYMBOL(rthal_irq_host_request);
 EXPORT_SYMBOL(rthal_irq_host_release);
 EXPORT_SYMBOL(rthal_irq_host_pend);
+ diff -u xenomai/ksrc/arch/powerpc/hal.c.IRQEND xenomai/ksrc/arch/powerpc/hal.c
--- xenomai/ksrc/arch/powerpc/hal.c.IRQEND	2006-01-11 18:03:41.0 +0100
+++ xenomai/ksrc/arch/powerpc/hal.c	2006-01-19 21:56:19.0 +0100
@@ -356,6 +356,27 @@
 return 0;
 }
 
+int rthal_irq_end (unsigned irq)
+
+{
+    if (irq >= IPIPE_NR_XIRQS)
+        return -EINVAL;
+
+    if (rthal_irq_descp(irq)->handler != NULL)
+        {
+        if (rthal_irq_descp(irq)->handler->end != NULL)
+            rthal_irq_descp(irq)->handler->end(irq);
+        else if (rthal_irq_descp(irq)->handler->enable != NULL)
+            rthal_irq_descp(irq)->handler->enable(irq);
+        else
+            return -ENODEV;
+        }
+    else
+        return -ENODEV;
+
+    return 0;
+}
+
 static inline int do_exception_event (unsigned event, unsigned domid, void *data)
 
 {
+ diff -u xenomai/ksrc/nucleus/intr.c.IRQEND xenomai/ksrc/nucleus/intr.c
--- xenomai/ksrc/nucleus/intr.c.IRQEND	2006-01-11 18:03:42.0 +0100
+++ xenomai/ksrc/nucleus/intr.c	2006-01-19 20:42:53.0 +0100
@@ -363,7 +363,7 @@
     ++intr->hits;
 
     if (s & XN_ISR_ENABLE)
-        xnarch_enable_irq(irq);
+        xnarch_end_irq(irq);
 
     if (s & XN_ISR_CHAINED)
         xnarch_chain_irq(irq);


Re: [Xenomai-core] [BUG] racy xnshadow_harden under CONFIG_PREEMPT

2006-01-24 Thread Jan Kiszka
Dmitry Adamushko wrote:
 On 23/01/06, Gilles Chanteperdrix [EMAIL PROTECTED] wrote:
 Jeroen Van den Keybus wrote:
 Hello,
 
 
 
 [ skip-skip-skip ]

 
 
 Since in xnshadow_harden, the running thread marks itself as suspended
 before running wake_up_interruptible_sync, the gatekeeper will run when
 schedule() gets called, which in turn depends on the CONFIG_PREEMPT*
 configuration. In the non-preempt case, the current thread will be
 suspended and the gatekeeper will run when schedule() is explicitly
 called in xnshadow_harden(). In the preempt case, schedule() gets called
 when the outermost spinlock is unlocked in wake_up_interruptible_sync().
 
 
 In fact, no.
 
 wake_up_interruptible_sync() doesn't set the need_resched flag. That's
 why it is "sync", actually.
 
 Only if need_resched was already set before calling
 wake_up_interruptible_sync(), then yes.
 
 The sequence is as follows:
 
 wake_up_interruptible_sync --> __wake_up_sync --> __wake_up_common(...,
 sync=1, ...) --> ... --> try_to_wake_up(..., sync=1)
 
 Look at the end of try_to_wake_up() to see when it calls resched_task().
 The comment there speaks for itself.
 
 So let's suppose need_resched == 0 (it's per-task, of course).
 As a result of wake_up_interruptible_sync(), the new task is added to the
 current active run-queue, but need_resched remains unset in the hope
 that the waker will call schedule() on its own soon.
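 
 For reference, the relevant tail of try_to_wake_up() in 2.6-era kernels
 looks roughly like this (paraphrased, not an exact quote):
 
 	/* A sync wakeup on the same CPU skips resched_task(): the woken
 	 * task sits on the run-queue until the waker calls schedule(). */
 	if (!sync || cpu != this_cpu) {
 		if (TASK_PREEMPTS_CURR(p, rq))
 			resched_task(rq->curr);
 	}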
 
 I have CONFIG_PREEMPT set on my machine, but I have never encountered the
 bug described by Jan.
 
 The catalyst of the problem, I guess, is that some IRQ interrupts the task
 between wake_up_interruptible_sync() and schedule(), and its ISR, in turn,
 wakes up another task whose priority is higher than our waker's (as a
 result, the need_resched flag is set). Rescheduling then occurs on return
 from the IRQ handling code (ret_from_intr --> ... -->
 preempt_irq_schedule() --> schedule()).

Yes, this is exactly what happened. Unfortunately, I have not saved the
related traces I took with the extended ipipe-tracer (the one I sent ends
too early), but they showed a preemption right after the wake_up, first
by one of the other real-time threads in Jeroen's scenario, and then, as
a result of some xnshadow_relax() of that thread, a Linux
preempt_schedule to the gatekeeper. We do not see this bug that often
because it requires a specific load and must hit a really small race
window.
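
To make the window concrete, the critical path in xnshadow_harden() looks
roughly like this (simplified sketch, not the exact source; gk->waitq
stands for the gatekeeper's wait queue):

	set_current_state(TASK_INTERRUPTIBLE);
	wake_up_interruptible_sync(&gk->waitq); /* gatekeeper now runnable */
	/*
	 * Race window: an IRQ hitting here can wake a higher-priority
	 * task and set need_resched, so the return-from-interrupt path
	 * reschedules and the gatekeeper runs before we ever reach the
	 * schedule() below.
	 */
	schedule();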

 
 Some events have to coincide, yep. But I guess the problem does not occur
 every time?
 
 I have not checked it yet, but my presupposition is that something as
 easy as:
 
 preempt_disable()
 
 wake_up_interruptible_sync();
 schedule();
 
 preempt_enable();

It's a no-go: scheduling while atomic. That was one of my first attempts
to solve it.
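
For the record, 2.6-era schedule() begins with a check roughly like the
following (paraphrased), which is what triggers the "scheduling while
atomic" complaint:

	/* Head of schedule() in 2.6 kernels, simplified. */
	if (unlikely(in_atomic())) {
		printk(KERN_ERR "scheduling while atomic: %s/0x%08x/%d\n",
		       current->comm, preempt_count(), current->pid);
		dump_stack();
	}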

The only way to enter schedule() while non-preemptible is via
PREEMPT_ACTIVE. But the effect of that flag should be well known by now.
Kind of a Gordian knot. :(

 
 
 could work... err... and don't blame me if not, it's someone else who has
 written that nonsense :o)
 
 --
 Best regards,
 Dmitry Adamushko
 

Jan





[Xenomai-core] About scheduling routine and xnpod_announce_tick

2006-01-24 Thread Germain Olivier
Hello

My question is about the scheduling routine:
I would like to know whether the function xnpod_announce_tick is called at
every tick of the timer (as I suppose, since it is tied to the timer), so
that it would be a good place to hook in some EDF scheduling work.

Thanks

Germain
