[Xenomai-core] [PATCH] -ETIMEDOUT return value not described in rt_cond_wait API documentation?
Hi,

It seems -ETIMEDOUT is not mentioned as a possible return value of
rt_cond_wait() in the API docs [1], although it *is* returned when the
timeout expires [2]. The attached patch tries to fix this against the
2.4 branch.

Regards,
Klaas

PS: This seems to apply to 2.3, 2.4 and trunk.

[1] http://www.xenomai.org/documentation/trunk/html/api/group__cond.html#g3aa51073817be2ffb2a880a7393502e8
[2] http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/skins/native/cond.c#493

Index: cond.c
===================================================================
--- cond.c	(revision 3717)
+++ cond.c	(working copy)
@@ -416,6 +416,9 @@
  * descriptor, including if the deletion occurred while the caller was
  * sleeping on the variable.
  *
+ * - -ETIMEDOUT is returned if @a timeout expired before the condition
+ * variable has been signaled.
+ *
  * - -EINTR is returned if rt_task_unblock() has been called for the
  * waiting task before the condition variable has been signaled.
  *

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] [PATCH] -ETIMEDOUT return value not described in rt_cond_wait API documentation?
Klaas Gadeyne wrote:
> Hi, it seems -ETIMEDOUT is not mentioned as a possible return value
> of rt_cond_wait() in the API docs [1], although it *is* returned when
> the timeout expires [2].

Indeed. Applied, thanks.

> Attached patch tries to solve this against the 2.4 branch.
>
> Regards,
> Klaas
>
> PS: This seems to apply to 2.3, 2.4 and trunk.
>
> [1] http://www.xenomai.org/documentation/trunk/html/api/group__cond.html#g3aa51073817be2ffb2a880a7393502e8
> [2] http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/skins/native/cond.c#493

-- 
Philippe.
Re: [Xenomai-core] Houston, we have a circular problem
Jan Kiszka wrote:
> Philippe Gerum wrote:
>> Jan Kiszka wrote:
>>> Gilles Chanteperdrix wrote:
>>>> On Mon, May 5, 2008 at 6:08 PM, Philippe Gerum [EMAIL PROTECTED] wrote:
>>>>> do_schedule_event() is the culprit when it reads the pending
>>>>> signals on the shared queue (XNDEBUG check for rearming the
>>>>> timers),
>>>>
>>>> A stupid suggestion: if we know that the spinlock is always locked
>>>> when calling do_schedule_event, maybe we can simply avoid the lock
>>>> there?
>>>
>>> Would be the best solution - but I don't think so. After reading a
>>> bit more into the lockdep output, I think the issue is that some
>>> _other_ task may hold the siglock and then acquire our rq_lock, but
>>> not necessarily along a code path similar to the one we took to
>>> acquire the siglock now.
>>
>> Actually, this locking around the sigmask retrieval looks like
>> overkill, since we only address ptracing signals here, and those
>> should go through the shared pending set, not through the task's
>> private one. I.e. there should be no way to get fooled by any
>> asynchronous update of that mask. This is a debug helper anyway, so
>> we risk (if I got this right) at worst a spurious unfreeze of the
>> Xenomai timers. Does not really compare to the current deadlock...
>
> As a matter of fact, we don't test any condition under the protection
> of this lock, so aside from the memory barrier induced on SMP, this
> lock does not buy us anything. Except a deadlock, that is...
>
> I will let my colleagues run the hunk below tomorrow (which works for
> me) - let's see if they manage to crash it again :P (they are experts
> in this!).
>
> Jan
>
> Index: xenomai-2.4.x/ksrc/nucleus/shadow.c
> ===================================================================
> --- xenomai-2.4.x/ksrc/nucleus/shadow.c	(Revision 3734)
> +++ xenomai-2.4.x/ksrc/nucleus/shadow.c	(Arbeitskopie)
> @@ -2194,9 +2194,7 @@ static inline void do_schedule_event(str
>  	if (signal_pending(next)) {
>  		sigset_t pending;
>  
> -		spin_lock(&wrap_sighand_lock(next));	/* Already interrupt-safe. */
>  		wrap_get_sigpending(&pending, next);
> -		spin_unlock(&wrap_sighand_lock(next));
>  
>  		if (sigismember(&pending, SIGSTOP) ||
>  		    sigismember(&pending, SIGINT))

-- 
Philippe.
[Xenomai-core] RTAI Skin FIFO handler running as non-RT task
Hi,

I've been playing with the RTAI skin, as I wanted a FIFO implementation close to the one that used to exist in RTLinux. I've set up an input handler on the FIFO, and this handler tries to acquire a Xenomai RT mutex that was previously (successfully) created by an RT task. The acquisition always fails with -EPERM.

So I've checked the handler status through the rtdm_in_rt_context() function, and it seems the handler is always called from a non-RT context, which obviously seems to be the reason for the permission-denied result. Is there a reason for the handler not being RT? Or any way to make it RT?

Also, I'm using the Xenomai native skin everywhere in my application, except for the FIFO handling, which requires me to use the RTAI skin for this input handler. The native skin (through pipes) does not seem to provide the capability to set a handler. Is there a good reason for that? And is there another way to simulate such a handler than creating a dedicated task that would be woken up each time something is written to the FIFO?

Thanks for any help on this,

Regards,
Ben
Re: [Xenomai-core] Timing Issues on x86_32 SMP
On Tue, May 6, 2008 at 10:11 AM, Benjamin ZORES [EMAIL PROTECTED] wrote:
> Hi,
>
> I'm currently running an x86_32 SMP system and facing some issues
> with periodic tasks. I'd like to get a bit more information on a few
> assumptions I've made.
>
> Quick sum-up of my setup:
> - adeos-ipipe-2.6.23-i386-1.12-03
> - xenomai-2.4.3.patch
> - Core 2 Duo x86_32 running in SMP

Why not x86_64?

> - Linux 2.6.23.17
> - Timer frequency: 1000 Hz
> - Tick-less mode activated (CONFIG_NO_HZ)
> - Xenomai periodic timing enabled (CONFIG_XENO_OPT_TIMING_PERIODIC)
> - tasks are kernel-based
>
> I'm trying to schedule a periodic task every 8 ms and I'm running
> into period misses (losing from 5 up to 75, which is quite A LOT).
> The problem however doesn't appear on a UP system (or at least when
> booting the kernel with maxcpus=1).

Have you run the latency test to check that you have no hardware issue?

-- 
Gilles
Re: [Xenomai-core] Timing Issues on x86_32 SMP
Gilles Chanteperdrix wrote:
> On Tue, May 6, 2008 at 10:11 AM, Benjamin ZORES [EMAIL PROTECTED] wrote:
>
> Why not x86_64?

Because I don't need 64 bits.

> Have you run the latency test to check that you have no hardware
> issue?

Do you have a quick way to start this test?

Regards,
Ben
Re: [Xenomai-core] Timing Issues on x86_32 SMP
On Tue, May 6, 2008 at 11:27 AM, Benjamin ZORES [EMAIL PROTECTED] wrote:
> Gilles Chanteperdrix wrote:
>> Why not x86_64?
>
> Because I don't need 64 bits.
>
>> Have you run the latency test to check that you have no hardware
>> issue?
>
> Do you have a quick way to start this test?

cd /usr/xenomai/testsuite/latency
./run

-- 
Gilles