[Xenomai-core] BuildBot progress for sim, tqm860 with RTnet
Hi,

I was busy improving the buildbot setup and achieved the following:
- added a build slave for the simulator
- added a buildbot for a TQM860L with the Denx PPC 2.4 kernel (cannot yet build the RTnet code)
- patched buildbot to show names for shell build steps instead of the commands
- hacked buildbot to improve the displayed names of the build steps (e.g. "configure_xenomai" instead of "configure 2")

I will look at Philippe's idea about collecting Xenomai statistics later, because it demands more time and effort to implement.

The following details are probably only of interest for people interested in the TQM860 and/or RTnet. My setup for the TQM860 was the following:
- got Dan Kegel's crosstool 0.42
- compiled/installed the demo-ppc860 (gcc-3.4.1-glibc-2.3.3)
- used linuxppc_2_4_devel-2006-04-06-1735.tar.bz with a TQM860L_defconfig plus XENOMAI extensions
- got RTnet trunk via SVN

Could the interested parties please comment on whether this is a good combination, or whether they prefer something different? If you follow the logs, you can see each step and configuration option (except building the crosstool).

configure for RTnet fails with the following message:

    checking for RT-extension... /home/buildslave/bin/linuxppc_2_4_devel-2006-04-06-1735 (Xenomai 2.0.x)
    checking for Xenomai version... configure: error:
    *** Unsupported Xenomai version 2.1.50 in /home/buildslave/bin/linuxppc_2_4_devel-2006-04-06-1735

Is the error correct? Or should I build only against Xenomai 2.0?

Best regards
--
Niklaus Giger

_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] [draft PATCH] nested enable/disable irq calls
> Yes, I do have some remarks: due to legacy issues (I think to remember),
> we have a lot of unbalanced irq-enable/disable code out there. IRQs are
> currently enabled after registering a handler, but are not disabled on
> detach. That's because of problems with Linux when letting it take over
> a disabled IRQ.

Let's consider the possible use cases one by one:

1) A single interrupt object for a given IRQ in the primary domain.

I guess this is the most widespread use case for the legacy code.

Possible problems:

o unbalanced enable/disable calls - e.g. a few consecutive disable calls,
  but a single enable call is supposed to re-enable the IRQ line
  afterwards. The only workaround: review and fix such code :)

2) A few interrupt objects in the primary domain.

This is a new mode, so there should not be too much code out there that
uses it. Actually, here is a problem:

    rtdm_irq_request();  // doesn't enable the IRQ line
    ...
    rtdm_irq_enable();   // explicitly enables the IRQ line

The same code is called for every ISR that shares this IRQ line, so how
is the internal counter supposed to be used then? I.e. we should avoid
printing a warning when the counter becomes 0:

    int xnintr_enable (xnintr_t *intr)
    {
        int ret = 0;
        spl_t s;

        xnlock_get_irqsave(&nklock,s);

        switch (__xnintr_depth(intr,0))
            {
            case 0:
-               xnlogerr("xnintr_enable() : depth == 0. "
-                        "Unbalanced enable/disable calls for IRQ%d!\n",intr->irq);
                break;
            case 1:
                ret = xnarch_irq_enable(intr->irq);
            default:
                __xnintr_depth(intr,-1);
            }

        xnlock_put_irqrestore(&nklock,s);
        return ret;
    }

And, in fact, in shared mode a driver can't rely on the fact that the
IRQ line is still disabled after attaching to it.

driver 2:

    rtdm_irq_request(..., SHARED, ...);  // we have attached to the shared
                                         // IRQ line, which is already enabled
    ...
    // this code can't expect that it's executed with the IRQ line off
    rtdm_irq_enable();                   // explicitly enables the IRQ line

Does it make sense to enable the IRQ line when attaching to the line?

    rtdm_irq_request();  // the line is already enabled

3) The IRQ line is shared across domains (both primary and Linux).

Yep, the new interface doesn't fit this case well. Maybe it would be
possible to use the same "counter" accounting scheme for
xnintr_enable/disable() in the primary domain and enable_irq/disable_irq()
in the Linux one.

> Another thing I have on my mind ATM is providing an additional IRQ
> model: threaded IRQs. This is certainly not the best default model, but
> it could help in certain scenarios to reduce prio-inversions due to
> overloaded IRQ handler jobs (like we face from time to time with devices
> on the slow ISA-bus...).

In the simplest case, I guess, one may just defer part of the work to a
bottom half - a thread of a given priority. And the bottom halves
(threads) for different ISRs may have different priorities (== the
thread's priority).

Concerning generic support, the RTDM layer would likely be preferable,
but I'm not sure that all the necessary bits are available at this
layer. E.g. it would be better to run the following code from
xnintr_shirq_handler()

    while (intr) {
        s |= intr->isr(intr) & XN_ISR_BITMASK;
        ++intr->hits;
        intr = intr->next;
    }

already from the thread context.

--
Best regards,
Dmitry Adamushko
[Xenomai-core] nucleus vs. posix registry
Hi,

I noticed that the nucleus registry is currently not used by the POSIX skin, only by the native, VxWorks, and VRTX skins. But as it's always on in case of XENO_OPT_PERVASIVE, this may introduce about 9.7k (x86) of unused code (plus also some data).

Why not let the skins select from the nucleus what they need? [patch may follow when time permits]

Jan
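A hedged sketch of what such per-skin selection could look like in Kconfig terms; apart from XENO_OPT_PERVASIVE, the symbol names below are hypothetical and only illustrate the `select` mechanism, not the actual Xenomai Kconfig layout:

```
config XENO_OPT_REGISTRY
	bool
	help
	  Internal nucleus registry support. Not user-visible: it is
	  pulled in only by the skins that actually use it, instead of
	  being tied unconditionally to XENO_OPT_PERVASIVE.

config XENO_SKIN_NATIVE
	tristate "Native skin"
	select XENO_OPT_REGISTRY

config XENO_SKIN_POSIX
	tristate "POSIX skin"
	# no "select XENO_OPT_REGISTRY": this skin does not use it
```

With this shape, configuring only the POSIX skin would leave the registry code out of the build, saving the dead code mentioned above.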
Re: [Xenomai-core] Missing extern "C" { in include/posix/fcntl.h
Meier, Hans wrote:
> Hi everybody,
>
> Obviously a
>
> #ifdef __cplusplus
> extern "C" {
> #endif
>
> (only the first half of it; the closing '}' exists) is missing in
> include/posix/fcntl.h, see the patch below.
>
> The problem still exists in 2.1.0 as well as in trunk.

Patch applied, thanks.

--
Gilles Chanteperdrix.
[Xenomai-core] Missing extern "C" { in include/posix/fcntl.h
Hi everybody,

Obviously a

    #ifdef __cplusplus
    extern "C" {
    #endif

(only the first half of it; the closing '}' exists) is missing in include/posix/fcntl.h, see the patch below.

The problem still exists in 2.1.0 as well as in trunk.

Hans

--- include/posix/fcntl.h	2006-04-12 10:24:14.0 +0200
+++ include/posix/fcntl.h	2006-04-12 10:24:47.0 +0200
@@ -40,6 +40,10 @@
 #include
 #include_next

+#ifdef __cplusplus
+extern "C" {
+#endif
+
 int __real_open(const char *path, int oflag, ...);

 #ifdef __cplusplus
Re: [Xenomai-core] [draft PATCH] nested enable/disable irq calls
Hi Jan,

I'll read your proposals more carefully a bit later; I just want to note now that the code I have posted (and actually the current shirq code) has a few nasty (maybe hidden at first glance) synchronization-related bugs. Brrr... although one will not encounter them when RTDM is used for driver development, one may happily find himself in trouble using the native or POSIX skin. I'm looking for a way to solve them more gracefully now.

--
Best regards,
Dmitry Adamushko
[Xenomai-core] [REQUEST] eliminate the rthal_critical_enter/exit() from rthal_irq_request()
Hi,

The following question/suggestion: it would be good to eliminate the use of rthal_critical_enter/exit() from rthal_irq_request(), if it's not strictly necessary.

The proposal:

    int rthal_irq_request (unsigned irq,
                           rthal_irq_handler_t handler,
                           rthal_irq_ackfn_t ackfn,
                           void *cookie)
    {
-       unsigned long flags;
        int err = 0;

        if (handler == NULL || irq >= IPIPE_NR_IRQS)
            return -EINVAL;

-       flags = rthal_critical_enter(NULL);
-
-       if (rthal_irq_handler(&rthal_domain, irq) != NULL)
-           {
-           err = -EBUSY;
-           goto unlock_and_exit;
-           }
-
        err = rthal_virtualize_irq(&rthal_domain, irq, handler, cookie, ackfn,
-                                  IPIPE_DYNAMIC_MASK);
+                                  IPIPE_DYNAMIC_MASK|IPIPE_EXCLUSIVE_MASK);

-    unlock_and_exit:
-
-       rthal_critical_exit(flags);
-
        return err;
    }

IPIPE_EXCLUSIVE_MASK causes a -EBUSY error to be returned by ipipe_virtualize_irq() when handler != NULL and the current ipd->irqs[irq].handler != NULL. (IPIPE_EXCLUSIVE_MASK is actually not used anywhere at the moment, though ipipe_catch_event() is mentioned in the comments.)

Another variant: ipipe_virtualize_irq() could always return -EBUSY when handler != NULL and the current ipd->irqs[irq].handler != NULL, without taking IPIPE_EXCLUSIVE_FLAG into account. This should work if:

o all the ipipe_domain structs are "zeroed" upon initialization (ok, in case of static or global);
o ipipe_virtualize_irq(..., handler=NULL, ...) is always called between possible consecutive ipipe_virtualize_irq(..., handler!=NULL, ...) calls.

But, yep, this way we would enforce a policy for ipipe_virtualize_irq(), so the use of IPIPE_EXCLUSIVE_FLAG is likely better, esp. for the nucleus, where every rthal_irq_request() has a matching rthal_irq_release() call.

Why do I want to eliminate it?

o Any function that makes use of critical_enter/exit() must not be called while a lock (e.g. the "nklock") is held. Ok, xnintr_attach() was always the case, and it's used properly, e.g. in native::rt_intr_create(). But xnintr_detach() is now the case too (heh... I overlooked it in the first instance), because xnintr_shirq_detach() is synchronized wrt xnintr_shirq_attach() using critical_enter/exit(). This is only because xnintr_shirq_attach() makes use of rthal_irq_request() ---> rthal_critical_enter/exit(), hence the nklock can't be used in xnintr_shirq_*. Yep, another approach would be to enforce the policy that both xnintr_attach() and xnintr_detach() must never be called while the nklock is held (say, rt_intr_delete() should be rewritten)... but I guess the better solution is to eliminate the critical_enter/exit() from rthal_irq_request().

o There would be no need to use critical_enter/exit() in xnintr_shirq_* anymore; the nklock could be used instead. This would solve one synchronization problem in the xnintr_* code, though there is yet another, more complex one I'm banging my head on now :o)

--
Best regards,
Dmitry Adamushko