[Xenomai-core] 16550 compile err: ‘RTDM_IRQ_ENABLE’ undeclared (first use in this function)

2006-02-28 Thread Jim Cromie

 LD  drivers/xenomai/16550A/built-in.o
 CC [M]  drivers/xenomai/16550A/16550A.o
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c: In function ‘rt_16550_interrupt’:
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: ‘RTDM_IRQ_ENABLE’ undeclared (first use in this function)
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: (Each undeclared identifier is reported only once
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: for each function it appears in.)

make[4]: *** [drivers/xenomai/16550A/16550A.o] Error 1
make[3]: *** [drivers/xenomai/16550A] Error 2
make[2]: *** [drivers/xenomai] Error 2
make[1]: *** [drivers] Error 2
make: *** [_all] Error 2


I de-configured the 16550 driver and it built fine, so I suspect some
recent change missed this item.


That said, I haven't tried _NOENABLE, since I'm guessing blind.



Re: [Xenomai-core] 16550 compile err: ‘RTDM_IRQ_ENABLE’ undeclared (first use in this function)

2006-02-28 Thread Jan Kiszka
Jim Cromie wrote:
>  LD  drivers/xenomai/16550A/built-in.o
>  CC [M]  drivers/xenomai/16550A/16550A.o
> /mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c: In function ‘rt_16550_interrupt’:
> /mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: ‘RTDM_IRQ_ENABLE’ undeclared (first use in this function)
> /mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: (Each undeclared identifier is reported only once
> /mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: for each function it appears in.)
> make[4]: *** [drivers/xenomai/16550A/16550A.o] Error 1
> make[3]: *** [drivers/xenomai/16550A] Error 2
> make[2]: *** [drivers/xenomai] Error 2
> make[1]: *** [drivers] Error 2
> make: *** [_all] Error 2
> 
> 
> I de-configured the 16550 driver and it built fine, so I suspect some
> recent change missed this item.
> 
> That said, I haven't tried _NOENABLE, since I'm guessing blind.
> 

Yeah, I'm on it. Here is half of the patch I'm currently preparing:

--- ../ksrc/drivers/16550A/16550A.c (revision 624)
+++ ../ksrc/drivers/16550A/16550A.c (working copy)
@@ -238,7 +238,7 @@
     int rbytes = 0;
     int events = 0;
     int modem;
-    int ret = RTDM_IRQ_PROPAGATE;
+    int ret = RTDM_IRQ_NONE;
 
     ctx = rtdm_irq_get_arg(irq_context, struct rt_16550_context);
 
@@ -266,7 +266,7 @@
             events |= RTSER_EVENT_MODEMLO;
         }
 
-        ret = RTDM_IRQ_ENABLE | RTDM_IRQ_HANDLED;
+        ret = RTDM_IRQ_HANDLED;
     }
 
     if (ctx->in_nwait > 0) {


Jan
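
For illustration, a minimal sketch of an RTDM interrupt handler under the return convention this patch establishes. The context structure and the two device helpers are hypothetical; only rtdm_irq_get_arg() and the RTDM_IRQ_* values come from the thread above.

#include <rtdm/rtdm_driver.h>

/* Hypothetical per-device context; only the calling pattern matters. */
struct my_ctx {
    int dummy;
};

/* Hypothetical device helpers, assumed to be defined elsewhere. */
static int device_raised_irq(struct my_ctx *ctx);
static void service_device(struct my_ctx *ctx);

static int my_irq_handler(rtdm_irq_t *irq_context)
{
    struct my_ctx *ctx = rtdm_irq_get_arg(irq_context, struct my_ctx);

    if (!device_raised_irq(ctx))
        return RTDM_IRQ_NONE;    /* not ours (was RTDM_IRQ_PROPAGATE) */

    service_device(ctx);
    return RTDM_IRQ_HANDLED;     /* no RTDM_IRQ_ENABLE or'ed in anymore */
}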






[Xenomai-core] Re: [Xenomai-help] rt_task_wait_period() and overruns

2006-02-28 Thread Steven Seeger
I do not recall having this problem with fusion, but I'll take your word on
it. I don't have time to go back and check. :)

Purging the overrun count when rt_task_wait_period() is called may work but
not for all conditions. For example, say I am monitoring a patient's
heartbeat by taking an A/D reading every 1 ms in order to build an ECG
waveform. If I have 4 overruns, I've missed crucial data, and that is a
serious problem. Of course, it isn't the RTOS's job to create an error
condition in this fashion. But on the other hand, it wouldn't be desirable
to have 4 duplicate measurements in such a waveform, either. The user could
already check the overrun count himself, if desired.

The problem with purging the overrun count is that a lot of periodic threads
use counters to perform certain actions. Say my thread runs every 1 ms, so
every 500 times I want to toggle an LED to make it blink at a rate of 2 Hz.
If the overrun counter is purged, that is going to mess up the counter. With
the current behavior, if there is a momentary loss of realtime due to a
higher-priority thread going nuts, the light will still most likely blink at
the right time.
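
As a concrete sketch of that counter pattern, assuming the 2.0-era native skin where rt_task_wait_period() takes no argument and flags overruns via -ETIMEDOUT; toggle_led() is a hypothetical board-specific helper.

#include <errno.h>
#include <native/task.h>
#include <native/timer.h>

#define PERIOD_NS   1000000ULL  /* 1 ms period (oneshot timer assumed) */
#define BLINK_TICKS 500         /* toggle every 500 periods -> 2 Hz blink */

static void toggle_led(void);   /* hypothetical board-specific helper */

void blink_body(void *arg)
{
    unsigned long ticks = 0;

    (void)arg;
    rt_task_set_periodic(NULL, TM_NOW, PERIOD_NS);

    for (;;) {
        if (rt_task_wait_period() == -ETIMEDOUT) {
            /* Overrun(s) occurred. If the call silently purged the
             * count, 'ticks' would drift against wall-clock time. */
        }
        if (++ticks % BLINK_TICKS == 0)
            toggle_led();
    }
}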

Perhaps the best option would be to make this a task property that users can
set? Keep the current behavior by default, but purge overruns if they so
desire. The cost of this would be only one branch condition in
rt_task_wait_period().

Steven


On 2/28/06 9:53 AM, "Philippe Gerum" <[EMAIL PROTECTED]> wrote:

> Steven Seeger wrote:
> Right (except that fusion never exhibited the behaviour you described,
> though). Still, there is an interesting question that remains which you
> indirectly brought in, and which is the real issue to worry about: does
> rt_task_wait_period(), as it is now, behave in the best interest of users
> who happen to use it properly?
> 
> I mean: if the application misses several deadlines because something is
> going wild in there, wouldn't the recovery procedure be easier if one knows
> at once how many deadlines have been missed in a raw, without having to
> call the RTOS back. IOW, do we want to purge the overrun count after the
> first notification and make rt_task_wait_period return this count (e.g.
> ala Chorus/OS's thread pools), or would it be preferable to keep the things
> the way they are now?
> 
> Breaking the API again is also an issue, albeit we already broke it for a
> few other calls when working on v2.1 anyway.
> 
> Open question. Something like a poll, actually.




[Xenomai-core] rt_task_wait_period() and overruns

2006-02-28 Thread Philippe Gerum

Steven Seeger wrote:

All right, all right. I surrender. *waves the white flag*

Let's just say that I saw something different from fusion/classic RTAI and
was reporting it as a possible bug incorrectly, all right?



Right (except that fusion never exhibited the behaviour you described, though). 
Still, there is an interesting question that remains which you indirectly brought 
in, and which is the real issue to worry about: does rt_task_wait_period(), as it 
is now, behave in the best interest of users who happen to use it properly?


I mean: if the application misses several deadlines because something is going 
wild in there, wouldn't the recovery procedure be easier if one knows at once how 
many deadlines have been missed in a raw, without having to call the RTOS back. 
IOW, do we want to purge the overrun count after the first notification and make 
rt_task_wait_period return this count (e.g. ala Chorus/OS's thread pools), or 
would it be preferable to keep the things the way they are now?


Breaking the API again is also an issue, albeit we already broke it for a few 
other calls when working on v2.1 anyway.


Open question. Something like a poll, actually.

--

Philippe.



[Xenomai-core] Re: [Xenomai-help] rt_task_wait_period() and overruns

2006-02-28 Thread Philippe Gerum

Philippe Gerum wrote:

Steven Seeger wrote:


All right, all right. I surrender. *waves the white flag*

Let's just say that I saw something different from fusion/classic RTAI and
was reporting it as a possible bug incorrectly, all right?



Right (except that fusion never exhibited the behaviour you described, 
though). Still, there is an interesting question that remains which you 
indirectly brought in, and which is the real issue to worry about: does 
rt_task_wait_period(), as it is now, behave in the best interest of 
users who happen to use it properly?


I mean: if the application misses several deadlines because something is 
going wild in there, wouldn't the recovery procedure be easier if one 
knows at once how many deadlines have been missed in a raw,


"in a row". Typical froggie English, sorry.

 without
having to call the RTOS back. IOW, do we want to purge the overrun count 
after the first notification and make rt_task_wait_period return this 
count (e.g. ala Chorus/OS's thread pools), or would it be preferable to 
keep the things the way they are now?


Breaking the API again is also an issue, albeit we already broke it for 
a few other calls when working on v2.1 anyway.


Open question. Something like a poll, actually.




--

Philippe.



[Xenomai-core] Re: [PATCH] Shared interrupts (yet another movie :)

2006-02-28 Thread Philippe Gerum

Dmitry Adamushko wrote:


Hi there,

I have explicitly cc'ed Gilles as this patch affects the posix skin.

In the light of the recent discussions, the AUTOENA flag has been
converted to NOAUTOENA, and the IRQ line is now re-enabled by default on
return from xnintr_irq_handler() and its shirq brothers.

Also XN_ISR_CHAINED -> XN_ISR_PROPAGATE.

I'm still not satisfied with the results, namely the return values of the
ISR. But, well, that is quite a separate question from the shirq support,
so the latter should not remain pending only because of it.


I would still like to see something along the lines of scalar values: NONE,
HANDLED, PROPAGATE, with xnintr_disable() being called from the ISR to
defer IRQ line enabling (not .ending; PROPAGATE does that).

(*)

Currently, there is an XN_ISR_NOENABLE bit which asks the real-time layer
to defer the IRQ line's .end (and not just its enabling, mind you) until
later. In the common case, xnarch_end_irq() must then be called by the
rt_task that acts as a bottom half (not just xnintr_enable(); that may not
work on ppc).
This adds a bit of confusion, and the (*) scheme above would avoid it, so
this is subject to change in the future.
As I pointed out in another message, the implementation for PPC is not yet
clear at the moment. That's it...




That's great.


Ok, are there any objections as to the current patch? If no, please apply.



Applied, thanks.

CHANGELOG.patch is here 
https://mail.gna.org/public/xenomai-core/2006-02/msg00154.html


--
Best regards,
Dmitry Adamushko



--

Philippe.
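
To make the deferred-.end pattern discussed above concrete, a rough sketch using the nucleus-level names from this thread. The IRQ number, the wakeup mechanism and the device helpers are hypothetical, and the exact PPC-safe sequence was, as noted, still open.

#include <nucleus/intr.h>

#define MY_IRQ 7                     /* hypothetical IRQ line */

static void ack_device(void);        /* hypothetical device-level ack */
static void wake_bottom_half(void);  /* hypothetical rt_task wakeup */
static void process_data(void);      /* hypothetical payload handling */

/* Top half: silence the device, leave the IRQ line un-.ended, and
 * defer the heavy lifting to a real-time task. */
static int my_isr(xnintr_t *intr)
{
    ack_device();
    wake_bottom_half();
    return XN_ISR_HANDLED | XN_ISR_NOENABLE;  /* .end is deferred */
}

/* Bottom half, run by the rt_task acting as deferred handler: once the
 * work is done, the line must be .ended, not merely enabled. */
static void bottom_half_body(void)
{
    process_data();
    xnarch_end_irq(MY_IRQ);
}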



[Xenomai-core] Re: [PATCH] Shared interrupts (yet another movie :)

2006-02-28 Thread Philippe Gerum

Dmitry Adamushko wrote:


On 28/02/06, *Jan Kiszka* <[EMAIL PROTECTED]> wrote:


Dmitry Adamushko wrote:
 > ...
 > Ok, are there any objections as to the current patch? If no, please apply.
 >

Oops, I found a minor problem:

make[2]: Entering directory `/usr/src/xenomai/build/doc/doxygen'
doxygen
/usr/src/xenomai/ksrc/skins/native/intr.c:523: Warning: no matching file member found for
int rt_intr_create(RT_INTR *intr, unsigned irq, int mode)
Possible candidates:
  int rt_intr_create(RT_INTR *intr, const char *name, unsigned irq, int mode)


...

int rt_intr_bind(rt_intr_placeholder *intr, unsigned irq, RTIME timeout)
Possible candidates:
  int rt_intr_bind(RT_INTR *intr, const char *name, RTIME timeout)
  int rt_intr_bind(RT_INTR *intr, const char *name, RTIME timeout)

Seems the doc is not yet up-to-date.


Thanks. I have overlooked some parts. This patch fixes it up (should be 
applied after the main patch).


 


Applied, thanks.




Jan



--
Best regards,
Dmitry Adamushko



--

Philippe.



[Xenomai-core] Re: [Xenomai-help] negative values of latency/klatency

2006-02-28 Thread Philippe Gerum

Jan Kiszka wrote:

Jan Kiszka wrote:


Rudolf Marek wrote:


...
RTT|  00:00:01
RTH|klat min|klat avg|klat max| overrun|---klat best|--klat worst
RTD|4767|4929|   15191|   0|4767|   15191
RTD|4767|4808|8282|   0|4767|   15191
RTD|4767|4808|8080|   0|4767|   15191
RTD|4808|4808|7838|   0|4767|   15191
RTD|4767|4808|7272|   0|4767|   15191
RTT|  00:00:06
RTH|klat min|klat avg|klat max| overrun|---klat best|--klat worst
RTD|4808|4808|7555|   0|4767|   15191
RTD|4767|4808|7959|   0|4767|   15191
RTD|4767|4808|7393|   0|4767|   15191
RTD|4808|4808|7191|   0|4767|   15191
RTD|4767|4808|7313|   0|4767|   15191

Is this a bug or a feature? Can someone shed some light?
It would also be good to print the units (ns) next to the numbers.



That is likely just a layout issue of the latency tool's output. We could
simply dump something like "All latencies in nanoseconds" during
start-up. Would this be more helpful?




The output of latency is indeed inconsistent: histogram and stats are
printed in microseconds, while intermediate and overall latencies go out
as nanoseconds. Any objections to switching to micros with 3 digits after
the decimal point? The patch is ready to be applied.


Let's roll.



== Sampling period: 150 us
== Test mode: in-kernel timer handler
== All results in microseconds
warming up...
RTT|  00:00:01  (in-kernel timer handler, 150 us period)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD|   8.935|  13.824|  39.951|   0|   8.935|      39.951
RTD|   8.998|  14.619|  36.867|   0|   8.935|      39.951
RTD|   8.576|  14.604|  37.417|   0|   8.576|      39.951
RTD|   3.018|  14.623|  40.466|   0|   3.018|      40.466
[grabbed on a low-end board]

Jan








--

Philippe.



[Xenomai-core] Re: [Xenomai-help] negative values of latency/klatency

2006-02-28 Thread Jan Kiszka
Jan Kiszka wrote:
> Rudolf Marek wrote:
>> ...
>> RTT|  00:00:01
>> RTH|klat min|klat avg|klat max| overrun|---klat best|--klat worst
>> RTD|4767|4929|   15191|   0|4767|   15191
>> RTD|4767|4808|8282|   0|4767|   15191
>> RTD|4767|4808|8080|   0|4767|   15191
>> RTD|4808|4808|7838|   0|4767|   15191
>> RTD|4767|4808|7272|   0|4767|   15191
>> RTT|  00:00:06
>> RTH|klat min|klat avg|klat max| overrun|---klat best|--klat worst
>> RTD|4808|4808|7555|   0|4767|   15191
>> RTD|4767|4808|7959|   0|4767|   15191
>> RTD|4767|4808|7393|   0|4767|   15191
>> RTD|4808|4808|7191|   0|4767|   15191
>> RTD|4767|4808|7313|   0|4767|   15191
>>
>> Is this a bug or a feature? Can someone shed some light?
>> It would also be good to print the units (ns) next to the numbers.
>>
> 
> That is likely just a layout issue of the latency tool's output. We could
> simply dump something like "All latencies in nanoseconds" during
> start-up. Would this be more helpful?
> 

The output of latency is indeed inconsistent: histogram and stats are
printed in microseconds, while intermediate and overall latencies go out
as nanoseconds. Any objections to switching to micros with 3 digits after
the decimal point? The patch is ready to be applied.

== Sampling period: 150 us
== Test mode: in-kernel timer handler
== All results in microseconds
warming up...
RTT|  00:00:01  (in-kernel timer handler, 150 us period)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD|   8.935|  13.824|  39.951|   0|   8.935|      39.951
RTD|   8.998|  14.619|  36.867|   0|   8.935|      39.951
RTD|   8.576|  14.604|  37.417|   0|   8.576|      39.951
RTD|   3.018|  14.623|  40.466|   0|   3.018|      40.466
[grabbed on a low-end board]

Jan
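
For reference, a tiny sketch of the conversion such a patch would perform, assuming latencies are tracked internally as nanosecond integers; the field width is illustrative, not the tool's actual layout.

#include <stdio.h>

/* Print a nanosecond latency as microseconds with 3 decimals. */
static void print_lat_us(long ns)
{
    printf("|%8.3f", (double)ns / 1000.0);
}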





Re: [Xenomai-core] C++ Support for VxWorks Skin of Xenomai?

2006-02-28 Thread Jan Kiszka
Wu, John wrote:
> We at Xerox El Segundo, Calif. are new users of the VxWorks skin of Xenomai.
> We successfully brought up the VxWorks skin of Xenomai 2.0.1 on an x86
> platform and ran a VxWorks test program in C.
> 
> The question we have is how to build and compile the VxWorks skin of
> Xenomai to support C++ code?
> 

Did you already try to compile some C++ program against the VxWorks
headers of Xenomai? Kernel or user space? Did you notice any problems?
If so, please post the compiler output or even some demo code.

Jan
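
One thing worth checking in such a setup: C++ callers need the skin's C declarations wrapped in extern "C" linkage. Whether the VxWorks-skin headers of that era already carry such guards is exactly the open question; the usual pattern looks like this (illustrative header, hypothetical declaration).

#ifndef VXWORKS_SKIN_EXAMPLE_H
#define VXWORKS_SKIN_EXAMPLE_H

#ifdef __cplusplus
extern "C" {
#endif

/* Hypothetical placeholder for a skin call's declaration. */
int exampleVxWorksCall(int arg);

#ifdef __cplusplus
}
#endif

#endif /* VXWORKS_SKIN_EXAMPLE_H */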






[Xenomai-core] Re: [PATCH] Shared interrupts (yet another movie :)

2006-02-28 Thread Dmitry Adamushko
On 28/02/06, Jan Kiszka <[EMAIL PROTECTED]> wrote:

> Dmitry Adamushko wrote:
> > ...
> > Ok, are there any objections as to the current patch? If no, please apply.
>
> Oops, I found a minor problem:
>
> make[2]: Entering directory `/usr/src/xenomai/build/doc/doxygen'
> doxygen
> /usr/src/xenomai/ksrc/skins/native/intr.c:523: Warning: no matching file member found for
> int rt_intr_create(RT_INTR *intr, unsigned irq, int mode)
> Possible candidates:
>   int rt_intr_create(RT_INTR *intr, const char *name, unsigned irq, int mode)
>
> ...
>
> int rt_intr_bind(rt_intr_placeholder *intr, unsigned irq, RTIME timeout)
> Possible candidates:
>   int rt_intr_bind(RT_INTR *intr, const char *name, RTIME timeout)
>   int rt_intr_bind(RT_INTR *intr, const char *name, RTIME timeout)
>
> Seems the doc is not yet up-to-date.

Thanks. I have overlooked some parts. This patch fixes it up (should be
applied after the main patch).

> Jan

--
Best regards,
Dmitry Adamushko


shirq-doc-update.patch
Description: Binary data
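
For reference, a minimal hypothetical use of the documented user-space signature the doxygen warning points at; the descriptor name, IRQ number and mode flags are placeholders.

#include <native/intr.h>

static RT_INTR my_intr;

int init_intr(void)
{
    /* Matches the candidate signature:
     * int rt_intr_create(RT_INTR *intr, const char *name,
     *                    unsigned irq, int mode) */
    int err = rt_intr_create(&my_intr, "my-irq", 7, 0);
    if (err)
        return err;

    /* A task would then typically block on the line via rt_intr_wait(). */
    return 0;
}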