Re: [Xenomai-core] [PATCH] provide ipipe_tracing via nucleus interface

2006-06-23 Thread Philippe Gerum
On Thu, 2006-06-22 at 12:54 +0200, Jan Kiszka wrote:
> Hi,
> 
> having to load xeno_timerbench and to open its device just for
> triggering the I-pipe tracer was not a smart decision of mine. This
> patch makes it more comfortable to call the tracer from user space.

> Index: include/asm-generic/syscall.h
> ===
> --- include/asm-generic/syscall.h (revision 1252)
> +++ include/asm-generic/syscall.h (working copy)
> @@ -30,6 +30,13 @@
>  #define __xn_sys_info   4   /* xnshadow_get_info(muxid,&info) */
>  #define __xn_sys_arch   5   /* r = xnarch_local_syscall(args) */
>  
> +#define __xn_sys_trace_begin    6   /* ipipe_trace_begin(v) */
> +#define __xn_sys_trace_end      7   /* ipipe_trace_end(v) */
> +#define __xn_sys_trace_freeze   8   /* ipipe_trace_freeze(v) */
> +#define __xn_sys_trace_specl    9   /* ipipe_trace_special(special_id, v) */
> +#define __xn_sys_trace_mreset   10  /* ipipe_trace_max_reset() */
> +#define __xn_sys_trace_freset   11  /* ipipe_trace_frozen_reset() */
> +

Ok for providing a tracer syscall from the nucleus table, but let's
not pollute the namespace uselessly. We could just have a single
tracer entry point, using the first arg as a function code for begin,
end, freeze etc. Given that those ops are not on the fast path, there
is nothing to gain in having them as separate calls. See __xn_sys_arch
for ARM.

>  
> Index: include/nucleus/ipipe_trace.h
> ===

This file should go to include/asm-generic/ since it depends on the
underlying real-time enabler (i.e. I-pipe). This way, there would be
no need to check for __XENO_SIM__.

> --- include/nucleus/ipipe_trace.h (Revision 0)
> +++ include/nucleus/ipipe_trace.h (Revision 0)
> @@ -0,0 +1,82 @@
> +/*
> Index: src/testsuite/latency/latency.c
> ===
> --- src/testsuite/latency/latency.c   (revision 1252)
> +++ src/testsuite/latency/latency.c   (working copy)
> @@ -12,6 +12,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  RT_TASK latency_task, display_task;
>  
> @@ -130,8 +131,7 @@ void latency (void *cookie)
>  
>  if (freeze_max && (dt > gmaxjitter) && !(finished || warmup))
>  {
> -rt_dev_ioctl(benchdev, RTBNCH_RTIOC_REFREEZE_TRACE,
> - rt_timer_tsc2ns(dt));
> +ipipe_trace_refreeze(rt_timer_tsc2ns(dt));

I don't like the idea of spreading ipipe-something symbols and
dependencies all over the entire source code, including the generic part,
especially considering that at some point we are going to have
preempt-rt as the other possible real-time enabler, the way
Adeos is used now. We should use something more generic.
"tracer*" would be ok, I guess.

-- 
Philippe.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [BUG] normal pthreads broken

2006-06-23 Thread Philippe Gerum
On Thu, 2006-06-22 at 18:12 +0200, Jan Kiszka wrote:
> Hi Gilles,
> 
> I think some regression slipped into the rt-pthread lib. This example no
> longer works on my box (thread is not executed):

The issue is in src/skins/posix/thread.c. The trampoline does not even
attempt to fire the thread body for policy == SCHED_OTHER. Will fix.
This said, I still wonder why cyclic is affected, since it should only
create SCHED_FIFO threads, but tracing it a bit, the issue is indeed the
same one.

> 
> #include 
> #include 
> #include 
> 
> void *thread(void *arg)
> {
> printf("thread\n");
> return 0;
> }
> 
> main()
> {
> pthread_t thr;
> 
> mlockall(MCL_CURRENT|MCL_FUTURE);
> 
> printf("create = %d\n",
>pthread_create(&thr, NULL, thread, NULL));
> pause();
> }
> 
> 
> This also explains why the cyclic test is broken.
> 
> Jan
> 
-- 
Philippe.





Re: [Xenomai-core] [BUG] normal pthreads broken

2006-06-23 Thread Philippe Gerum
On Fri, 2006-06-23 at 10:50 +0200, Philippe Gerum wrote:
> On Thu, 2006-06-22 at 18:12 +0200, Jan Kiszka wrote:
> > Hi Gilles,
> > 
> > I think some regression slipped into the rt-pthread lib. This example no
> > longer works on my box (thread is not executed):
> 
> The issue is in src/skins/posix/thread.c. The trampoline does not even
> attempt to fire the thread body for policy == SCHED_OTHER. Will fix.
> This said, I still wonder why cyclic is affected, since it should only
> create SCHED_FIFO threads,

Oops, wrong. It first creates normal threads, then calls setschedparam
to move them to the FIFO policy. So that's ok.

>  but tracing it a bit, the issue is indeed the
> same one.
> 
> > 
> > #include 
> > #include 
> > #include 
> > 
> > void *thread(void *arg)
> > {
> > printf("thread\n");
> > return 0;
> > }
> > 
> > main()
> > {
> > pthread_t thr;
> > 
> > mlockall(MCL_CURRENT|MCL_FUTURE);
> > 
> > printf("create = %d\n",
> >pthread_create(&thr, NULL, thread, NULL));
> > pause();
> > }
> > 
> > 
> > This also explains why the cyclic test is broken.
> > 
> > Jan
> > 
-- 
Philippe.





Re: [Xenomai-core] RTDM driver add-on infrastructure

2006-06-23 Thread Wolfgang Grandegger

Jan Kiszka wrote:

Wolfgang Grandegger wrote:

Hello,

I'm currently implementing an RTDM real-time CAN driver, which raises
the problem of adding the driver to the Xenomai source tree. My first
idea was to provide RTCAN as a patch for Xenomai:


So you prefer to maintain RTCAN out-of-tree on the long term? What are
the reasons?


No, when the code is stable it can go into the Xenomai repository, like 
for the Linux kernel. It clearly simplifies maintenance, at least for 
the maintainer of the driver. Nevertheless, an add-on facility would be 
nice to have.





  $ cd xenomai
  $ patch -p1 < xenomai-rtcan-add-on.patch
  $ scripts/prepare_kernel ...
  ...
  $ 
  $ 

This does not work because autoconf files are needed to copy header files
to the installation path. Is this really necessary?
Another issue is where to put utility and test programs. Making them
without autoconf and friends works by using xeno-config. But they should
be installed with make install as well. Likely there are other issues.

Any ideas or comments on how to provide a generic RTDM driver add-on
infrastructure?


When you first talked about an "RTDM plugin" interface for Xenomai, I
got the idea of dragging external sources into the Xenomai kernel build
process. I haven't thought this through technically yet, but it would
allow driver source packages to remain externally maintained while still
giving them the option to be built into the kernel.

Ok, let's think about this for a while: we would need some management
script(s) to link an external source tree into the config and build
process, then remove it again, and probably to list the
currently active plugins. Should be feasible without huge magic,
shouldn't it? But does this make sense, is it desirable (to me it is
when I think about making RTnet build cleanly against 2.6.17 yesterday...)?


À la Xenomai's prepare_kernel script, which mainly adds links to the 
kernel tree and modifies some Makefiles and Kconfig (Config.in) files. 
Removing an add-on does not seem that important to me. This sounds 
reasonable.


Adding the RTCAN driver to Xenomai really solved a lot of build and 
installation issues without having to care about kernel version 2.4 or 2.6 :-).



This does not address your user mode utils, but I think they should
rather be distributed independently (something I have in mind for RTnet
as well once we ever switch from /dev/rtnet to some RTDM socket/device
for config work). The required rtdm/rtcan.h should be merged into
Xenomai, yet unmerged revisions could alternatively come with the
rtcan-utils package to make it build (autoconf is your friend to detect
the available revision).


I think the add-on package should provide both the driver _and_ the 
user-space utility and test programs.


Wolfgang.






Re: [Xenomai-core] [BUG] normal pthreads broken

2006-06-23 Thread Jan Kiszka
Philippe Gerum wrote:
> On Thu, 2006-06-22 at 18:12 +0200, Jan Kiszka wrote:
>> Hi Gilles,
>>
>> I think some regression slipped into the rt-pthread lib. This example no
>> longer works on my box (thread is not executed):
> 
> The issue is in src/skins/posix/thread.c. The trampoline does not even
> attempt to fire the thread body for policy == SCHED_OTHER. Will fix.
> This said, I still wonder why cyclic is affected, since it should only
> create SCHED_FIFO threads, but tracing it a bit, the issue is indeed the
> same one.

It creates normal threads and then calls pthread_setschedparam.

> 
>> #include 
>> #include 
>> #include 
>>
>> void *thread(void *arg)
>> {
>> printf("thread\n");
>> return 0;
>> }
>>
>> main()
>> {
>> pthread_t thr;
>>
>> mlockall(MCL_CURRENT|MCL_FUTURE);
>>
>> printf("create = %d\n",
>>pthread_create(&thr, NULL, thread, NULL));
>> pause();
>> }
>>
>>
>> This also explains why the cyclic test is broken.
>>
>> Jan
>>






Re: [Xenomai-core] [PATCH] provide ipipe_tracing via nucleus interface

2006-06-23 Thread Jan Kiszka
Philippe Gerum wrote:
> On Thu, 2006-06-22 at 12:54 +0200, Jan Kiszka wrote:
>> Hi,
>>
>> having to load xeno_timerbench and to open its device just for
>> triggering the I-pipe tracer was not a smart decision of mine. This
>> patch makes it more comfortable to call the tracer from user space.
> 
>> Index: include/asm-generic/syscall.h
>> ===
>> --- include/asm-generic/syscall.h (revision 1252)
>> +++ include/asm-generic/syscall.h (working copy)
>> @@ -30,6 +30,13 @@
>>  #define __xn_sys_info   4   /* xnshadow_get_info(muxid,&info) */
>>  #define __xn_sys_arch   5   /* r = xnarch_local_syscall(args) */
>>  
>> +#define __xn_sys_trace_begin    6   /* ipipe_trace_begin(v) */
>> +#define __xn_sys_trace_end      7   /* ipipe_trace_end(v) */
>> +#define __xn_sys_trace_freeze   8   /* ipipe_trace_freeze(v) */
>> +#define __xn_sys_trace_specl    9   /* ipipe_trace_special(special_id, v) */
>> +#define __xn_sys_trace_mreset   10  /* ipipe_trace_max_reset() */
>> +#define __xn_sys_trace_freset   11  /* ipipe_trace_frozen_reset() */
>> +
> 
> Ok for providing a tracer syscall from the nucleus table, but let's
> not pollute the namespace uselessly. We could just have a single
> tracer entry point, using the first arg as a function code for begin,
> end, freeze etc. Given that those ops are not on the fast path, there
> is nothing to gain in having them as separate calls. See __xn_sys_arch
> for ARM.

Ok, will change.

> 
>>  
>> Index: include/nucleus/ipipe_trace.h
>> ===
> 
> This file should go to include/asm-generic/ since it depends on the
> underlying real-time enabler (i.e. I-pipe). This way, there would be
> no need to check for __XENO_SIM__.

Ok, but how is the user supposed to include the API then? Or can we drag
it in implicitly somehow? That would be even nicer, I think.

> 
>> --- include/nucleus/ipipe_trace.h(Revision 0)
>> +++ include/nucleus/ipipe_trace.h(Revision 0)
>> @@ -0,0 +1,82 @@
>> +/*
>> Index: src/testsuite/latency/latency.c
>> ===
>> --- src/testsuite/latency/latency.c  (revision 1252)
>> +++ src/testsuite/latency/latency.c  (working copy)
>> @@ -12,6 +12,7 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include 
>>  
>>  RT_TASK latency_task, display_task;
>>  
>> @@ -130,8 +131,7 @@ void latency (void *cookie)
>>  
>>  if (freeze_max && (dt > gmaxjitter) && !(finished || warmup))
>>  {
>> -rt_dev_ioctl(benchdev, RTBNCH_RTIOC_REFREEZE_TRACE,
>> - rt_timer_tsc2ns(dt));
>> +ipipe_trace_refreeze(rt_timer_tsc2ns(dt));
> 
> I don't like the idea of spreading ipipe-something symbols and
> dependencies all over the entire source code, including the generic part,
> especially considering that at some point we are going to have
> preempt-rt as the other possible real-time enabler, the way
> Adeos is used now. We should use something more generic.
> "tracer*" would be ok, I guess.
> 

I'm going to rescan Ingo's API to define a common interface where
feasible (their own user space API seems to hide behind gettimeofday).
BTW, it looks like that tracer will not make it into mainline soon - I
have noticed no further pushing recently.

Jan



Re: [Xenomai-core] Improved xeno-test: Ready for checkin

2006-06-23 Thread Philippe Gerum
On Thu, 2006-06-22 at 23:37 +0200, Niklaus Giger wrote:
> Hi
> 
> Here is my patch for improved versions of xeno-info/load/config/test
> as well as a Ruby test script for the maintainers.
> 
> The modified scripts pass the test for most options to xeno-test. The only 
> exception is "-v" for verbose. As I have no clue how/what this option should 
> do I left it as is.

The getopt list lacked the "v" option letter. Fixed, thanks.

> 
> The modifications are:
> - busybox supported
> - The load jobs (dd) are killed correctly
> - two new lines for my buildbot "xeno-test started" and "xeno-test finished"
> - the options -m / -L are no longer silent, but also echo to standard out.
>   If the old behaviour is preferred, it would be simple to restore it. But
>   the current behaviour is easier to debug and fits better into the buildbot
>   logs.

Logs look ok.

> As rc3 is still not out, I do not see any reason not to check in this patch.
> But I would like to see this happen soon, as I will not be able to fix errors 
> after next Wednesday.
> 

Applied, thanks.

-- 
Philippe.





Re: [Xenomai-core] [PATCH] provide ipipe_tracing via nucleus interface

2006-06-23 Thread Philippe Gerum
On Fri, 2006-06-23 at 11:27 +0200, Jan Kiszka wrote:
> >> Index: include/nucleus/ipipe_trace.h
> >> ===
> > 
> > This file should go to include/asm-generic/ since it depends on the
> > underlying real-time enabler (i.e. I-pipe). This way, there would be
> > no need to check for __XENO_SIM__.
> 
> Ok, but how is the user supposed to include the API then? Or can we drag
> it in implicitly somehow? That would be even nicer I think.
> 

asm-generic/trace.h would be dragged in by include/asm-<arch>/hal.h, so
that we get the proper double dependency on both the architecture, if it
does provide the tracing feature, and on the HAL, which in turn
requires the I-pipe.

> > 
> >> --- include/nucleus/ipipe_trace.h  (Revision 0)
> >> +++ include/nucleus/ipipe_trace.h  (Revision 0)
> >> @@ -0,0 +1,82 @@
> >> +/*
> >> Index: src/testsuite/latency/latency.c
> >> ===
> >> --- src/testsuite/latency/latency.c (revision 1252)
> >> +++ src/testsuite/latency/latency.c (working copy)
> >> @@ -12,6 +12,7 @@
> >>  #include 
> >>  #include 
> >>  #include 
> >> +#include 
> >>  
> >>  RT_TASK latency_task, display_task;
> >>  
> >> @@ -130,8 +131,7 @@ void latency (void *cookie)
> >>  
> >>  if (freeze_max && (dt > gmaxjitter) && !(finished || warmup))
> >>  {
> >> -rt_dev_ioctl(benchdev, RTBNCH_RTIOC_REFREEZE_TRACE,
> >> - rt_timer_tsc2ns(dt));
> >> +ipipe_trace_refreeze(rt_timer_tsc2ns(dt));
> > 
> > I don't like the idea of spreading ipipe-something symbols and
> > dependencies all over the entire source code, including the generic part,
> > especially considering that at some point we are going to have
> > preempt-rt as the other possible real-time enabler, the way
> > Adeos is used now. We should use something more generic.
> > "tracer*" would be ok, I guess.
> > 
> 
> I'm going to rescan Ingo's API to define a common interface where
> feasible (their own user space API seems to hide behind gettimeofday).

Thanks. We should definitely treat the underlying real-time
enabling technology Xenomai runs its skins over as an interchangeable
component.

-- 
Philippe.





[Xenomai-core] Condition variable

2006-06-23 Thread ROSSIER Daniel

Hi all,

Is there a particular reason to enforce the queuing policy to the
"highest priority" thread for a condition variable?

I would expect that we can also specify a FIFO queue in the creation
mode of such an object, as is the case for other synchronization
objects.

Thanks for your help.

Daniel








Re: [Xenomai-core] Condition variable

2006-06-23 Thread Philippe Gerum
On Fri, 2006-06-23 at 11:48 +0200, ROSSIER Daniel wrote:
>  
> 
> Hi all,
> 
>  
> 
> Is there a particular reason to enforce the queuing policy to the
> "highest priority" thread for a condition variable?
> 

The reason was that condvar support for the native skin should closely
follow the POSIX behaviour, which uses the scheduling policy to define
the pending order of threads blocked on mutexes/condvars.

> I would expect that we can also specify a FIFO queue in the creation
> mode of such an object, as is the case
> for other synchronization objects.

The problem I see is that a condvar must be associated with a mutex, and
since a mutex must be pended on by priority and exhibits priority
inheritance by construction, I would find it rather strange to have one
part of the synchronization combo behave in fifo mode whilst the other
part enforces prio mode.

> 
>  
> 
> Thanks for your help.
> 
>  
> 
> Daniel
> 
>  
> 
> 
-- 
Philippe.





Re: [Xenomai-core] [PATCH] provide ipipe_tracing via nucleus interface

2006-06-23 Thread Jan Kiszka
Philippe Gerum wrote:
> On Fri, 2006-06-23 at 11:27 +0200, Jan Kiszka wrote:
 Index: include/nucleus/ipipe_trace.h
 ===
>>> This file should go to include/asm-generic/ since it depends on the
>>> underlying real-time enabler (i.e. I-pipe). This way, there would be
>>> no need to check for __XENO_SIM__.
>> Ok, but how is the user supposed to include the API then? Or can we drag
>> it in implicitly somehow? That would be even nicer I think.
>>
> 
> asm-generic/trace.h would be dragged in by include/asm-<arch>/hal.h, so
> that we get the proper double dependency on both the architecture, if it
> does provide the tracing feature, and on the HAL, which in turn
> requires the I-pipe.

Err, the asm-<arch>/hal.h headers are pure kernel space headers, aren't
they? But we need user space support as well.

> 
 --- include/nucleus/ipipe_trace.h  (Revision 0)
 +++ include/nucleus/ipipe_trace.h  (Revision 0)
 @@ -0,0 +1,82 @@
 +/*
 Index: src/testsuite/latency/latency.c
 ===
 --- src/testsuite/latency/latency.c(Revision 1252)
 +++ src/testsuite/latency/latency.c(Arbeitskopie)
 @@ -12,6 +12,7 @@
  #include 
  #include 
  #include 
 +#include 
  
  RT_TASK latency_task, display_task;
  
 @@ -130,8 +131,7 @@ void latency (void *cookie)
  
  if (freeze_max && (dt > gmaxjitter) && !(finished || warmup))
  {
 -rt_dev_ioctl(benchdev, RTBNCH_RTIOC_REFREEZE_TRACE,
 - rt_timer_tsc2ns(dt));
 +ipipe_trace_refreeze(rt_timer_tsc2ns(dt));
>>> I don't like the idea of spreading ipipe-something symbols and
>>> dependencies all over the entire source code, including the generic part,
>>> especially considering that at some point we are going to have
>>> preempt-rt as the other possible real-time enabler, the way
>>> Adeos is used now. We should use something more generic.
>>> "tracer*" would be ok, I guess.
>>>
>> I'm going to rescan Ingo's API to define a common interface where
>> feasible (their own user space API seems to hide behind gettimeofday).
> 
> Thanks. We should definitely consider the underlying the real-time
> enabling technology Xenomai uses to run skins over as an interchangeable
> component.
> 

Unfortunately, Ingo's semantics are still quite different, partly
incompatible (and unwieldy as well - you can only trace one event, not
the worst case of a series as with the ipipe tracer). So we may need
special user code to deal with these differences (like in cyclictest).

The most critical point is that Ingo's tracer nukes any timing
guarantees when it fires. In contrast, the ipipe variant is RT-safe.

Anyway, let's try to go for these names and semantics (implementation:
ipipe / Ingo):

xntrace_max_begin(unsigned long v):
Mark the worst-case path beginning together with an arbitrary
ulong. This path is separated from the user path below.

ipipe_trace_begin(v); / -

xntrace_max_end(unsigned long v):
Mark the worst-case path end together with an arbitrary ulong.
The path with maximum length can be obtained from the logs.

ipipe_trace_end(v); / -

xntrace_user_start(void):
Start the user trace path.

ipipe_trace_frozen_reset(); / user_trace_start();

xntrace_user_stop(unsigned long v):
Record an arbitrary ulong and stop the user trace path. The
result is kept until the next invocation of
xntrace_user_start() or xntrace_user_freeze(..., 0).

ipipe_trace_freeze(v); /
trace_special(v, 0, 0); user_trace_stop();

xntrace_user_freeze(unsigned long v, int once):
Record an arbitrary ulong and freeze the user trace path. If
once is 0, the tracer keeps running after the freeze so that
always the latest result can be obtained. Otherwise, this call
is equivalent to xntrace_user_stop(v).

if (once)
xntrace_user_stop(v);
else
ipipe_trace_frozen_reset(); ipipe_trace_freeze(v) / -

xntrace_special(unsigned char id, unsigned long v):
Record an arbitrary uchar + ulong.

ipipe_trace_special(id, v); / trace_special(v, 0, id);

xntrace_special_u64(unsigned char id, unsigned long long v):
Record an arbitrary uchar + u64.

ipipe_trace_special(id, v >> 32);
ipipe_trace_special(id, v & 0xffffffff); /
trace_special_u64(v, id);

I chose xntrace as the prefix because Xenomai is used here to provide an
abstraction layer. Do you agree with the naming? Then this API should be
made available symmetrically in both kernel and user space. Still, the
question is via which header file...

Jan




Re: [Xenomai-core] [BUG] oops on skincall without nucleus being loaded

2006-06-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > Jan Kiszka wrote:
 > > Hi,
 > > 
 > > wondering why suddenly things crash on invoking the latency test, I
 > > realised that I turned the nucleus into a module which was not yet
 > > loaded. Here is the oops in this case:
 > 
 > Correction: the nucleus was still compiled in, the native skin was missing.

After a more thorough inspection, the problem was the xnshadow_p() call
at the very end of do_losyscall_event. When no pod is loaded, this call
oopses.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] More testcases for vxworks skin task handling?

2006-06-23 Thread Gilles Chanteperdrix
Niklaus Giger wrote:
 > Hi Gilles
 > 
 > I did some more testing of how the vxWorks skin handles 
 > taskSpawn/taskInit and taskName.
 > 
 > I did not discover any differences between running it on my board under 
 > vxworks and using the Xenomai simulator on my PowerBook.
 > 
 > There are probably some tests that you consider redundant, so please 
 > feel free to minimize it.
 > 
 > It is very nice to have an infrastructure where I can easily add testcases 
 > and 
 > verify that everything is okay!

Patch applied, thanks.


-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [BUG] oops on skincall without nucleus being loaded

2006-06-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 > Jan Kiszka wrote:
 > > Hi,
 > > 
 > > wondering why suddenly things crash on invoking the latency test, I
 > > realised that I turned the nucleus into a module which was not yet
 > > loaded. Here is the oops in this case:
 > 
 > Correction: the nucleus was still compiled in, the native skin was missing.

After some investigation, the problem appears to be that the nucleus
assumes that user-space skins will issue a bind syscall before using a
skin, and that the user-space RTDM library does not exit if binding
fails. So, there are two ways we can fix this problem:
- either we make the nucleus paranoid and have it handle gracefully
  syscalls to non-loaded tables;
- or we make the user-space RTDM library behave like other skins and
  exit if the interface is not bound.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [BUG] oops on skincall without nucleus being loaded

2006-06-23 Thread Philippe Gerum
On Fri, 2006-06-23 at 15:41 +0200, Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>  > Jan Kiszka wrote:
>  > > Hi,
>  > > 
>  > > wondering why suddenly things crash on invoking the latency test, I
>  > > realised that I turned the nucleus into a module which was not yet
>  > > loaded. Here is the oops in this case:
>  > 
>  > Correction: the nucleus was still compiled in, the native skin was missing.
> 
> After some investigation, the problem appears to be that the nucleus
> assumes that user-space skins will issue a bind syscall before using a
> skin, and that the user-space RTDM library does not exit if binding
> fails.

I don't get it: the muxid should be invalid then(?)

>  So, there are two ways we can fix this problem:
> - either we make the nucleus paranoid and have it handle gracefully
>   syscalls to non-loaded tables;
> - or we make the user-space RTDM library behave like other skins and
>   exit if the interface is not bound.

-- 
Philippe.





Re: [Xenomai-core] [BUG] oops on skincall without nucleus being loaded

2006-06-23 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 > On Fri, 2006-06-23 at 15:41 +0200, Gilles Chanteperdrix wrote:
 > > Jan Kiszka wrote:
 > >  > Jan Kiszka wrote:
 > >  > > Hi,
 > >  > > 
 > >  > > wondering why suddenly things crash on invoking the latency test, I
 > >  > > realised that I turned the nucleus into a module which was not yet
 > >  > > loaded. Here is the oops in this case:
 > >  > 
 > >  > Correction: the nucleus was still compiled in, the native skin was 
 > > missing.
 > > 
 > > After some investigation, the problem appears to be that the nucleus
 > > assumes that user-space skins will issue a bind syscall before using a
 > > skin, and that the user-space RTDM library does not exit if binding
 > > fails.
 > 
 > I don't get it: the muxid should be invalid then(?)

Yes, the muxid is -1 and everything works fine; it was just a
misinterpretation. When syscalls are issued with a fixed muxid for which
there is no corresponding interface, the nucleus crashes,
but that is acceptable: user-space interfaces should issue an
__xn_sys_bind syscall first.

-- 


Gilles Chanteperdrix.



Re: [Xenomai-core] [BUG] oops on skincall without nucleus being loaded

2006-06-23 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 > misinterpretation. When syscalls are issued with a fixed muxid for which
 > there is no corresponding interface, the nucleus crashes,
 > but that is acceptable: user-space interfaces should issue an
 > __xn_sys_bind syscall first.

This is not even possible, since the invalid syscalls go through
do_hisyscall_event first and get handled there.

-- 


Gilles Chanteperdrix.
