Re: [Xenomai-help] 2.6 kernel module with math functions

2005-11-25 Thread Cedric Herreman
OK,

I made an extra math module, copying some of the source code from newlib for
the functions I needed. It works.

Another question: if I create an application (instead of a kernel module)
that starts a real-time thread, can I then use math functions inside the
real-time running part? In the latency example, the sqrt function is used for
displaying the results of the latency test. This is outside the real-time
task. Is it possible to use this call in the real-time function? Or any other
library function (that is not performing system calls)?

Cedric.

Gilles Chanteperdrix [EMAIL PROTECTED] wrote:

 Cedric Herreman wrote:
   Hello,

   I am porting a 2.4 RTAI kernel module to Xenomai 2.0 on kernel 2.6. I used
   some basic math functions in the original module. This is posing problems
   for me now.

   In the module source I include <math.h>. I add -I/usr/include to the
   compiler flags and also "-ffast-math -mhard-float".

   If I compile this, I get warnings about double definitions of
   "__attribute_pure__" and "__attribute_used__".

   If I insert the kernel module, I get an error message:
   "Xenomai: Invalid use of FPU in Xenomai context at" + probably the address
   of the instruction where the math function is called.

   Can anyone give me a hint? Thanks.

 You can only use floating point operations from real-time thread contexts,
 not from module initialization and finalization routines, and you have to
 signal Xenomai, when creating kernel-space real-time threads, that the
 thread will be allowed to use the FPU. For the RTAI skin, this is what the
 6th argument of the rt_task_init function is for.

 There is currently no math library module in Xenomai. So, the answer is that
 you have to avoid math functions, or make a xeno_math module, the way it is
 done in RTAI, i.e. using a math library such as the one made by Sun and used
 by FreeBSD, or one among the various libcs available. We once discussed this
 with Philippe, and a good candidate seemed to be newlib at that time:

 http://sourceware.org/newlib/

 Looking at newlib sources, it seems that some of its contents come from the
 Sun library too.

 --
  Gilles Chanteperdrix.
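
A minimal sketch of what Gilles describes for kernel space, assuming the
native skin's T_FPU mode bit (the analogue of RTAI's uses_fpu argument to
rt_task_init); the task and function names are illustrative, not taken from
this thread:

#include <native/task.h>

static RT_TASK fp_task;

static void fp_task_body(void *cookie)
{
 /* FPU use is legal here: real-time thread context, task created with T_FPU. */
 volatile double x = 2.0;
 x *= 1.5;
}

static int start_fp_task(void)
{
 /* T_FPU tells the nucleus that this kernel-space task may use the FPU. */
 int err = rt_task_create(&fp_task, "fp_task", 8192, 10, T_FPU);
 if (!err)
  err = rt_task_start(&fp_task, fp_task_body, NULL);
 return err;
}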


[Xenomai-help] Interrupt processing crashes in semTake

2005-11-25 Thread Hans-J. Ude
I'm porting a vxWorks application to Xenomai at the moment. When it comes
to interrupt handling, there is nothing in the vx skin to handle that (did I
overlook something?). First I tried to handle it down in the Xenomai
layer, but problems occurred and someone advised me to use a native interrupt
task using rt_intr_wait. I did so, but the program segfaults after a minute
or so. I've put a sigsegv handler with a stack backtrace function into the
code. That shows the crash happens in the internals of semTake. Is it
problematic to make skin calls (semGive in my case) from inside the native
irq handler? Is there a bug somewhere in the UVM, the test program, or my settings?
In other words: how are interrupts handled properly from the vxWorks skin?

I've tracked the problem down to a test program which is appended to this
mail. I set up the irq number by #define to the irq of my network card. Then
made traffic by copying a large amount of files to the target system. After
about 3 interrupts the crash happens most times but this value ranges
from about 1000 to 5.

No clue what could be wrong. I'm using kernel 2.6.13 and Xenomai 2.0.1 with
the included ipipe patch on a PII MMX running at 266 MHz.

The backtrace function is the only debugging tool I have, since remote
debugging with gdb/gdbserver is still not working properly, but I'm going to
describe that in detail in another post. Here comes the program:

TIA,
Hans

P.S.: Why is root_thread_exit() never called?

/** TOP OF FILE */
/* irqtest.c */

#include <native/task.h>
#include <native/intr.h>
#include <vxworks/vxworks.h>
#include <execinfo.h>


// Set this value to the IRQ number you want to monitor
// The network card is a good candidate here
#define TEST_IRQ  9

#define VX_STACKSIZE 8192
#define VX_PRIO_HIGH 10
#define VX_PRIO_MID  100
#define VX_PRIO_LOW  150

void T_vxMain();
void T_irqClient();
void T_irqInfo();

void print_trace (void);
void sigsegv_handler(int sig);

int  semCount;
SEM_ID semIrq;

#define RCC_IRQ_STKSIZE  0 // default stacksize
#define RCC_IRQ_TASKPRIO 99 // highest RT priority
#define RCC_IRQ_TASKMODE 0 // no flags

int  irq_start(int irq);
void irq_stop();
void irq_server(void *cookie);

RT_INTR intr_obj;
RT_TASK intr_task;

/** UVM section */
//
int root_thread_init()
{
 signal(SIGSEGV,sigsegv_handler);

 taskSpawn ("T_vxMain", VX_PRIO_LOW, VX_FP_TASK, VX_STACKSIZE,
  (FUNCPTR) T_vxMain, 0,0,0,0,0,0,0,0,0,0);
 return 0;
}

void root_thread_exit()
{
 printf("root_thread_exit() called\n");
 irq_stop();
}

//
void T_vxMain()
{
 semIrq = semBCreate(SEM_Q_PRIORITY, SEM_EMPTY);
// sysClkRateSet(1000); // 1 ms

 taskSpawn ("T_irqClient", VX_PRIO_HIGH, VX_FP_TASK, VX_STACKSIZE,
  (FUNCPTR) T_irqClient, 0,0,0,0,0,0,0,0,0,0);

 taskDelay(500);
 int err = irq_start(TEST_IRQ);
 if (err)
 {
  irq_stop();
  exit(err);
 }

 taskSpawn ("T_irqInfo", VX_PRIO_MID, VX_FP_TASK, VX_STACKSIZE,
  (FUNCPTR) T_irqInfo, 0,0,0,0,0,0,0,0,0,0);
 taskSuspend(0);
}

//
void T_irqClient()
{
 while(1)
 {
  semTake(semIrq, WAIT_FOREVER);
  ++semCount;
 }
}

//
void T_irqInfo()
{
 RT_INTR_INFO info;
 int runs = 0;

 while(1)
 {
  rt_intr_inquire(&intr_obj, &info);
  printf ("%d runs, IRQ count: %ld, semCount: %d\n",
 ++runs, info.hits, semCount);
  taskDelay(1000);
 }
}

/* Debugging section */
//
/* Obtain a backtrace and print it to stdout. */
void print_trace (void)
{
   void *array[10];
   size_t size;
   char **strings;
   size_t i;

   size = backtrace (array, 10);
   strings = backtrace_symbols (array, size);
   printf ("Obtained %zd stack frames.\n", size);

   for (i = 0; i < size; i++)
  printf ("%s\n", strings[i]);

   free (strings);
}

//
void sigsegv_handler(int sig)
{
 print_trace ();
 printf ("\nBad incident happened in Task: %s\n",
taskName(taskIdSelf()));
  irq_stop();
 signal(SIGSEGV,SIG_DFL);
}

/* Interrupt section */
//
int irq_start(int irq)
{
 int err = rt_intr_create(&intr_obj, irq, I_PROPAGATE);

 if (! err)
 {
  err = rt_task_create(&intr_task, "T_irqServer", RCC_IRQ_STKSIZE,
   RCC_IRQ_TASKPRIO, RCC_IRQ_TASKMODE);
  if (! err)
  {
   err = rt_task_start (&intr_task, irq_server, NULL);
   err = rt_intr_enable (&intr_obj);
  }
  else
  {
   printf ("Error rt_task_start() = %d\n", err);
   return err;
  }
 }
 else
 {
  printf (Error rt_intr_create(%d) = 

Re: [Xenomai-help] Interrupt processing crashes in semTake

2005-11-25 Thread Philippe Gerum

Hans-J. Ude wrote:

I'm porting a vxWorks application to Xenomai at the moment. When it comes
to interrupt handling, there is nothing in the vx skin to handle that (did I
overlook something?). First I tried to handle it down in the Xenomai
layer, but problems occurred and someone advised me to use a native interrupt
task using rt_intr_wait. I did so, but the program segfaults after a minute
or so. I've put a sigsegv handler with a stack backtrace function into the
code. That shows the crash happens in the internals of semTake. Is it
problematic to make skin calls (semGive in my case) from inside the native
irq handler?


Yes, it's indeed a problem. When calling semTake from the in-kernel 
VxWorks skin, the invoked code expects a VxWorks task to be current in 
order to put it to sleep, but in your case, it's a native skin task. 
Since both TCBs have obviously different memory layouts, the segfault is 
inevitable. The same goes when calling semGive from a native task, since 
the latter code needs to fiddle with the caller's internals. That's a 
limitation of possible skin interactions.


In the UVM case, the situation is even worse; your application is 
sandboxed in a Linux process with local copies of the VxWorks skin and 
nucleus. The VxWorks pseudo-threads created in the sandboxed environment 
are actually supported by UVM skin threads in kernel-space, which are 
not compatible with native skin threads either.


Well, this is going to be a recurring issue, so the only way out is to 
extend the UVM module in order to export an interrupt API, the way the 
POSIX and native skins already do.
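
For reference, the pattern under discussion is a native-skin interrupt server
blocking on rt_intr_wait(). A minimal, hedged sketch of such a loop follows;
it is not the poster's (truncated) irq_server(), the names are illustrative,
and it deliberately signals a native RT_SEM instead of a VxWorks SEM_ID,
since the latter is exactly what the explanation above rules out:

#include <native/intr.h>
#include <native/sem.h>
#include <native/timer.h>

extern RT_INTR intr_obj;  /* created elsewhere with rt_intr_create() */
static RT_SEM irq_sem;    /* native-skin semaphore, illustrative only */

void irq_server_loop(void *cookie)
{
 for (;;)
 {
  /* Block until the IRQ fires; returns the number of pending hits,
     or a negative error code (e.g. when the object is deleted). */
  int n = rt_intr_wait(&intr_obj, TM_INFINITE);
  if (n < 0)
   break;
  rt_sem_v(&irq_sem); /* wake the consumer through a native object only */
 }
}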


 Is there a bug somewhere in the UVM, the test program, or my settings?

In other words: how are interrupts handled properly from the vxWorks skin?

I've tracked the problem down to a test program which is appended to this
mail. I set up the irq number by #define to the irq of my network card. Then
made traffic by copying a large amount of files to the target system. After
about 3 interrupts the crash happens most times but this value ranges
from about 1000 to 5.

No clue what could be wrong. I'm using kernel 2.6.13 and Xenomai 2.0.1 with
the included ipipe patch on a PII MMX running at 266 MHz.

The backtrace function is the only debugging tool I have, since remote
debugging with gdb/gdbserver is still not working properly, but I'm going to
describe that in detail in another post. Here comes the program:

TIA,
Hans

P.S.: Why is root_thread_exit() never called?

/** TOP OF FILE */
/* irqtest.c */

#include <native/task.h>
#include <native/intr.h>
#include <vxworks/vxworks.h>
#include <execinfo.h>


// Set this value to the IRQ number you want to monitor
// The network card is a good candidate here
#define TEST_IRQ  9

#define VX_STACKSIZE 8192
#define VX_PRIO_HIGH 10
#define VX_PRIO_MID  100
#define VX_PRIO_LOW  150

void T_vxMain();
void T_irqClient();
void T_irqInfo();

void print_trace (void);
void sigsegv_handler(int sig);

int  semCount;
SEM_ID semIrq;

#define RCC_IRQ_STKSIZE  0 // default stacksize
#define RCC_IRQ_TASKPRIO 99 // highest RT priority
#define RCC_IRQ_TASKMODE 0 // no flags

int  irq_start(int irq);
void irq_stop();
void irq_server(void *cookie);

RT_INTR intr_obj;
RT_TASK intr_task;

/** UVM section */
//
int root_thread_init()
{
 signal(SIGSEGV,sigsegv_handler);

 taskSpawn ("T_vxMain", VX_PRIO_LOW, VX_FP_TASK, VX_STACKSIZE,
  (FUNCPTR) T_vxMain, 0,0,0,0,0,0,0,0,0,0);
 return 0;
}

void root_thread_exit()
{
 printf("root_thread_exit() called\n");
 irq_stop();
}

//
void T_vxMain()
{
 semIrq = semBCreate(SEM_Q_PRIORITY, SEM_EMPTY);
// sysClkRateSet(1000); // 1 ms

 taskSpawn ("T_irqClient", VX_PRIO_HIGH, VX_FP_TASK, VX_STACKSIZE,
  (FUNCPTR) T_irqClient, 0,0,0,0,0,0,0,0,0,0);

 taskDelay(500);
 int err = irq_start(TEST_IRQ);
 if (err)
 {
  irq_stop();
  exit(err);
 }

 taskSpawn ("T_irqInfo", VX_PRIO_MID, VX_FP_TASK, VX_STACKSIZE,
  (FUNCPTR) T_irqInfo, 0,0,0,0,0,0,0,0,0,0);
 taskSuspend(0);
}

//
void T_irqClient()
{
 while(1)
 {
  semTake(semIrq, WAIT_FOREVER);
  ++semCount;
 }
}

//
void T_irqInfo()
{
 RT_INTR_INFO info;
 int runs = 0;

 while(1)
 {
  rt_intr_inquire(&intr_obj, &info);
  printf ("%d runs, IRQ count: %ld, semCount: %d\n",
 ++runs, info.hits, semCount);
  taskDelay(1000);
 }
}

/* Debugging section */
//
/* Obtain a backtrace and print it to stdout. */
void print_trace (void)
{
   void *array[10];
   size_t size;
   char **strings;
   size_t i;

   size = backtrace (array, 10);
   

Re: [Xenomai-help] Priorities in the vxWorks skin

2005-11-25 Thread Philippe Gerum

Hans-J. Ude wrote:

When I create some tasks under the vx skin with different priorities and
then look at the /proc/xeno/sched file, they are all listed with the
value 2. Shouldn't priorities be mapped to the internal priority scale?
Of course I can't expect the original vx values there, but nevertheless
they shouldn't all be the same. I've created an rtai interrupt handler
task with priority 99 too. That one appears with the value 100 in the
list.



The explanation is available here:
http://download.gna.org/xenomai/documentation/tags/v2.0.1/pdf/Introduction-to-UVMs.pdf

In short, in the context of the UVM, a user-space copy of the nucleus runs 
embodied in your Linux process; this is the one that enforces the VxWorks 
priority levels. To this end, it only uses three scheduling levels from the 
real nucleus in kernel space it communicates with in order to schedule the 
application threads: one for the interrupt services, one for the idle 
thread, and the final one for the thread that should be running 
application-wise. Non-running threads are simply suspended from the 
in-kernel nucleus's point of view.



regards,
Hans






--

Philippe.



Re: [Xenomai-help] Newbie question about MVM

2005-11-25 Thread Philippe Gerum

Ashri, Sarit wrote:

Hi,
I'm new to Linux and Xenomai so excuse me if my question is trivial.
Can I install and use the MVM on a Linux 2.4 workstation (Red Hat RHEL
2.1AS) 
that is not patched by Adeos?




Yes. The simulator does not require any kernel support.

I'm having trouble installing it and I don't know if it's because I'm
doing something wrong,
or because it's not supposed to work...

I've searched the internet and could not find any answers.
My target board is PPC and I use a patched Linux 2.6.13.4 with it +
Xenomai 2.0.1, cross-compiled with the flags: ARCH=ppc CROSS_COMPILE=ppc_82xx.

Thanks a lot in advance,
Sarit.







--

Philippe.




Re: [Xenomai-help] 2.6 kernel module with math functions

2005-11-25 Thread Philippe Gerum

Cedric Herreman wrote:

OK,

I made an extra math module, copying some of the source code from newlib 
for the functions I needed. It works.


Another question: if I create an application (instead of a kernel 
module) that starts a real-time thread, can I then use math functions 
inside the real-time running part?


In the latency example, the sqrt function is used for displaying the 
results of the latency test. This is outside the real-time task. Is it 
possible to use this call in the real-time function? Or any other 
library function (that is not performing system calls)?




Yes. RT threads in user-space have their own FPU context managed by Xenomai.
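
A minimal user-space sketch of this, assuming the native skin library and
libm are linked in (task and variable names are illustrative, not from this
thread):

#include <math.h>
#include <unistd.h>
#include <sys/mman.h>
#include <native/task.h>

static RT_TASK calc_task;
static volatile double result;

static void calc_body(void *cookie)
{
 double sum = 0.0;
 int i;

 /* Plain libm calls are fine here: Xenomai manages the FPU context
    of user-space real-time threads. */
 for (i = 1; i <= 1000; i++)
  sum += sqrt((double)i);

 result = sum;
}

int main(void)
{
 int err;

 mlockall(MCL_CURRENT | MCL_FUTURE); /* usual precaution for RT processes */

 err = rt_task_create(&calc_task, "calc", 8192, 50, 0);
 if (!err)
  err = rt_task_start(&calc_task, calc_body, NULL);

 pause(); /* keep the process alive while the RT task runs */
 return err;
}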


Cedric.

Gilles Chanteperdrix [EMAIL PROTECTED] wrote:

Cedric Herreman wrote:
  Hello,
 
  I am porting a 2.4 RTAI kernel module to Xenomai 2.0 on kernel 2.6.
I used some basic math functions in the original module. This is
posing problems for me now.
 
  In the module source I include <math.h>. I add -I/usr/include to the
compiler flags and also "-ffast-math -mhard-float".
 
  If I compile this, I get warnings about double definitions of
"__attribute_pure__" and "__attribute_used__".
 
  If I insert the kernel module, I get an error message:
  "Xenomai: Invalid use of FPU in Xenomai context at" + probably
the address of the instruction where the math function is called.
 
  Can anyone give me a hint? Thanks.

You can only use floating point operations from real-time threads
contexts, not from module initialization and finalization routines, and
you have to signal Xenomai, when creating kernel space real-time
threads, that the thread will be allowed to use FPU. For the RTAI skin,
this is what the rt_task_init function 6th argument is for.

There is currently no math library module in Xenomai. So, the answer is
that you have to avoid math functions, or make a xeno_math module, the
way it is done in RTAI, i.e. using a math library such as the one made
by Sun and used by FreeBSD, or one among the various libcs available.
We once discussed this with Philippe, and a good candidate seemed to be
newlib at that time:

http://sourceware.org/newlib/

Looking at newlib sources, it seems that some of its contents come from
the Sun library too.

-- 



Gilles Chanteperdrix.












--

Philippe.



Re: [Xenomai-help] Interrupt processing crashes in semTake

2005-11-25 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
  Hans-J. Ude wrote:
   I'm porting a vxWorks application to Xenomai at the moment. When it comes
   to interrupt handling, there is nothing in the vx skin to handle that
   (did I overlook something?). First I tried to handle it down in the Xenomai
   layer, but problems occurred and someone advised me to use a native interrupt
   task using rt_intr_wait. I did so, but the program segfaults after a minute
   or so. I've put a sigsegv handler with a stack backtrace function into the
   code. That shows the crash happens in the internals of semTake. Is it
   problematic to make skin calls (semGive in my case) from inside the native
   irq handler?
  
  Yes, it's indeed a problem. When calling semTake from the in-kernel 
  VxWorks skin, the invoked code expects a VxWorks task to be current in 
  order to put it to sleep, but in your case, it's a native skin task. 
  Since both TCBs have obviously different memory layouts, the segfault is 
  inevitable. The same goes when calling semGive from a native task, since 
  the latter code needs to fiddle with the caller's internals. That's a 
  limitation of possible skin interactions.

Looking at the code, it seems that semTake should return an error when
called from an ISR, and semGive should work when called from an ISR, for
semaphores created with semBCreate or semCCreate, but not for those
created with semMCreate. 

But there is an issue with wind_errnoset not checking whether the value
returned by wind_current_task is NULL; this should explain the
segfault, but will not make semTake work from an ISR.

Could you check whether the attached patch removes the segfault?

It should be applied to the vxworks skin's defs.h file; its location depends
on the Xenomai branch you are using.

-- 


Gilles Chanteperdrix.
Index: defs.h
===
--- defs.h  (revision 153)
+++ defs.h  (working copy)
@@ -55,7 +55,11 @@
 {   \
 if(!xnpod_asynch_p() && \
xnthread_test_flags(xnpod_current_thread(), IS_WIND_TASK))   \
-wind_current_task()->errorStatus = value;   \
+{   \
+wind_task_t *_cur = wind_current_task();\
+if (_cur)   \
+_cur->errorStatus = value;  \
+}   \
 } while(0)