Re: [Xenomai] Queue corruption from XDDP in Xenomai 2.6.1

2012-09-18 Thread Gilles Chanteperdrix
On 09/09/2012 01:03 PM, Philippe Gerum wrote:

> On 09/06/2012 07:53 AM, Doug Brunner wrote:
>> It looks like the bug I wrote about back in June still exists in Xenomai
>> 2.6.1 (with Linux 3.2.21). I ran the same test case (an RT thread opens
>> an XDDP socket, then a Linux thread opens its end of the pipe, then the
>> RT thread stops, then with the Linux thread still holding its end of the
>> pipe another RT thread tries to open an XDDP socket with the same minor
>> number). With Xenomai queue and I-pipe debugging enabled, I got a report
>> of a corrupted queue. I've attached my config, test case, and serial
>> console log.
>>
>> So far I haven't found anything in the XDDP or underlying xnpipe_* code
>> that would suggest why this is happening. However something is
>> definitely going wrong, since xnpipe_minor_free should not be called
>> until my Linux task closes its end of the pipe, so the call by the
>> second RT thread to open the pipe should fail with -EBUSY. Any thoughts
>> on why this might be happening?
>>
> 
> Yes, please have a look at the commit log there:
> http://git.xenomai.org/?p=xenomai-2.6.git;a=commit;h=283c5f6eae1d1d7c65073e2f30fd40abdcf2c1ca
> 
> This patch should fix the issue raised by the test case you sent
> (actually, it does, it was very useful to spot the problem - thanks for
> this).


Hi Philippe, 

I am using a test case which should be about the same as Doug's; however, when
running the test case twice, the second run fails at bind() with EADDRINUSE.
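
In outline (for readers without the original attachments), the sequence
being exercised is roughly the following; this is a hypothetical,
simplified, single-threaded sketch built on the RTDM IPC interface, not
the attached xddp_test.c:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <rtdm/rtipc.h>

#define XDDP_MINOR 0	/* fixed minor/port under test (arbitrary choice) */

static int bind_xddp(void)
{
	struct sockaddr_ipc saddr;
	int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP);

	memset(&saddr, 0, sizeof(saddr));
	saddr.sipc_family = AF_RTIPC;
	saddr.sipc_port = XDDP_MINOR;
	if (bind(s, (struct sockaddr *)&saddr, sizeof(saddr)) < 0) {
		printf("bind: %s\n", strerror(errno));
		close(s);
		return -1;
	}
	return s;
}

int main(void)
{
	int s, fd;

	s = bind_xddp();		/* RT endpoint binds the minor */
	fd = open("/dev/rtp0", O_RDWR);	/* Linux side opens its end of the pipe */
	close(s);			/* RT endpoint goes away */

	/* With fd still open, rebinding the same minor should fail
	 * (Doug's case expects -EBUSY here; the EADDRINUSE above shows up
	 * when the whole test is run a second time). */
	s = bind_xddp();
	if (s >= 0)
		close(s);
	close(fd);
	return 0;
}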

The testcase (as a patch to be compiled as part of the regression suite):

diff --git a/src/testsuite/regression/posix/Makefile.am b/src/testsuite/regression/posix/Makefile.am
index 2107482..bd3c1cf 100644
--- a/src/testsuite/regression/posix/Makefile.am
+++ b/src/testsuite/regression/posix/Makefile.am
@@ -4,7 +4,7 @@ noinst_HEADERS = check.h
 
 CCLD = $(top_srcdir)/scripts/wrap-link.sh $(CC)
 
-tst_PROGRAMS = leaks shm mprotect nano_test
+tst_PROGRAMS = leaks shm mprotect nano_test xddp_test
 
 CPPFLAGS = $(XENO_USER_CFLAGS) \
-I$(top_srcdir)/include/posix \
diff --git a/src/testsuite/regression/posix/Makefile.in b/src/testsuite/regression/posix/Makefile.in
index 9f77e38..da24e2f 100644
--- a/src/testsuite/regression/posix/Makefile.in
+++ b/src/testsuite/regression/posix/Makefile.in
@@ -37,7 +37,7 @@ build_triplet = @build@
 host_triplet = @host@
 target_triplet = @target@
 tst_PROGRAMS = leaks$(EXEEXT) shm$(EXEEXT) mprotect$(EXEEXT) \
-   nano_test$(EXEEXT)
+   nano_test$(EXEEXT) xddp_test$(EXEEXT)
 subdir = src/testsuite/regression/posix
 DIST_COMMON = $(noinst_HEADERS) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in
@@ -78,6 +78,11 @@ shm_OBJECTS = shm.$(OBJEXT)
 shm_LDADD = $(LDADD)
 shm_DEPENDENCIES = ../../../skins/posix/libpthread_rt.la \
../../../skins/common/libxenomai.la
+xddp_test_SOURCES = xddp_test.c
+xddp_test_OBJECTS = xddp_test.$(OBJEXT)
+xddp_test_LDADD = $(LDADD)
+xddp_test_DEPENDENCIES = ../../../skins/posix/libpthread_rt.la \
+   ../../../skins/common/libxenomai.la
 DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir)/src/include
 depcomp = $(SHELL) $(top_srcdir)/config/depcomp
 am__depfiles_maybe = depfiles
@@ -90,8 +95,8 @@ LTCOMPILE = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \
 LINK = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \
--mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) \
$(LDFLAGS) -o $@
-SOURCES = leaks.c mprotect.c nano_test.c shm.c
-DIST_SOURCES = leaks.c mprotect.c nano_test.c shm.c
+SOURCES = leaks.c mprotect.c nano_test.c shm.c xddp_test.c
+DIST_SOURCES = leaks.c mprotect.c nano_test.c shm.c xddp_test.c
 HEADERS = $(noinst_HEADERS)
 ETAGS = etags
 CTAGS = ctags
@@ -358,6 +363,9 @@ nano_test$(EXEEXT): $(nano_test_OBJECTS) $(nano_test_DEPENDENCIES)
 shm$(EXEEXT): $(shm_OBJECTS) $(shm_DEPENDENCIES) 
@rm -f shm$(EXEEXT)
$(LINK) $(shm_OBJECTS) $(shm_LDADD) $(LIBS)
+xddp_test$(EXEEXT): $(xddp_test_OBJECTS) $(xddp_test_DEPENDENCIES) 
+   @rm -f xddp_test$(EXEEXT)
+   $(LINK) $(xddp_test_OBJECTS) $(xddp_test_LDADD) $(LIBS)
 
 mostlyclean-compile:
-rm -f *.$(OBJEXT)
@@ -369,6 +377,7 @@ distclean-compile:
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/mprotect.Po@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/nano_test.Po@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/shm.Po@am__quote@
+@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/xddp_test.Po@am__quote@
 
 .c.o:
@am__fastdepCC_TRUE@	$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
diff --git a/src/testsuite/regression/posix/check.h b/src/testsuite/regression/posix/check.h
index 63dc4a3..5530e8a 100644
--- a/src/testsuite/regression/posix/check.h
+++ b/src/testsuite/regression/posix/check.h
@@ -10,7 +10,7 @@
({  \
int rc = (expr);\
if (rc > 0) {  

Re: [Xenomai] segfault using rt_printf service

2012-09-18 Thread Gilles Chanteperdrix
On 09/14/2012 08:41 PM, Gilles Chanteperdrix wrote:

> On 09/14/2012 04:33 PM, Alessio Margan @ IIT wrote:
> 
>> Hi all,
>>
>> I'm switching from xenomai 2.5.6 to 2.6.1 using 
>> adeos-ipipe-2.6.38.8-x86-2.11-01.patch
>> In this test I have 2 threads:
>> - rx_udp receives udp packets from dsp boards at 1kHz
>> - boards_test sends udp packets at 1kHz
>>
>> I get a segfault in the printer_loop thread; the point is that if I
>> change the env var RT_PRINT_PERIOD to about 10 or 1000 (the default is
>> 100 ms), I do not get the segfault.
>>
>> Any suggestion ?
> 
> 
> Could you try the following patch?
> 
> diff --git a/src/skins/common/rt_print.c b/src/skins/common/rt_print.c
> index a9fce78..376330b 100644
> --- a/src/skins/common/rt_print.c
> +++ b/src/skins/common/rt_print.c
> @@ -163,9 +163,9 @@ static int vprint_to_buffer(FILE *stream, int priority, unsigned int mode,
>   if (mode == RT_PRINT_MODE_FORMAT) {
>   if (stream != RT_PRINT_SYSLOG_STREAM) {
>   /* We do not need the terminating \0 */
> - res = vsnprintf(head->data, len + 1, format, args);
> + res = vsnprintf(head->data, len, format, args);
>  
> - if (res < len + 1) {
> + if (res < len) {
>   /* Text was written completely, res contains its
>  length */
>   len = res;
> 
> 
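
For context, the one-byte difference matters because vsnprintf() stores at
most 'size' bytes including the terminating '\0' and returns the length the
untruncated text would have had (excluding the '\0'); with a len-byte slot,
passing len + 1 lets the terminator land one byte past its end. A minimal
stand-alone illustration (hypothetical buffer size, not the rt_print code):

#include <stdio.h>

int main(void)
{
	char buf[8];

	/* At most sizeof(buf) bytes are stored, '\0' included; the return
	 * value is the untruncated length (10 here). */
	int res = snprintf(buf, sizeof(buf), "%s", "0123456789");

	printf("res=%d buf=\"%s\"\n", res, buf);	/* res=10 buf="0123456" */

	/* Passing sizeof(buf) + 1 as the size argument would allow the '\0'
	 * to be written one byte past the end of buf - the off-by-one the
	 * patch removes by passing len instead of len + 1. */
	return 0;
}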


ping ?

-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] [GIT PULL] core-5 for x86

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 05:27 PM, Wolfgang Mauerer wrote:

> On 18/09/12 16:25, Gilles Chanteperdrix wrote:
>> On 09/18/2012 04:11 PM, Wolfgang Mauerer wrote:
>>> Dear all,
>>>
>>> here's a rebase of the x86-specific bits of core-4 to core-5. I've
>>> included all x86 specific changes that are not yet in core-5, and
>>> also added the patches I sent earlier for core-4. I did not include a
>>> separate patch for the mechanical changes required to apply the
>>> x86 base patch on top of core-5, but can surely do so if desired.
>>
>> I am not quite finished with x86 on 3.4. So, I would like to start 3.5
>> from the finishing point on 3.4. There are already commits in my branch
>> which you did not take:
>>
>> http://git.xenomai.org/?p=ipipe-gch.git;a=shortlog;h=refs/heads/for-core-3.4
> that's true; my last pull was too old. I'll add the corresponding
> commits to the tree (FYI, the purpose of this tree is mainly to do some
> experiments with the latest ipipe release and the latest kernel, and
> I wanted to make sure that work is not duplicated in case someone else
> is pursuing similar goals)


Ok. We have a currently pending issue on x86 which you should be
informed about before discovering it during your tests: using
rthal_supported_cpus is broken in the I-pipe core patches when using the
LAPIC timer. Since there is only one irq handler for all the LAPIC
timers, the handler is registered on all cpus, but on the non-started
cpus the handler will do nothing at best, and will not forward the LAPIC
ticks to Linux (which is still in control of the LAPIC timer on these
cpus).

This problem is due to the fact that we keep the same vector as Linux,
and so the same irq. There are two ways out of this:

- change the LAPIC vector when Xenomai takes control of the LAPIC
timer, like we used to do. This is racy with the current code, because
the timer is taken over by Xenomai but still used a bit by Linux before
it is programmed by Xenomai, and Xenomai assumes that the host tick irq
is the same as the timer irq. All this can be fixed, but the remaining
drawback of this approach is that it does not fix the issue on
architectures where the local timer irq is the same on all cpus but
cannot be changed, hence the second approach;
- the second approach is to add a test at the beginning of
xnintr_clock_handler and forward the irq to the root domain if the
current cpu does not belong to xnarch_supported_cpus (see the sketch
after the link below). This means some patching of the I-pipe timers so
that ipipe_percpu.hrtimer_irq also gets defined for non-supported cpus
when they use a timer shared with other cpus, essentially what this
patch tries (but fails) to achieve:

http://www.xenomai.org/pipermail/xenomai/2012-September/026066.html
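
In code, the second approach boils down to a guard like the one below at
the top of the nucleus clock handler. This is only a sketch of the idea;
the helper names (xnarch_cpu_supported(), ipipe_post_irq_root()) and the
per-cpu accessor are assumptions, not the actual patch:

/*
 * Sketch: relay the per-cpu timer tick back to Linux on cpus Xenomai
 * does not manage, instead of silently eating it.
 */
void xnintr_clock_handler(void)
{
	unsigned int cpu = ipipe_processor_id();

	if (!xnarch_cpu_supported(cpu)) {
		/* Not a Xenomai cpu: the root domain still owns the LAPIC
		 * timer here, so hand the tick over and bail out. */
		ipipe_post_irq_root(per_cpu(ipipe_percpu, cpu).hrtimer_irq);
		return;
	}

	/* ... regular Xenomai clock tick processing ... */
}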


-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 09:18 PM, Jan Kiszka wrote:

> On 2012-09-18 21:12, Gilles Chanteperdrix wrote:
>> On 09/18/2012 09:10 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-18 21:05, Gilles Chanteperdrix wrote:
 rah, vectors_limit is not needed at all. I do not see which look you are
 talking about.
>>>
>>> cfg = irq_cfg(irq);
>>> if (!cpumask_test_cpu(cpu, cfg->domain))
>>> per_cpu(vector_irq, cpu)[vector] = -1;
>>
>>
>> Yes, but :
>>
>> -for (vector = 0; vector < NR_VECTORS; ++vector) {
>> +for (vector = 0; vector < first_system_vector; ++vector) {
>>
> 
> And you know all side effects of that change?
> 
> My point is: We have a working version, released and tested on machines
> that make use of the affected code paths.


Come on, look at the code:

for (vector = 0; vector < first_system_vector; ++vector) {
	if (vector == IRQ_MOVE_CLEANUP_VECTOR)
		continue;

	irq = per_cpu(vector_irq, cpu)[vector];
	if (irq < 0)
		continue;

	cfg = irq_cfg(irq);
	if (!cpumask_test_cpu(cpu, cfg->domain))
		per_cpu(vector_irq, cpu)[vector] = -1;
}

It simply skips an initialization to -1 because we know that we already
initialized this part of the array elsewhere.

I do not claim that it has no bug, but you can test this approach too,
and it will have been tested too...


-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Jan Kiszka
On 2012-09-18 21:12, Gilles Chanteperdrix wrote:
> On 09/18/2012 09:10 PM, Jan Kiszka wrote:
> 
>> On 2012-09-18 21:05, Gilles Chanteperdrix wrote:
>>> rah, vectors_limit is not needed at all. I do not see which look you are
>>> talking about.
>>
>>  cfg = irq_cfg(irq);
>>  if (!cpumask_test_cpu(cpu, cfg->domain))
>>  per_cpu(vector_irq, cpu)[vector] = -1;
> 
> 
> Yes, but :
> 
> - for (vector = 0; vector < NR_VECTORS; ++vector) {
> + for (vector = 0; vector < first_system_vector; ++vector) {
> 

And you know all side effects of that change?

My point is: We have a working version, released and tested on machines
that make use of the affected code paths.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 09:10 PM, Jan Kiszka wrote:

> On 2012-09-18 21:01, Gilles Chanteperdrix wrote:
>> On 09/18/2012 08:55 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-18 20:31, Gilles Chanteperdrix wrote:
 On 09/18/2012 08:25 PM, Jan Kiszka wrote:

> On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
>> On 09/18/2012 07:25 PM, Jan Kiszka wrote:
>>> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
 On 09/16/2012 10:50 AM, Philippe Gerum wrote:
> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
>> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>>
>>> Hi there!
>>>
>>> While testing ipipe-core3.2 on an SMP x86 machine, I found a 
>>> reproducible
>>> kernel BUG after some seconds after starting irqbalance:
>>>
>>> [ cut here ]
>>> kernel BUG at arch/x86/kernel/ipipe.c:592!
>>> [...]
>>>
>>> This seems to be caused by a missing entry for 
>>> IRQ_MOVE_CLEANUP_VECTOR
>>> in the per_cpu array vector_irq[].
>>>
>>> I found that ipipe_init_vector_irq() (which used to add the needed
>>> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
>>> happened when porting from 2.6.38 to 3.1 - at least I can still see 
>>> the
>>> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't 
>>> find
>>> any x86-branch in-between).
>>
>>
>> If I understand correctly, ipipe_init_vector_irq is no longer needed
>> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
>> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... 
>> except
>> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
>> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
>> __ipipe_handle_irq as well.
>>
>>
>
> This 

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 09:10 PM, Jan Kiszka wrote:

> On 2012-09-18 21:05, Gilles Chanteperdrix wrote:
>> rah, vectors_limit is not needed at all. I do not see which look you are
>> talking about.
> 
>   cfg = irq_cfg(irq);
>   if (!cpumask_test_cpu(cpu, cfg->domain))
>   per_cpu(vector_irq, cpu)[vector] = -1;


Yes, but :

 -  for (vector = 0; vector < NR_VECTORS; ++vector) {
 +  for (vector = 0; vector < first_system_vector; ++vector) {


-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Jan Kiszka
On 2012-09-18 21:05, Gilles Chanteperdrix wrote:
> On 09/18/2012 08:55 PM, Jan Kiszka wrote:
> 
>> On 2012-09-18 20:31, Gilles Chanteperdrix wrote:
>>> On 09/18/2012 08:25 PM, Jan Kiszka wrote:
>>>
 On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
> On 09/18/2012 07:25 PM, Jan Kiszka wrote:
>> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
>>> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
 On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>
>> Hi there!
>>
>> While testing ipipe-core3.2 on an SMP x86 machine, I found a 
>> reproducible
>> kernel BUG after some seconds after starting irqbalance:
>>
>> [ cut here ]
>> kernel BUG at arch/x86/kernel/ipipe.c:592!
>> [...]
>>
>> This seems to be caused by a missing entry for 
>> IRQ_MOVE_CLEANUP_VECTOR
>> in the per_cpu array vector_irq[].
>>
>> I found that ipipe_init_vector_irq() (which used to add the needed
>> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
>> happened when porting from 2.6.38 to 3.1 - at least I can still see 
>> the
>> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't 
>> find
>> any x86-branch in-between).
>
>
> If I understand correctly, ipipe_init_vector_irq is no longer needed
> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
> __ipipe_handle_irq as well.
>
>

 This is correct, but unfortunately, upstream reshuffles the IRQ vector
 space every so often, and does not restrict the special vectors to the
 s

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Jan Kiszka
On 2012-09-18 21:01, Gilles Chanteperdrix wrote:
> On 09/18/2012 08:55 PM, Jan Kiszka wrote:
> 
>> On 2012-09-18 20:31, Gilles Chanteperdrix wrote:
>>> On 09/18/2012 08:25 PM, Jan Kiszka wrote:
>>>
 On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
> On 09/18/2012 07:25 PM, Jan Kiszka wrote:
>> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
>>> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
 On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>
>> Hi there!
>>
>> While testing ipipe-core3.2 on an SMP x86 machine, I found a 
>> reproducible
>> kernel BUG after some seconds after starting irqbalance:
>>
>> [ cut here ]
>> kernel BUG at arch/x86/kernel/ipipe.c:592!
>> [...]
>>
>> This seems to be caused by a missing entry for 
>> IRQ_MOVE_CLEANUP_VECTOR
>> in the per_cpu array vector_irq[].
>>
>> I found that ipipe_init_vector_irq() (which used to add the needed
>> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
>> happened when porting from 2.6.38 to 3.1 - at least I can still see 
>> the
>> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't 
>> find
>> any x86-branch in-between).
>
>
> If I understand correctly, ipipe_init_vector_irq is no longer needed
> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
> __ipipe_handle_irq as well.
>
>

 This is correct, but unfortunately, upstream reshuffles the IRQ vector
 space every so often, and does not restrict the special vectors to the
 s

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 08:55 PM, Jan Kiszka wrote:

> On 2012-09-18 20:31, Gilles Chanteperdrix wrote:
>> On 09/18/2012 08:25 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
 On 09/18/2012 07:25 PM, Jan Kiszka wrote:
> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
>> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
>>> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
 On 09/11/2012 05:56 PM, Gernot Hillier wrote:

> Hi there!
>
> While testing ipipe-core3.2 on an SMP x86 machine, I found a 
> reproducible
> kernel BUG after some seconds after starting irqbalance:
>
> [ cut here ]
> kernel BUG at arch/x86/kernel/ipipe.c:592!
> [...]
>
> This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
> in the per_cpu array vector_irq[].
>
> I found that ipipe_init_vector_irq() (which used to add the needed
> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
> happened when porting from 2.6.38 to 3.1 - at least I can still see 
> the
> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't 
> find
> any x86-branch in-between).


 If I understand correctly, ipipe_init_vector_irq is no longer needed
 because the I-pipe core uses ipipe_apic_vector_irq for vectors above
 FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
 IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
 should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
 __ipipe_handle_irq as well.


>>>
>>> This is correct, but unfortunately, upstream reshuffles the IRQ vector
>>> space every so often, and does not restrict the special vectors to the
>>> system range anymore. Therefore, we should re-introduce a post-setup
>>> routine for the vector->irq map, grouping all fixups we need in one 
>>> place.
>>>
>> 

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 08:55 PM, Jan Kiszka wrote:

> On 2012-09-18 20:31, Gilles Chanteperdrix wrote:
>> On 09/18/2012 08:25 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
 On 09/18/2012 07:25 PM, Jan Kiszka wrote:
> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
>> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
>>> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
 On 09/11/2012 05:56 PM, Gernot Hillier wrote:

> Hi there!
>
> While testing ipipe-core3.2 on an SMP x86 machine, I found a 
> reproducible
> kernel BUG after some seconds after starting irqbalance:
>
> [ cut here ]
> kernel BUG at arch/x86/kernel/ipipe.c:592!
> [...]
>
> This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
> in the per_cpu array vector_irq[].
>
> I found that ipipe_init_vector_irq() (which used to add the needed
> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
> happened when porting from 2.6.38 to 3.1 - at least I can still see 
> the
> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't 
> find
> any x86-branch in-between).


 If I understand correctly, ipipe_init_vector_irq is no longer needed
 because the I-pipe core uses ipipe_apic_vector_irq for vectors above
 FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
 IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
 should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
 __ipipe_handle_irq as well.


>>>
>>> This is correct, but unfortunately, upstream reshuffles the IRQ vector
>>> space every so often, and does not restrict the special vectors to the
>>> system range anymore. Therefore, we should re-introduce a post-setup
>>> routine for the vector->irq map, grouping all fixups we need in one 
>>> place.
>>>
>> 

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Jan Kiszka
On 2012-09-18 20:31, Gilles Chanteperdrix wrote:
> On 09/18/2012 08:25 PM, Jan Kiszka wrote:
> 
>> On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
>>> On 09/18/2012 07:25 PM, Jan Kiszka wrote:
 On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
>> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
>>> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>>>
 Hi there!

 While testing ipipe-core3.2 on an SMP x86 machine, I found a 
 reproducible
 kernel BUG after some seconds after starting irqbalance:

 [ cut here ]
 kernel BUG at arch/x86/kernel/ipipe.c:592!
[...]

 This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
 in the per_cpu array vector_irq[].

 I found that ipipe_init_vector_irq() (which used to add the needed
 entry) was factored out from arch/x86/kernel/ipipe.c. This likely
 happened when porting from 2.6.38 to 3.1 - at least I can still see the
 code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't 
 find
 any x86-branch in-between).
>>>
>>>
>>> If I understand correctly, ipipe_init_vector_irq is no longer needed
>>> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
>>> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
>>> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
>>> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
>>> __ipipe_handle_irq as well.
>>>
>>>
>>
>> This is correct, but unfortunately, upstream reshuffles the IRQ vector
>> space every so often, and does not restrict the special vectors to the
>> system range anymore. Therefore, we should re-introduce a post-setup
>> routine for the vector->irq map, grouping all fixups we need in one 
>> place.
>>
> I propose the following change:
> http://git.xenomai.org/?p=ipipe-gch.git;a=commitdiff;h=9d131dc33080cda3f7e40342210d9338dc0c3d02
>
> Which avo

Re: [Xenomai] native skin example with rt_pipe_monitor() ?

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 04:19 PM, Michael Wisse wrote:

> Thanks for the link to the API documentation, but I'm looking for an
> example.

The rt_pipe API is deprecated. Non-deprecated APIs such as XDDP sockets
have examples; see:
examples/rtdm/profiles/ipc/xddp-*

-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 08:25 PM, Jan Kiszka wrote:

> On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
>> On 09/18/2012 07:25 PM, Jan Kiszka wrote:
>>> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
 On 09/16/2012 10:50 AM, Philippe Gerum wrote:
> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
>> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>>
>>> Hi there!
>>>
>>> While testing ipipe-core3.2 on an SMP x86 machine, I found a 
>>> reproducible
>>> kernel BUG after some seconds after starting irqbalance:
>>>
>>> [ cut here ]
>>> kernel BUG at arch/x86/kernel/ipipe.c:592!
>>> [...]
>>>
>>> This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
>>> in the per_cpu array vector_irq[].
>>>
>>> I found that ipipe_init_vector_irq() (which used to add the needed
>>> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
>>> happened when porting from 2.6.38 to 3.1 - at least I can still see the
>>> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't find
>>> any x86-branch in-between).
>>
>>
>> If I understand correctly, ipipe_init_vector_irq is no longer needed
>> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
>> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
>> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
>> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
>> __ipipe_handle_irq as well.
>>
>>
>
> This is correct, but unfortunately, upstream reshuffles the IRQ vector
> space every so often, and does not restrict the special vectors to the
> system range anymore. Therefore, we should re-introduce a post-setup
> routine for the vector->irq map, grouping all fixups we need in one place.
>
 I propose the following change:
 http://git.xenomai.org/?p=ipipe-gch.git;a=commitdiff;h=9d131dc33080cda3f7e40342210d9338dc0c3d02

 Which avoids listing explicitly the vectors we want to intercept, and
 so, should allow some changes to happen in the kernel without having to
 care too much (except for 

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Jan Kiszka
On 2012-09-18 19:28, Gilles Chanteperdrix wrote:
> On 09/18/2012 07:25 PM, Jan Kiszka wrote:
>> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
>>> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
 On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>
>> Hi there!
>>
>> While testing ipipe-core3.2 on an SMP x86 machine, I found a reproducible
>> kernel BUG after some seconds after starting irqbalance:
>>
>> [ cut here ]
>> kernel BUG at arch/x86/kernel/ipipe.c:592!
>> [...]
>>
>> This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
>> in the per_cpu array vector_irq[].
>>
>> I found that ipipe_init_vector_irq() (which used to add the needed
>> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
>> happened when porting from 2.6.38 to 3.1 - at least I can still see the
>> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't find
>> any x86-branch in-between).
>
>
> If I understand correctly, ipipe_init_vector_irq is no longer needed
> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
> __ipipe_handle_irq as well.
>
>

 This is correct, but unfortunately, upstream reshuffles the IRQ vector
 space every so often, and does not restrict the special vectors to the
 system range anymore. Therefore, we should re-introduce a post-setup
 routine for the vector->irq map, grouping all fixups we need in one place.

>>> I propose the following change:
>>> http://git.xenomai.org/?p=ipipe-gch.git;a=commitdiff;h=9d131dc33080cda3f7e40342210d9338dc0c3d02
>>>
>>> Which avoids listing explicitly the vectors we want to intercept, and
>>> so, should allow some changes to happen in the kernel without having to
>>> care too much (except for vectors such as IRQ_MOVE_CLEANUP_VECTOR which
>>> do not go through alloc_intr_gate, but this vector is the only exception,
>>> for now).
>>
>> Crashes on

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 07:25 PM, Jan Kiszka wrote:
> On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
>> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
>>> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
 On 09/11/2012 05:56 PM, Gernot Hillier wrote:

> Hi there!
>
> While testing ipipe-core3.2 on an SMP x86 machine, I found a reproducible
> kernel BUG after some seconds after starting irqbalance:
>
> [ cut here ]
> kernel BUG at arch/x86/kernel/ipipe.c:592!
> [...]
>
> This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
> in the per_cpu array vector_irq[].
>
> I found that ipipe_init_vector_irq() (which used to add the needed
> entry) was factored out from arch/x86/kernel/ipipe.c. This likely
> happened when porting from 2.6.38 to 3.1 - at least I can still see the
> code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't find
> any x86-branch in-between).


 If I understand correctly, ipipe_init_vector_irq is no longer needed
 because the I-pipe core uses ipipe_apic_vector_irq for vectors above
 FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
 IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
 should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
 __ipipe_handle_irq as well.


>>>
>>> This is correct, but unfortunately, upstream reshuffles the IRQ vector
>>> space every so often, and does not restrict the special vectors to the
>>> system range anymore. Therefore, we should re-introduce a post-setup
>>> routine for the vector->irq map, grouping all fixups we need in one place.
>>>
>> I propose the following change:
>> http://git.xenomai.org/?p=ipipe-gch.git;a=commitdiff;h=9d131dc33080cda3f7e40342210d9338dc0c3d02
>>
>> Which avoids listing explicitly the vectors we want to intercept, and
>> so, should allow some changes to happen in the kernel without having to
>> care too much (except for vectors such as IRQ_MOVE_CLEANUP_VECTOR which
>> do not go through alloc_intr_gate, but this vector is the only exception,
>> for now).
> 
> Crashes on boot, SMP at least. Investigating.

Well, I tested it in SMP, so, the crash is probably due to some option I
could not activate (such as IRQ_REMAP, the

Re: [Xenomai] ipipe/x86: kernel BUG due to missing IRQ_MOVE_CLEANUP_VECTOR entry in ipipe-core3.2

2012-09-18 Thread Jan Kiszka
On 2012-09-17 10:13, Gilles Chanteperdrix wrote:
> On 09/16/2012 10:50 AM, Philippe Gerum wrote:
>> On 09/16/2012 12:26 AM, Gilles Chanteperdrix wrote:
>>> On 09/11/2012 05:56 PM, Gernot Hillier wrote:
>>>
 Hi there!

 While testing ipipe-core3.2 on an SMP x86 machine, I found a reproducible
 kernel BUG after some seconds after starting irqbalance:

 [ cut here ]
 kernel BUG at arch/x86/kernel/ipipe.c:592!
 invalid opcode:  [#1] SMP
 CPU 0
 Modules linked in: des_generic md4 i7core_edac psmouse nls_cp437 edac_core 
 cifs serio_raw joydev raid10 raid456 async_pq async_xor xor async_memcpy 
 async_raid6_recov usbhid hid mpt2sas scsi_transport_sas raid_class igb 
 raid6_pq async_tx raid1 raid0 multipath linear

 Pid: 0, comm: swapper/0 Not tainted 3.2.21-9-xenomai #3 Siemens AG 
 Healthcare Sector MARS 2.1/X8DTH
 RIP: 0010:[]  [] 
 __ipipe_handle_irq+0x1bc/0x1d0
 RSP: 0018:8177bbe0  EFLAGS: 00010086
 RAX: d880 RBX:  RCX: 0092
 RDX: ffdf RSI: 8177bc18 RDI: 8177bbf8
 RBP: 8177bc00 R08: 0001 R09: 
 R10: 880624ebaef8 R11: 0029fbc4 R12: d880
 R13: 8177bbf8 R14: 880624e0 R15: 880624e0d880
 FS:  () GS:880624e0() 
 knlGS:
 CS:  0010 DS:  ES:  CR0: 8005003b
 CR2: 7f452a2efb80 CR3: 000c114d3000 CR4: 06f0
 DR0:  DR1:  DR2: 
 DR3:  DR6: 0ff0 DR7: 0400
 Process swapper/0 (pid: 0, threadinfo 81778000, task 
 81787020)
 Stack:
   8177bfd8 0063 880624e1f9a8
  8177bca8 815a44dd 8177bc18 8177bca8
  815a373b 0029fbc4 880624eba570 
 Call Trace:
  [] irq_move_cleanup_interrupt+0x5d/0x90
  [] ? call_softirq+0x19/0x30
  [] do_softirq+0xc5/0x100
  [] irq_exit+0xd5/0xf0
  [] do_IRQ+0x6f/0x100
  [] ? __entry_text_end+0x5/0x5
  [] __ipipe_do_IRQ+0x83/0xa0
  [] ? __ipipe_do_IRQ+0x89/0xa0
  [] __ipipe_dispatch_irq_fast+0x16a/0x170
  [] __ipipe_dispatch_irq+0xe9/0x210
  [] __ipipe_handle_irq+0x71/0x1d0
  [] common_interrupt+0x60/0x81
  [] ? __ipipe_halt_root+0x34/0x50
  [] ? __ipipe_halt_root+0x27/0x50
  [] default_idle+0x66/0x1a0
  [] cpu_idle+0xaf/0x100
  [] rest_init+0x72/0x80
  [] start_kernel+0x3b4/0x3bf
  [] x86_64_start_reservations+0x131/0x135
  [] x86_64_start_kernel+0x131/0x138
 Code: ff ff 0f 1f 44 00 00 48 83 a0 98 06 00 00 fe 4c 89 ee bf 20 00 00 00 
 e8 63 83 09 00 e9 f6 fe ff ff be 01 00 00 00 e9 ab fe ff ff <0f> 0b 66 90 
 eb fc 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55
 RIP  [] __ipipe_handle_irq+0x1bc/0x1d0
  RSP  

 This seems to be caused by a missing entry for IRQ_MOVE_CLEANUP_VECTOR
 in the per_cpu array vector_irq[].

 I found that ipipe_init_vector_irq() (which used to add the needed
 entry) was factored out from arch/x86/kernel/ipipe.c. This likely
 happened when porting from 2.6.38 to 3.1 - at least I can still see the
 code in ipipe-2.6.38-x86 and missed it in ipipe-core3.1 (and didn't find
 any x86-branch in-between).
>>>
>>>
>>> If I understand correctly, ipipe_init_vector_irq is no longer needed
>>> because the I-pipe core uses ipipe_apic_vector_irq for vectors above
>>> FIRST_SYSTEM_VECTOR. All system vectors are above this limit... except
>>> IRQ_MOVE_CLEANUP_VECTOR. So, your patch is correct. Another one which
>>> should work is to handle the special case IRQ_MOVE_CLEANUP_IRQ in
>>> __ipipe_handle_irq as well.
>>>
>>>
>>
>> This is correct, but unfortunately, upstream reshuffles the IRQ vector
>> space every so often, and does not restrict the special vectors to the
>> system range anymore. Therefore, we should re-introduce a post-setup
>> routine for the vector->irq map, grouping all fixups we need in one place.
>>
> I propose the following change:
> http://git.xenomai.org/?p=ipipe-gch.git;a=commitdiff;h=9d131dc33080cda3f7e40342210d9338dc0c3d02
> 
> Which avoids listing explicitly the vectors we want to intercept, and
> so, should allow some changes to happen in the kernel without having to
> care too much (except for vectors such as IRQ_MOVE_CLEANUP_VECTOR which
> do not go through alloc_intr_gate, but this vector is the only exception,
> for now).

Crashes on boot, SMP at least. Investigating.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] [GIT PULL] core-5 for x86

2012-09-18 Thread Wolfgang Mauerer
On 18/09/12 16:25, Gilles Chanteperdrix wrote:
> On 09/18/2012 04:11 PM, Wolfgang Mauerer wrote:
>> Dear all,
>>
>> here's a rebase of the x86-specific bits of core-4 to core-5. I've
>> included all x86 specific changes that are not yet in core-5, and
>> also added the patches I sent earlier for core-4. I did not include a
>> separate patch for the mechanical changes required to apply the
>> x86 base patch on top of core-5, but can surely do so if desired.
> 
> I am not quite finished with x86 on 3.4. So, I would like to start 3.5
> from the finishing point on 3.4. There are already commits in my branch
> which you did not take:
> 
> http://git.xenomai.org/?p=ipipe-gch.git;a=shortlog;h=refs/heads/for-core-3.4
that's true; my last pull was too old. I'll add the corresponding
commits to the tree (FYI, the purpose of this tree is mainly to do some
experiments with the latest ipipe release and the latest kernel, and
I wanted to make sure that work is not duplicated in case someone else
is pursuing similar goals)
> 
> This is assuming that I am the (flaky substitute for a) maintainer of the
> x86 architecture. Of course, if someone wants to take over the
> maintenance of the x86 architecture, I am gladly returning to the ARMs.
> 
>> ipipe-core4-x86 applied to core-5
>>   x86/ipipe: Make io_apic_level_ack_pending available for ipipe
> 
> What is this commit? Neither the text nor the diff are very explicit.
yes, this commit is fairly big considering its small effect. I've updated
the description as follows:

Make sure that io_apic_level_ack_pending() is compiled in when ipipe is
configured. Also move the implementation downwards so that it is
not referenced before it is defined.

Best regards, Wolfgang

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] [GIT PULL] core-5 for x86

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 04:11 PM, Wolfgang Mauerer wrote:
> Dear all,
> 
> here's a rebase of the x86-specific bits of core-4 to core-5. I've
> included all x86 specific changes that are not yet in core-5, and
> also added the patches I sent earlier for core-4. I did not include a
> separate patch for the mechanical changes required to apply the
> x86 base patch on top of core-5, but can surely do so if desired.

I am not quite finished with x86 on 3.4. So, I would like to start 3.5
from the finishing point on 3.4. There are already commits in my branch
which you did not take:

http://git.xenomai.org/?p=ipipe-gch.git;a=shortlog;h=refs/heads/for-core-3.4

This is assuming that I am the (flaky substitute for a) maintainer of the
x86 architecture. Of course, if someone wants to take over the
maintenance of the x86 architecture, I am gladly returning to the ARMs.


> ipipe-core4-x86 applied to core-5
>   x86/ipipe: Make io_apic_level_ack_pending available for ipipe

What is this commit? Neither the text nor the diff are very explicit.

-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] native skin example with rt_pipe_monitor() ?

2012-09-18 Thread Michael Wisse
Thanks for the link to the API documentation, but I'm looking for an 
example.


Michael


On 18.09.2012 15:56, Philippe Gerum wrote:

On 09/18/2012 03:49 PM, Michael Wisse wrote:

Hello,

I'm looking for an example for rt_pipe_monitor(...);
pipe.c does not show how to use a handler.

Can someone help me?


http://www.xenomai.org/documentation/xenomai-2.6/html/api/group__pipe.html#ga944600f54dc78a77badeda77f3af732d


Regards
Michael

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] [GIT PULL] core-5 for x86

2012-09-18 Thread Wolfgang Mauerer
Dear all,

here's a rebase of the x86-specific bits of core-4 to core-5. I've
included all x86 specific changes that are not yet in core-5, and
also added the patches I sent earlier for core-4. I did not include a
separate patch for the mechanical changes required to apply the
x86 base patch on top of core-5, but can surely do so if desired.

Cheers, Wolfgang

The following changes since commit 9f90a923093f411908e3536088dfdb1d936f3fe8:


  ipipe: ipipe_timers_request -> ipipe_select_timers (2012-09-03 11:19:15 +0200)

are available in the git repository at:
  https://github.com/siemens/ipipe.git core-3.5_for-upstream

Gilles Chanteperdrix (5):
  ipipe/x86: provide ipipe_head_switch_mm
  ipipe/x86: do not restore during context switch
  ipipe/x86: mask io_apic EOI irq before EOI 
  ipipe/x86: fix IOAPIC with CONFIG_IRQ_REMAP
  ipipe/x86: fix compilation for AMD processors  

Philippe Gerum (1):
  x86/ipipe: ipipe_head_switch_mm -> ipipe_switch_mm_head

Wolfgang Mauerer (5):
  ipipe-core4-x86 applied to core-5
  x86/ipipe: Make io_apic_level_ack_pending available for ipipe
  ipipe: Remove superfluous symbol export of irq_to_desc   
  ipipe,x86: Introduce hard_irqs_disabled_flags
  Fix IRQs-off-tracer for x86_64   

 .gitignore|1 +
 arch/x86/Kconfig  |   26 +-
 arch/x86/include/asm/apic.h   |6 + 
 arch/x86/include/asm/apicdef.h|3 + 
 arch/x86/include/asm/fpu-internal.h   |   10 + 
 arch/x86/include/asm/hw_irq.h |   10 + 
 arch/x86/include/asm/i8259.h  |2 +-
 arch/x86/include/asm/ipi.h|   23 +-
 arch/x86/include/asm/ipipe.h  |  105 ++
 arch/x86/include/asm/ipipe_32.h   |   86 + 
 arch/x86/include/asm/ipipe_64.h   |   90 + 
 arch/x86/include/asm/ipipe_base.h |  226 +++
 arch/x86/include/asm/irq_vectors.h|   11 +  
 arch/x86/include/asm/irqflags.h   |  210 +++
 arch/x86/include/asm/mmu_context.h|   21 +- 
 arch/x86/include/asm/page_64_types.h  |4 +  
 arch/x86/include/asm/processor.h  |1 +  
 arch/x86/include/asm/special_insns.h  |   12 +  
 arch/x86/include/asm/switch_to.h  |7 +- 
 arch/x86/include/asm/thread_info.h|2 +  
 arch/x86/include/asm/traps.h  |2 +- 
 arch/x86/include/asm/tsc.h|1 +  
 arch/x86/kernel/Makefile  |1 +  
 arch/x86/kernel/apic/apic.c   |   39 ++-
 arch/x86/kernel/apic/apic_flat_64.c   |4 +- 
 arch/x86/kernel/apic/io_apic.c|  219 ++--
 arch/x86/kernel/apic/ipi.c|   20 +-  
 arch/x86/kernel/apic/x2apic_cluster.c |4 +-  
 arch/x86/kernel/apic/x2apic_phys.c|4 +-  
 arch/x86/kernel/cpu/mtrr/cyrix.c  |   12 +-  
 arch/x86/kernel/cpu/mtrr/generic.c|   12 +-  
 arch/x86/kernel/dumpstack_32.c|3 +   
 arch/x86/kernel/dumpstack_64.c|5 +   
 arch/x86/kernel/entry_32.S|  147 ++--
 arch/x86/kernel/entry_64.S|  247 +++--
 arch/x86/kernel/hpet.c|   27 ++-  
 arch/x86/kernel/i387.c|3 +
 arch/x86/kernel/i8259.c   |   30 ++-
 arch/x86/kernel/ipipe.c   |  664 +
 arch/x86/kernel/irq.c |7 +-
 arch/x86/kernel/irqinit.c |7 +
 arch/x86/kernel/process.c |   21 +-
 arch/x86/kernel/process_32.c  |4 +-
 arch/x86/kernel/process_64.c  |9 +-
 arch/x86/kernel/ptrace.c  |5 +
 arch/x86/kernel/smp.c |4 +-
 arch/x86/kernel/smpboot.c |   10 +-
 arch/x86/kernel/traps.c   |4 +
 arch/x86/kernel/tsc.c |   12 +-
 arch/x86/kernel/vm86_32.c |4 +
 arch/x86/kernel/vsyscall_64.c |4 +
 arch/x86/kvm/svm.c|4 +-
 arch/x86/kvm/vmx.c|   13 +-
 arch/x86/kvm/x86.c|   69 +++-
 arch/x86/lib/mmx_32.c |2 +-
 arch/x86/lib/thunk_64.S   |   21 +
 arch/x86/mm/fault.c   |   56 +++-
 arch/x86/mm/tlb.c |7 +
 arch/x86/platform/uv/tlb_uv.c |5 +
 59 files changed, 2384 insertions(+), 184 deletions(-)
 create mode 100644 arch/x86/include/asm/ipipe.h
 create mode 100644 arch/x86/include/asm/ipipe_32.h
 create mode 100644 arch/x86/include/asm/ipipe_64.h
 create mode 100644 arch/x86/include/asm/ipipe_base.h
 create mode 100644 arch/x86/kernel/ipipe.c

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] native skin example with rt_pipe_monitor() ?

2012-09-18 Thread Philippe Gerum
On 09/18/2012 03:49 PM, Michael Wisse wrote:
> Hello,
> 
> I'm looking for an example for rt_pipe_monitor(...);
> the pipe.c example does not show how to use a handler.
> 
> Can someone help me?
> 

http://www.xenomai.org/documentation/xenomai-2.6/html/api/group__pipe.html#ga944600f54dc78a77badeda77f3af732d

> Regards
> Michael
> 


-- 
Philippe.

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


[Xenomai] native skin example with rt_pipe_monitor() ?

2012-09-18 Thread Michael Wisse

Hello,

I'm looking for an example for rt_pipe_monitor(...);
the pipe.c example does not show how to use a handler.

Can someone help me?

Regards
Michael

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] IO-APIC latencies

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 11:30 AM, Jan Kiszka wrote:
> On 2012-09-18 11:06, Gilles Chanteperdrix wrote:
>> On 09/18/2012 10:48 AM, Jan Kiszka wrote:
>>> On 2012-09-17 23:50, Gilles Chanteperdrix wrote:
 On 09/17/2012 08:54 PM, Jan Kiszka wrote:

> On 2012-09-17 20:37, Gilles Chanteperdrix wrote:
>> On 09/17/2012 08:29 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-17 20:18, Gilles Chanteperdrix wrote:
 On 09/17/2012 08:15 PM, Jan Kiszka wrote:

> On 2012-09-17 20:12, Jan Kiszka wrote:
>> On 2012-09-17 20:08, Gilles Chanteperdrix wrote:
>>> On 09/17/2012 08:05 PM, Jan Kiszka wrote:
>>>
 On 2012-09-17 19:46, Gilles Chanteperdrix wrote:
> ipipe_end is a nop when called from primary domain, yes, but this 
> is not
> very different from edge irqs. Also, fasteoi become a bit like 
> MSI: in
> the same way as we can not mask MSI from primary domain, we 
> should not
> mask IO-APIC fasteoi irqs, because the cost is too prohibitive. 
> If we
> can live with MSI without masking them in primary mode, I guess 
> we can
> do the same with fasteoi irqs.

 MSIs are edge triggered, fasteois are still level-based. They 
 require
 masking at the point you defer them - what we do and what Linux 
 may even
 extend beyond that. If you mask them by raising the task priority, 
 you
 have to keep it raised until Linux finally handled the IRQ.
>>>
>>>
>>> Yes.
>>>
 Or you
 decide to mask it at IO-APIC level again.
>>>
>>>
>>> We do not want that.
>>>
 If you keep the TPR raised,
 you will block more than what Linux wants to block.
>>>
>>>
>>> The point is that if the TPR stays raised, it means that the primary
>>> domain has preempted Linux, so we want to keep it that way. Otherwise
>>> the TPR gets lowered when Linux has handled the interrupt.
>>>
>>> A week-end of testing made me sure of one thing: it works. I assure 
>>> you.
>>
>> Probably, in the absence of IRQF_ONESHOT Linux interrupts. No longer 
>> if
>> you face threaded IRQs - I assure you.
>
> Well, it may work (if mask/unmask callbacks work as native) but the
> benefit is gone: masking at IO-APIC level will be done again. Given 
> that
> threaded IRQs become increasingly popular, it will also be hard to 
> avoid
> them in common setups.


 The thing is, if we no longer use the IO-APIC spinlock from primary
 domain, we may not have to turn it into an ipipe_spinlock, and may be
 able to preempt the IO-APIC masking.
>>>
>>> That might be true - but is the latency related to the lock or the
>>> hardware access? In the latter case, you will still stall the CPU on it
>>> and have to isolate the load on a non-RT CPU again.
>>>
>>> BTW, the task priority for the RT domain is a quite important parameter.
>>> If you put it too low, Linux can run out of vectors. If you put it too
>>> high, the same may happen to Xenomai - on bigger boxes.
>>
>>
>> Yes, and there are only 16 levels. But Xenomai does not need that many
>> levels.
>
> Who is telling you this? It's part of the system setup. And that may
> lean toward RT or toward non-RT. This level should be adjusted according
> to the current allocation of Linux and the RT domain for a particular
> CPU, not hard-coded or compile-time defined.


 In theory, I agree; in practice, let's be crazy: assume someone would
 want an RT serial driver with 4 irqs, an RT USB driver with 2 irqs, an
 RT CAN driver, and say, 4 RTnet boards. That is still less than the 16
 vectors that a single level provides, so, we can probably get along with
 2 levels. Or we can use a kernel parameter.
>>>
>>> Linux - and so should we - allocates separate levels first as that
>>> provides better performance for external interrupts (need to look up the
>>> precise reason, should be documented in the x86 code). Only if levels
>>> are used up, interrupts will share them.
>>
>> I have seen this code, and I wondered if it was not, in fact, only
>> useful where the irq flow handlers were re-enabling irqs (that is, before
>> the removal of IRQF_DISABLED), but I am really not sure.
> 
> This pattern is still present with IRQF_ONESHOT, aka threaded IRQs.

No, from what I understand, it is different: with threaded IRQs, the
flow handler masks the irq and then sends the EOI. So, the APIC does not nest.
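
(For reference, that is the IRQF_ONESHOT case, i.e. handlers registered
roughly as in the hypothetical snippet below; with IRQF_ONESHOT the line
stays masked until the irq thread has returned.)

#include <linux/interrupt.h>
#include <linux/module.h>

/* Hypothetical driver snippet, for illustration only. */
static irqreturn_t demo_quick_check(int irq, void *dev_id)
{
	/* hard irq context: just acknowledge the device and defer the work */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t demo_thread_fn(int irq, void *dev_id)
{
	/* runs in a kernel thread; with IRQF_ONESHOT the (level-triggered)
	 * line stays masked until we return, while the EOI was already sent
	 * by the flow handler */
	return IRQ_HANDLED;
}

static int demo_attach(unsigned int irq, void *dev)
{
	return request_threaded_irq(irq, demo_quick_check, demo_thread_fn,
				    IRQF_ONESHOT, "demo-device", dev);
}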

If you re-enable the hardware interrupts before sending the EOI, you
cause th

Re: [Xenomai] IO-APIC latencies

2012-09-18 Thread Jan Kiszka
On 2012-09-18 11:06, Gilles Chanteperdrix wrote:
> On 09/18/2012 10:48 AM, Jan Kiszka wrote:
>> On 2012-09-17 23:50, Gilles Chanteperdrix wrote:
>>> On 09/17/2012 08:54 PM, Jan Kiszka wrote:
>>>
 On 2012-09-17 20:37, Gilles Chanteperdrix wrote:
> On 09/17/2012 08:29 PM, Jan Kiszka wrote:
>
>> On 2012-09-17 20:18, Gilles Chanteperdrix wrote:
>>> On 09/17/2012 08:15 PM, Jan Kiszka wrote:
>>>
 On 2012-09-17 20:12, Jan Kiszka wrote:
> On 2012-09-17 20:08, Gilles Chanteperdrix wrote:
>> On 09/17/2012 08:05 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-17 19:46, Gilles Chanteperdrix wrote:
 ipipe_end is a nop when called from primary domain, yes, but this 
 is not
 very different from edge irqs. Also, fasteoi become a bit like 
 MSI: in
 the same way as we can not mask MSI from primary domain, we should 
 not
 mask IO-APIC fasteoi irqs, because the cost is too prohibitive. If 
 we
 can live with MSI without masking them in primary mode, I guess we 
 can
 do the same with fasteoi irqs.
>>>
>>> MSIs are edge triggered, fasteois are still level-based. They 
>>> require
>>> masking at the point you defer them - what we do and what Linux may 
>>> even
>>> extend beyond that. If you mask them by raising the task priority, 
>>> you
>>> have to keep it raised until Linux finally handled the IRQ.
>>
>>
>> Yes.
>>
>>> Or you
>>> decide to mask it at IO-APIC level again.
>>
>>
>> We do not want that.
>>
>>> If you keep the TPR raised,
>>> you will block more than what Linux wants to block.
>>
>>
>> The point is that if the TPR stays raised, it means that the primary
>> domain has preempted Linux, so we want to keep it that way. Otherwise
>> the TPR gets lowered when Linux has handled the interrupt.
>>
>> A week-end of testing made me sure of one thing: it works. I assure 
>> you.
>
> Probably, in the absence of IRQF_ONESHOT Linux interrupts. No longer 
> if
> you face threaded IRQs - I assure you.

 Well, it may work (if mask/unmask callbacks work as native) but the
 benefit is gone: masking at IO-APIC level will be done again. Given 
 that
 threaded IRQs become increasingly popular, it will also be hard to 
 avoid
 them in common setups.
>>>
>>>
>>> The thing is, if we no longer use the IO-APIC spinlock from primary
>>> domain, we may not have to turn it into an ipipe_spinlock, and may be
>>> able to preempt the IO-APIC masking.
>>
>> That might be true - but is the latency related to the lock or the
>> hardware access? In the latter case, you will still stall the CPU on it
>> and have to isolate the load on a non-RT CPU again.
>>
>> BTW, the task priority for the RT domain is a quite important parameter.
>> If you put it too low, Linux can run out of vectors. If you put it too
>> high, the same may happen to Xenomai - on bigger boxes.
>
>
> Yes, and there are only 16 levels. But Xenomai does not need that many
> levels.

 Who is telling you this? It's part of the system setup. And that may
 lean toward RT or toward non-RT. This level should be adjusted according
 to the current allocation of Linux and the RT domain for a particular
 CPU, not hard-coded or compile-time defined.
>>>
>>>
>>> In theory, I agree; in practice, let's be crazy: assume someone would
>>> want an RT serial driver with 4 irqs, an RT USB driver with 2 irqs, an
>>> RT CAN driver, and say, 4 RTnet boards. That is still less than the 16
>>> vectors that a single level provides, so, we can probably get along with
>>> 2 levels. Or we can use a kernel parameter.
>>
>> Linux - and so should we - allocates separate levels first as that
>> provides better performance for external interrupts (need to look up the
>> precise reason, should be documented in the x86 code). Only if levels
>> are used up, interrupts will share them.
> 
> I have seen this code, and I wondered if it was not, in fact, only
> useful where the irq flow handlers were re-enabling irqs (that is, before
> the removal of IRQF_DISABLED), but I am really not sure.

This pattern is still present with IRQF_ONESHOT, aka threaded IRQs.

> 
> Also, some additional results on my atom:
> the IO-APIC is on IO controller HUB, which is... an ICH4 if I read lspci
> and the datasheets correctly. And what is more, its registers are
> accessed through the (slow) LPC bus, the ISA bus replacement. It is
> probably the reason why it is so slow.

Yes, I was expecting some architectural limitation like this.

> 
> A

Re: [Xenomai] IO-APIC latencies

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 11:06 AM, Gilles Chanteperdrix wrote:
> On 09/18/2012 10:48 AM, Jan Kiszka wrote:
>> On 2012-09-17 23:50, Gilles Chanteperdrix wrote:
>>> On 09/17/2012 08:54 PM, Jan Kiszka wrote:
>>>
 On 2012-09-17 20:37, Gilles Chanteperdrix wrote:
> On 09/17/2012 08:29 PM, Jan Kiszka wrote:
>
>> On 2012-09-17 20:18, Gilles Chanteperdrix wrote:
>>> On 09/17/2012 08:15 PM, Jan Kiszka wrote:
>>>
 On 2012-09-17 20:12, Jan Kiszka wrote:
> On 2012-09-17 20:08, Gilles Chanteperdrix wrote:
>> On 09/17/2012 08:05 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-17 19:46, Gilles Chanteperdrix wrote:
 ipipe_end is a nop when called from primary domain, yes, but this 
 is not
 very different from edge irqs. Also, fasteoi become a bit like 
 MSI: in
 the same way as we can not mask MSI from primary domain, we should 
 not
 mask IO-APIC fasteoi irqs, because the cost is too prohibitive. If 
 we
 can live with MSI without masking them in primary mode, I guess we 
 can
 do the same with fasteoi irqs.
>>>
>>> MSIs are edge triggered, fasteois are still level-based. They 
>>> require
>>> masking at the point you defer them - what we do and what Linux may 
>>> even
>>> extend beyond that. If you mask them by raising the task priority, 
>>> you
>>> have to keep it raised until Linux finally handled the IRQ.
>>
>>
>> Yes.
>>
>>> Or you
>>> decide to mask it at IO-APIC level again.
>>
>>
>> We do not want that.
>>
>>> If you keep the TPR raised,
>>> you will block more than what Linux wants to block.
>>
>>
>> The point is that if the TPR stays raised, it means that the primary
>> domain has preempted Linux, so we want to keep it that way. Otherwise
>> the TPR gets lowered when Linux has handled the interrupt.
>>
>> A week-end of testing made me sure of one thing: it works. I assure 
>> you.
>
> Probably, in the absence of IRQF_ONESHOT Linux interrupts. No longer 
> if
> you face threaded IRQs - I assure you.

 Well, it may work (if mask/unmask callbacks work as native) but the
 benefit is gone: masking at IO-APIC level will be done again. Given 
 that
 threaded IRQs become increasingly popular, it will also be hard to 
 avoid
 them in common setups.
>>>
>>>
>>> The thing is, if we no longer use the IO-APIC spinlock from primary
>>> domain, we may not have to turn it into an ipipe_spinlock, and may be
>>> able to preempt the IO-APIC masking.
>>
>> That might be true - but is the latency related to the lock or the
>> hardware access? In the latter case, you will still stall the CPU on it
>> and have to isolate the load on a non-RT CPU again.
>>
>> BTW, the task priority for the RT domain is a quite important parameter.
>> If you put it too low, Linux can run out of vectors. If you put it too
>> high, the same may happen to Xenomai - on bigger boxes.
>
>
> Yes, and there are only 16 levels. But Xenomai does not need that many
> levels.

 Who is telling you this? It's part of the system setup. And that may
 lean toward RT or toward non-RT. This level should be adjusted according
 to the current allocation of Linux and the RT domain for a particular
 CPU, not hard-coded or compile-time defined.
>>>
>>>
>>> In theory, I agree; in practice, let's be crazy: assume someone would
>>> want an RT serial driver with 4 irqs, an RT USB driver with 2 irqs, an
>>> RT CAN driver, and say, 4 RTnet boards. That is still less than the 16
>>> vectors that a single level provides, so, we can probably get along with
>>> 2 levels. Or we can use a kernel parameter.
>>
>> Linux - and so should we - allocates separate levels first as that
>> provides better performance for external interrupts (need to look up the
>> precise reason, should be documented in the x86 code). Only if levels
>> are used up, interrupts will share them.
> 
> I have seen this code, and I wondered if it was not, in fact, only
> useful where the irq flow handlers were re-enabling irqs (that is, before
> the removal of IRQF_DISABLED), but I am really not sure.
> 
> Also, some additional results on my atom:
> the IO-APIC is on IO controller HUB, which is... an ICH4 if I read lspci
> and the datasheets correctly. And what is more, its registers are
> accessed through the (slow) LPC bus, the ISA bus replacement. It is
> probably the reason why it is so slow.
> 
> And last but not least, it is not really a multi-core processor, it has
> hyper-threading. Booting the processor in UP mode yields a

Re: [Xenomai] IO-APIC latencies

2012-09-18 Thread Gilles Chanteperdrix
On 09/18/2012 10:48 AM, Jan Kiszka wrote:
> On 2012-09-17 23:50, Gilles Chanteperdrix wrote:
>> On 09/17/2012 08:54 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-17 20:37, Gilles Chanteperdrix wrote:
 On 09/17/2012 08:29 PM, Jan Kiszka wrote:

> On 2012-09-17 20:18, Gilles Chanteperdrix wrote:
>> On 09/17/2012 08:15 PM, Jan Kiszka wrote:
>>
>>> On 2012-09-17 20:12, Jan Kiszka wrote:
 On 2012-09-17 20:08, Gilles Chanteperdrix wrote:
> On 09/17/2012 08:05 PM, Jan Kiszka wrote:
>
>> On 2012-09-17 19:46, Gilles Chanteperdrix wrote:
>>> ipipe_end is a nop when called from primary domain, yes, but this 
>>> is not
>>> very different from edge irqs. Also, fasteoi become a bit like MSI: 
>>> in
>>> the same way as we can not mask MSI from primary domain, we should 
>>> not
>>> mask IO-APIC fasteoi irqs, because the cost is too prohibitive. If 
>>> we
>>> can live with MSI without masking them in primary mode, I guess we 
>>> can
>>> do the same with fasteoi irqs.
>>
>> MSIs are edge triggered, fasteois are still level-based. They require
>> masking at the point you defer them - what we do and what Linux may 
>> even
>> extend beyond that. If you mask them by raising the task priority, 
>> you
>> have to keep it raised until Linux finally handled the IRQ.
>
>
> Yes.
>
>> Or you
>> decide to mask it at IO-APIC level again.
>
>
> We do not want that.
>
>> If you keep the TPR raised,
>> you will block more than what Linux wants to block.
>
>
> The point is that if the TPR stays raised, it means that the primary
> domain has preempted Linux, so we want to keep it that way. Otherwise
> the TPR gets lowered when Linux has handled the interrupt.
>
> A week-end of testing made me sure of one thing: it works. I assure 
> you.

 Probably, in the absence of IRQF_ONESHOT Linux interrupts. No longer if
 you face threaded IRQs - I assure you.
>>>
>>> Well, it may work (if mask/unmask callbacks work as native) but the
>>> benefit is gone: masking at IO-APIC level will be done again. Given that
>>> threaded IRQs become increasingly popular, it will also be hard to avoid
>>> them in common setups.
>>
>>
>> The thing is, if we no longer use the IO-APIC spinlock from primary
>> domain, we may not have to turn it into an ipipe_spinlock, and may be
>> able to preempt the IO-APIC masking.
>
> That might be true - but is the latency related to the lock or the
> hardware access? In the latter case, you will still stall the CPU on it
> and have to isolate the load on a non-RT CPU again.
>
> BTW, the task priority for the RT domain is a quite important parameter.
> If you put it too low, Linux can run out of vectors. If you put it too
> high, the same may happen to Xenomai - on bigger boxes.


 Yes, and there are only 16 levels. But Xenomai does not need that many
 levels.
>>>
>>> Who is telling you this? It's part of the system setup. And that may
>>> lean toward RT or toward non-RT. This level should be adjusted according
>>> to the current allocation of Linux and the RT domain for a particular
>>> CPU, not hard-coded or compile-time defined.
>>
>>
>> In theory, I agree; in practice, let's be crazy: assume someone would
>> want an RT serial driver with 4 irqs, an RT USB driver with 2 irqs, an
>> RT CAN driver, and say, 4 RTnet boards. That is still less than the 16
>> vectors that a single level provides, so, we can probably get along with
>> 2 levels. Or we can use a kernel parameter.
> 
> Linux - and so should we - allocates separate levels first as that
> provides better performance for external interrupts (need to look up the
> precise reason, should be documented in the x86 code). Only if levels
> are used up, interrupts will share them.

I have seen this code, and I wondered if it was not, in fact, only
useful where the irq flow handlers were re-enabling irqs (that is, before
the removal of IRQF_DISABLED), but I am really not sure.

Also, some additional results on my atom:
the IO-APIC is on IO controller HUB, which is... an ICH4 if I read lspci
and the datasheets correctly. And what is more, its registers are
accessed through the (slow) LPC bus, the ISA bus replacement. It is
probably the reason why it is so slow.
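
To make the cost concrete: masking or unmasking a level-triggered line means
a read-modify-write of its redirection entry through the IO-APIC's indirect
index/data window, roughly as in the toy sketch below (illustration only,
not code from any tree discussed here), and on this box each of those
uncached accesses has to cross the LPC bus:

#include <stdint.h>

#define IOAPIC_REGSEL 0x00	/* index register, byte offset from the base */
#define IOAPIC_IOWIN  0x10	/* data register */

/* 'base' stands for the ioremap()ed IO-APIC window; volatile models MMIO. */
static inline uint32_t ioapic_read(volatile uint32_t *base, unsigned int reg)
{
	base[IOAPIC_REGSEL / 4] = reg;	/* select the register... */
	return base[IOAPIC_IOWIN / 4];	/* ...then read it through the window */
}

static inline void ioapic_write(volatile uint32_t *base, unsigned int reg,
				uint32_t val)
{
	base[IOAPIC_REGSEL / 4] = reg;
	base[IOAPIC_IOWIN / 4] = val;
}

/* Mask pin 'pin': set bit 16 in the low word of its redirection entry. */
static inline void ioapic_mask_pin(volatile uint32_t *base, unsigned int pin)
{
	unsigned int reg_lo = 0x10 + 2 * pin;

	ioapic_write(base, reg_lo, ioapic_read(base, reg_lo) | (1u << 16));
}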

And last but not least, it is not really a multi-core processor, it has
hyper-threading. Booting the processor in UP mode yields a much more
reasonable latency of 23us (still using the TPR), whereas the usual
latency was around 30us (running the test now, will have results at
noon), so, the real gain of using the TPR is in fact much lower than

Re: [Xenomai] IO-APIC latencies

2012-09-18 Thread Jan Kiszka
On 2012-09-17 23:50, Gilles Chanteperdrix wrote:
> On 09/17/2012 08:54 PM, Jan Kiszka wrote:
> 
>> On 2012-09-17 20:37, Gilles Chanteperdrix wrote:
>>> On 09/17/2012 08:29 PM, Jan Kiszka wrote:
>>>
 On 2012-09-17 20:18, Gilles Chanteperdrix wrote:
> On 09/17/2012 08:15 PM, Jan Kiszka wrote:
>
>> On 2012-09-17 20:12, Jan Kiszka wrote:
>>> On 2012-09-17 20:08, Gilles Chanteperdrix wrote:
 On 09/17/2012 08:05 PM, Jan Kiszka wrote:

> On 2012-09-17 19:46, Gilles Chanteperdrix wrote:
>> ipipe_end is a nop when called from primary domain, yes, but this is 
>> not
>> very different from edge irqs. Also, fasteoi become a bit like MSI: 
>> in
>> the same way as we can not mask MSI from primary domain, we should 
>> not
>> mask IO-APIC fasteoi irqs, because the cost is too prohibitive. If we
>> can live with MSI without masking them in primary mode, I guess we 
>> can
>> do the same with fasteoi irqs.
>
> MSIs are edge triggered, fasteois are still level-based. They require
> masking at the point you defer them - what we do and what Linux may 
> even
> extend beyond that. If you mask them by raising the task priority, you
> have to keep it raised until Linux finally handled the IRQ.


 Yes.

> Or you
> decide to mask it at IO-APIC level again.


 We do not want that.

> If you keep the TPR raised,
> you will block more than what Linux wants to block.


 The point is that if the TPR stays raised, it means that the primary domain
 has preempted Linux, so we want to keep it that way. Otherwise the TPR
 gets lowered when Linux has handled the interrupt.

 A week-end of testing made me sure of one thing: it works. I assure 
 you.
>>>
>>> Probably, in the absence of IRQF_ONESHOT Linux interrupts. No longer if
>>> you face threaded IRQs - I assure you.
>>
>> Well, it may work (if mask/unmask callbacks work as native) but the
>> benefit is gone: masking at IO-APIC level will be done again. Given that
>> threaded IRQs become increasingly popular, it will also be hard to avoid
>> them in common setups.
>
>
> The thing is, if we no longer use the IO-APIC spinlock from primary
> domain, we may not have to turn it into an ipipe_spinlock, and may be
> able to preempt the IO-APIC masking.

 That might be true - but is the latency related to the lock or the
 hardware access? In the latter case, you will still stall the CPU on it
 and have to isolate the load on a non-RT CPU again.

 BTW, the task priority for the RT domain is a quite important parameter.
 If you put it too low, Linux can run out of vectors. If you put it too
 high, the same may happen to Xenomai - on bigger boxes.
>>>
>>>
>>> Yes, and there are only 16 levels. But Xenomai does not need that many levels.
>>
>> Who is telling you this? It's part of the system setup. And that may
>> lean toward RT or toward non-RT. This level should be adjusted according
>> to the current allocation of Linux and the RT domain for a particular
>> CPU, not hard-coded or compile-time defined.
> 
> 
> In theory, I agree; in practice, let's be crazy: assume someone would
> want an RT serial driver with 4 irqs, an RT USB driver with 2 irqs, an
> RT CAN driver, and say, 4 RTnet boards. That is still less than the 16
> vectors that a single level provides, so, we can probably get along with
> 2 levels. Or we can use a kernel parameter.

Linux - and so should we - allocates separate levels first as that
provides better performance for external interrupts (need to look up the
precise reason, should be documented in the x86 code). Only if levels
are used up, interrupts will share them. Out of the 16 we have, about
3-4 should already be occupied by exception and system vectors. And, if
you look at today's NICs e.g., you get around 3 vectors per interface at
least. I have a more or less ordinary one here (single port, no SR-IOV)
with 8(!) per port. So interrupt vector shortage is not that far away.
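
(For readers who have not followed the whole thread: a "level" here is an
APIC priority class. As I understand the priority scheme, class = vector >> 4,
which gives 16 classes of 16 vectors each, and a TPR set to class N holds
back every vector whose class is <= N. A toy illustration, nothing
Xenomai-specific:)

#include <stdio.h>

int main(void)
{
	unsigned int tpr_class = 13;	/* hypothetical threshold for the RT domain */
	unsigned int vector, deliverable = 0;

	for (vector = 0x20; vector <= 0xff; vector++) {	/* 0x00-0x1f: exceptions */
		unsigned int cls = vector >> 4;		/* priority class 0..15 */

		if (cls > tpr_class)
			deliverable++;			/* still reaches the CPU */
	}

	printf("TPR class %u leaves %u external vectors deliverable\n",
	       tpr_class, deliverable);
	return 0;
}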

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux

___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai