Re: [Xenomai-core] Re: [BUG] deleting a T_SUSP'ed native task

2006-02-15 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
   Hi Gilles,
   
   were you able to successfully run my T_SUSP test-case after the latest
   changes? For me this code still causes fatal exceptions:
 
 This is solved with revision 565, hopefully.
 

It is.

Jan



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Jeroen Van den Keybus
In a search for the problem, I encountered some code which may be at the
root of it. In file arch/i386/kernel/io_apic.c, I see that a function
mask_and_ack_level_ioapic_vector() is being defined, whereas the original
2.6.15 code never issued any IO-APIC calls (both mask_and_ack_level_ioapic
and end_edge_ioapic are void in include/linux/).

Is it possible that this code was carried over from patches for earlier kernels (at least from 2.6.11)?

I'm going to check this now and hopefully fix it.




[ As a matter of fact, the IO-APIC shouldn't play any role in the
processing of MSI interrupts, which are addressed at (default) address
0xFEE00000 in the CPU. An exception to this are interrupts
issued by PCI cards to the IO-APIC itself (default address: 0xFEC00020)
to trigger IRQs 0-23, which is a feature Linux doesn't seem to use and
was seemingly intended for card manufacturers to support MSI without
changing the drivers. ]
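
To make the addressing concrete, here is a minimal sketch of the x86 MSI
message address layout as documented in Intel's manuals; the helper name
and the physical destination mode assumption are mine, not from this
thread:

/* Sketch: composing an x86 MSI message address (Intel SDM layout).
 * Assumes physical destination mode; msi_compose_addr() is a
 * hypothetical helper, not an existing kernel function. */
#define MSI_ADDR_BASE	0xFEE00000u	/* fixed 0xFEExxxxx region */

static inline unsigned int msi_compose_addr(unsigned int apic_id)
{
	/* bits 19:12 carry the destination (local APIC) ID */
	return MSI_ADDR_BASE | ((apic_id & 0xFF) << 12);
}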



Jeroen.


___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Kconfig inconsistencies

2006-02-15 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


I haven't worked out any patch for those issues. Actually, I only wanted
to put this patch forward when stumbling over the other:

--- ksrc/skins/native/Kconfig   (revision 564)
+++ ksrc/skins/native/Kconfig   (working copy)
@@ -127,7 +127,6 @@
 
 config XENO_OPT_NATIVE_INTR
 	bool "Interrupts"
-	default y
 	help
 
 	This option provides a simple API to deal with interrupts,

Rationale: the /default/ way of handling IRQs should be via RTDM-based
drivers. Only users who know what they are doing should leave this path
and will have to switch on this feature explicitly. If this view can be
commonly accepted, I will add some lines to the feature's help text as
well.



Agreed.




Then apply this one, please. I noticed that 2.4 does not support default
values for bool options, correct?

Jan




Index: ksrc/skins/native/Kconfig
===================================================================
--- ksrc/skins/native/Kconfig   (revision 568)
+++ ksrc/skins/native/Kconfig   (working copy)
@@ -128,11 +128,11 @@
 
 config XENO_OPT_NATIVE_INTR
 	bool "Interrupts"
-	default y
 	help
 
 	This option provides a simple API to deal with interrupts,
-	both in kernel and user-space contexts. Registry support is
-	required.
+	both in kernel and user-space contexts. Note that the preferred
+	way of implementing generic drivers usable across all Xenomai
+	interfaces is defined by the Real-Time Driver Model (RTDM).
 
 endif


Applied, thanks.

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] separate queue debugging switch

2006-02-15 Thread Philippe Gerum

Jan Kiszka wrote:

Philippe Gerum wrote:


Jan Kiszka wrote:


Philippe Gerum wrote:



Jan Kiszka wrote:



Hi,

while XENO_OPT_DEBUG is generally a useful switch for tracing potential
issues in the core and the skins, it also introduces high latencies via
the queue debugging feature (due to checks iterating over whole
queues).

This patch introduces separate control over queue debugging so that you
can have debug checks without too dramatic slowdowns.



Maybe it's time to introduce debug levels, so that we could reuse them
in order to
add more (selectable) debug instrumentation; queue debugging could then
be given a
certain level (likely something like CONFIG_XENO_DEBUG_LEVEL=8712 for
this one...), instead of going for a specific conditional each time we
introduce new checks?




Hmm, this means someone has to define what should be printed at which
level - these tend to be hard decisions... Often it is at least as useful
to have debug groups so that specific parts can be excluded from
debugging. I'm pro such groups (one would be those queues, e.g.) but
contra too many levels (2, at most 3).



Ack, selection by increasingly verbose/high-overhead groups is what I
have in mind.



At this chance, I would also suggest to introduce some ASSERT macro (per
group, per level). That could be used to instrument the core with
runtime checks. But it could also be quickly removed at compilation
time, reducing the code size (e.g. checks at the nucleus layer against
buggy skins or at the RTDM layer against rough drivers).



I'm not opposed to that, if we keep the noise/signal ratio of those
assertions at a reasonably low level throughout the code, and don't
use this to enforce silly parametrical checks.




Then let's discuss how to implement and control this. Say we have some
macros for marking code as "depends on debug group X":

#if XENO_DEBUG_GROUP(group)
code;
#endif /* XENO_DEBUG_GROUP(group) */

XENO_IF_DEBUG_GROUP(group, code);

(or do you prefer XNPOD_xxx?)



This debug code may span feature/component boundaries, so XENO_ is better.


Additionally, we could introduce that assertion macro:

XENO_ASSERT(group, expression, failure_code);

But how to control the groups now? Via Kconfig bool options?


Yes, I think so. From some specialized "Debug" menu in the generic portion. We would 
need this to keep the (unused) debug code out of production systems.


 And what groups to define? Per subsystem? Or per disturbance level (latency
regression)? If we control the group selection via Kconfig, we could
define pseudo bool options like "All debug groups" or "Low-intrusive
debug groups" that select the fitting concrete groups.



We won't be able to anticipate each and every debug spot we might need in the 
future, and in any case, debug triggers may well span multiple sub-systems. I'd go 
for defining levels depending on the thoroughness/complexity of their checks.



Alternatively, we could make the group selection a runtime switch,
controlled via a global bitmask that can be modified through /proc e.g.
Only switching off CONFIG_XENO_OPT_DEBUG would then remove all debugging
code, otherwise the execution of the checks would depend on the current
bitmask content.


We could cumulate this with the static selection.



Jan




--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] [PATCH] Shared interrupts (ready to merge)

2006-02-15 Thread Dmitry Adamushko

Hello everybody,

being inspired by the successful results of tests conducted recently by Jan & team,
I'm presenting the final (yep, yet another final :) combo-patch.

The shirq support is now optional, so that

CONFIG_XENO_OPT_SHIRQ_LEVEL - enables shirq for level-triggered interrupts;

CONFIG_XENO_OPT_SHIRQ_EDGE - likewise for edge-triggered ones.
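
For illustration, the two switches would presumably appear in the nucleus
Kconfig along these lines (a sketch only; the prompt strings and help text
are my assumption, not taken from the attached patches):

config XENO_OPT_SHIRQ_LEVEL
	bool "Shared level-triggered interrupts"
	help

	Enables interrupt-sharing support for level-triggered
	interrupt lines.

config XENO_OPT_SHIRQ_EDGE
	bool "Shared edge-triggered interrupts"
	help

	Enables interrupt-sharing support for edge-triggered
	interrupt lines.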

I addressed all the remarks and now, IMHO, it's (hopefully) ready for merge.
--
Best regards,
Dmitry Adamushko


shirq-combo.patch
Description: Binary data


shirq-KConfig.patch
Description: Binary data


ChangeLog.patch
Description: Binary data
___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] provide rtdm_mmap_to_user / rtdm_munmap

2006-02-15 Thread Rodrigo Rosenfeld Rosas
On Tuesday, 14 February 2006 22:30, Jan Kiszka wrote:

...
 You cannot mmap before you know precisely for which user this should
 take place.

 Do you mean that if I use the 'current' and current->mm struct of the
 driver, when mmapping, the user won't be able to use the returned pointer?

To mmap you need to know the target process, more precisely its mm. This
is typically derived from the invocation context of the service call
(current is a pointer to the current process). But init_module runs in
the context of modprobe. Even worse, the process later opening and
mapping some buffer may not even exist at that time!

Right, I've already verified this in practice... I'm mmapping in the open
handler for now, just for testing the mmap, but I'll change it to the ioctl
mmap handler.
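
For reference, here is a minimal sketch of doing the mapping from the
ioctl handler, assuming the rtdm_mmap_to_user() interface proposed in
this thread; the context layout, the MYDRV_MAPBUF request code and the
buffer fields are hypothetical:

#include <rtdm/rtdm_driver.h>

#define MYDRV_MAPBUF	0x100	/* hypothetical ioctl request code */

/* Hypothetical per-device state; buf must be memory suitable for
 * mapping (e.g. obtained via vmalloc() or kmalloc()). */
struct mydrv_context {
	void *buf;
	size_t buf_len;
};

static int mydrv_ioctl(struct rtdm_dev_context *context,
		       rtdm_user_info_t *user_info,
		       unsigned int request, void *arg)
{
	struct mydrv_context *ctx =
		(struct mydrv_context *)context->dev_private;
	void *uptr;
	int err;

	switch (request) {
	case MYDRV_MAPBUF:
		/* user_info identifies the calling process and thus its
		 * mm - exactly what is missing in init_module(). */
		err = rtdm_mmap_to_user(user_info, ctx->buf, ctx->buf_len,
					PROT_READ | PROT_WRITE, &uptr,
					NULL, NULL);
		if (err)
			return err;
		/* Hand the user-space address back to the caller. */
		return rtdm_safe_copy_to_user(user_info, arg,
					      &uptr, sizeof(uptr));
	default:
		return -ENOTTY;
	}
}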

It seems to work. I mapped high_memory and could read and modify it from user
space. The memory values were maintained between the many open calls. I read
and printed the values and incremented them by one. The next time, the values
shown were incremented... All seems perfect, but I still haven't tested with
real acquisition code... When I do so, I'll let you know.

I still need to test the vma ops. I think I'll test them tomorrow. I need to
begin writing an article that my advisor asked me for. I need to finish it
by March 10.

Best Regards,

Rodrigo.




___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Philippe Gerum

Jeroen Van den Keybus wrote:
Ok. I've found it. The MSI interrupt type uses its end() handler to 
acknowledge the interrupt using ack_APIC_irq() (drivers/pci/msi.c). 
Xenomai uses the ack() handler to expedite the acknowledgement of an 
IRQ. In case of MSI, ack() is a NOP.


The main problem is that Xenomai redefines ack_APIC_irq() calls (they 
become NOPs, as defined in apic.h). Maybe the ISRs used so far never 
issued ack_APIC_irq() themselves, and always used the IO-APIC (which 
contains the correct __ack_APIC_irq() call)?




Really good spot, I overlooked this issue in the MSI support; thanks for 
digging it out.


I feel a bit awkward about changing msi.c.

Any opinions about how to change Xenomai / Linux?



It's definitely an Adeos issue and msi.c needs fixing. What about this patch, do 
things improve with it (against 2.6.15-ipipe-1.2-00)?


--- msi.c~  2006-01-03 04:21:10.0 +0100
+++ msi.c   2006-02-15 21:02:03.0 +0100
@@ -149,6 +149,15 @@
msi_set_mask_bit(vector, 0);
 }

+#ifdef CONFIG_IPIPE
+static void ack_MSI_irq(unsigned int vector)
+{
+__ack_APIC_irq();
+}
+#else /* !CONFIG_IPIPE */
+#define ack_MSI_irq  mask_MSI_irq
+#endif /* CONFIG_IPIPE */
+
 static unsigned int startup_msi_irq_wo_maskbit(unsigned int vector)
 {
struct msi_desc *entry;
@@ -212,7 +221,7 @@
.shutdown   = shutdown_msi_irq,
.enable = unmask_MSI_irq,
.disable= mask_MSI_irq,
-   .ack= mask_MSI_irq,
+   .ack= ack_MSI_irq,
.end= end_msi_irq_w_maskbit,
.set_affinity   = set_msi_irq_affinity
 };
@@ -228,7 +237,7 @@
.shutdown   = shutdown_msi_irq,
.enable = unmask_MSI_irq,
.disable= mask_MSI_irq,
-   .ack= mask_MSI_irq,
+   .ack= ack_MSI_irq,
.end= end_msi_irq_w_maskbit,
.set_affinity   = set_msi_irq_affinity
 };

--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Philippe Gerum

Jeroen Van den Keybus wrote:

It's definitely an Adeos issue and msi.c needs fixing. What about
this patch, do
things improve with it (against 2.6.15-ipipe-1.2-00)?

I'm going to try the patch later on. I currently have a 'fully 
instrumented' kernel against which this patch would never work... I'm 
keeping that kernel for now, because I'm also investigating why MSI 
doesn't work under RTDM either. It's merely a coincidence that the above 
bug (MSI interrupts from Linux devices getting blocked) emerged and 
produced exactly the same behaviour (system hanging).


But, normally, that path is not used in RT mode, is it? So something 
else is getting in the way.


At first look, I'm a bit wary of touching that msi.c. I was rather 
thinking of kicking out __ack_APIC_irq() altogether? Or is that not 
possible? (I see only problems in p4.c and smp.c - but I haven't looked 
at these very closely.)





We do need __ack_APIC_irq() to run the actual APIC ack code all over the place in 
the APIC/IO-APIC support code, so that former regular uses of ack_APIC_irq() can 
be left untouched. Adeos already changes significant areas within Linux's innards 
in order to control its interrupt sub-system anyway, which in turn hides the gory 
details of interrupt prioritization from client software like Xenomai. 
drivers/pci/msi.c simply brings a new set of interrupt controllers we need to make 
Adeos-aware, just as has been done for the i8259, the LAPIC and the IO-APIC 
support code.


--

Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


RE: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Russell Johnson


 It's definitely an Adeos issue and msi.c needs fixing. What 
 about this patch, do 
 things improve with it (against 2.6.15-ipipe-1.2-00)?

I'm currently patching my setup, which started with ipipe-2.6.14-i386-1.0-12.
I've been having no luck with any MSI devices in the system, even if they
have supposedly had MSI disabled.  I'll post my testing results in the next
day or so.

Russ



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [PATCH] separate queue debugging switch

2006-02-15 Thread Jan Kiszka
Philippe Gerum wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:

 Jan Kiszka wrote:

 Philippe Gerum wrote:


 Jan Kiszka wrote:


 Hi,

 while XENO_OPT_DEBUG is generally a useful switch for tracing
 potential
 issues in the core and the skins, it also introduces high
 latencies via
 the queue debugging feature (due to checks iterating over whole
 queues).

 This patch introduces separate control over queue debugging so
 that you
 can have debug checks without too dramatic slowdowns.


 Maybe it's time to introduce debug levels, so that we could reuse them
 in order to
 add more (selectable) debug instrumentation; queue debugging could
 then
 be given a
 certain level (likely something like CONFIG_XENO_DEBUG_LEVEL=8712 for
 this one...), instead of going for a specific conditional each time we
 introduce new checks?



 Hmm, this means someone has to define what should be printed at which
 level - these tend to be hard decisions... Often it is at least as
 useful to have debug groups so that specific parts can be excluded from
 debugging. I'm pro such groups (one would be those queues, e.g.) but
 contra too many levels (2, at most 3).


 Ack, selection by increasingly verbose/high-overhead groups is what I
 have in mind.


 At this chance, I would also suggest to introduce some ASSERT macro
 (per
 group, per level). That could be used to instrument the core with
 runtime checks. But it could also be quickly removed at compilation
 time, reducing the code size (e.g. checks at the nucleus layer against
 buggy skins or at the RTDM layer against rough drivers).


 I'm not opposed to that, if we keep the noise/signal ratio of those
 assertions at a reasonably low level throughout the code, and don't
 use this to enforce silly parametrical checks.



 Then let's discuss how to implement and control this. Say we have some
 macros for marking code as "depends on debug group X":

 #if XENO_DEBUG_GROUP(group)
 code;
 #endif /* XENO_DEBUG_GROUP(group) */

 XENO_IF_DEBUG_GROUP(group, code);

 (or do you prefer XNPOD_xxx?)

 
 This debug code may span feature/component boundaries, so XENO_ is better.
 
 Additionally, we could introduce that assertion macro:

 XENO_ASSERT(group, expression, failure_code);

 But how to control the groups now? Via Kconfig bool options?
 
 Yes, I think so. From some specialized "Debug" menu in the generic
 portion. We would need this to keep the (unused) debug code out of
 production systems.
 
  And what
 groups to define? Per subsystem? Or per disturbance level (latency
 regression)? If we control the group selection via Kconfig, we could
 define pseudo bool options like "All debug groups" or "Low-intrusive
 debug groups" that select the fitting concrete groups.

 
 We won't be able to anticipate each and every debug spot we might
 need in the future, and in any case, debug triggers may well span
 multiple sub-systems. I'd go for defining levels depending on the
 thoroughness/complexity of their checks.
 

To keep it simple:

XNDBG_LIGHT /* simple checks with low constant overhead */
XNDBG_HEAVY /* complex checks with high or unknown overhead */

Those two could become #defines and would have to be provided as the
first argument to our debug macros.

Or we directly merge the attribute into the macro name:

XENO_DEBUG_LIGHT, XENO_IF_DEBUG_LIGHT(), XENO_ASSERT_LIGHT()
XENO_DEBUG_HEAVY, XENO_IF_DEBUG_HEAVY(), XENO_ASSERT_HEAVY()
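
As a minimal sketch of how the compile-time variant might look (the
CONFIG_XENO_OPT_DEBUG_HEAVY option name and the printk-based reporting
are my assumptions, not existing code):

#ifdef CONFIG_XENO_OPT_DEBUG_HEAVY
#define XENO_DEBUG_HEAVY 1
#else
#define XENO_DEBUG_HEAVY 0
#endif

/* When the group is off, the branch is constant-false and the
 * compiler discards it, keeping debug code out of production. */
#define XENO_IF_DEBUG_HEAVY(code)			\
	do {						\
		if (XENO_DEBUG_HEAVY) {			\
			code;				\
		}					\
	} while (0)

#define XENO_ASSERT_HEAVY(expr, failure_code)		\
	do {						\
		if (XENO_DEBUG_HEAVY && !(expr)) {	\
			printk(KERN_ERR "Xenomai: assertion failed: %s\n", #expr); \
			failure_code;			\
		}					\
	} while (0)

The runtime bitmask idea below could later be folded in by ANDing a test
of a global mask into the same condition.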

 Alternatively, we could make the group selection a runtime switch,
 controlled via a global bitmask that can be modified through /proc e.g.
 Only switching off CONFIG_XENO_OPT_DEBUG would then remove all debugging
 code, otherwise the execution of the checks would depend on the current
 bitmask content.
 
 We could cumulate this with the static selection.
 

Then this is also a perfect add-on - for later work. :)

Jan



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Jeroen Van den Keybus
At second sight, the patches are ok.

I've boiled the problem down to a lack of EOI. If I do __ack_APIC_irq()
by hand after the desc->handler->end() has run, the system no
longer freezes.

I'm finding out why that is.

Jeroen.


Re: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Jeroen Van den Keybus
Ok. I've found it. The MSI interrupt type uses its end() handler to
acknowledge the interrupt using ack_APIC_irq() (drivers/pci/msi.c).
Xenomai uses the ack() handler to expedite the acknowledgement of an
IRQ. In case of MSI, ack() is a NOP.

The main problem is that Xenomai redefines ack_APIC_irq() calls (they
become NOPs, as defined in apic.h). Maybe the ISRs used so far never
issued ack_APIC_irq() themselves, and always used the IO-APIC (which
contains the correct __ack_APIC_irq() call)?

I feel a bit awkward about changing msi.c.

Any opinions about how to change Xenomai / Linux?



Jeroen.



Re: [Xenomai-core] Handling PCI MSI interrupts

2006-02-15 Thread Jeroen Van den Keybus

I'm also investigating
why MSI also doesn't work under RTDM. It's merely a coincidence that
the above bug (MSI interrupts from Linux devices getting blocked)
emerged and produced exactly the same behaviour (system hanging).

It turns out not to be coincidental. rtdm_irq_request() (through
passing iack=NULL to virtualize_irq()) uses the default Linux driver as
an acknowledgement routine for that interrupt. So fixing regular Linux
interrupts also fixed RTDM operation.

I'll have to sleep on the best solution in msi.c. For now, I have
implemented an __ack_APIC_irq() call in a routine ack_msi_irq_wo_maskbit().
How do I make a patch for that?
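
A minimal sketch of what that routine might look like, mirroring
Philippe's patch above (placement inside drivers/pci/msi.c and the
CONFIG_IPIPE guard are my assumptions, not tested code):

#ifdef CONFIG_IPIPE
/* Immediately EOI the vector at the local APIC instead of masking;
 * vector is unused since the local APIC EOI takes no argument. */
static void ack_msi_irq_wo_maskbit(unsigned int vector)
{
	__ack_APIC_irq();
}
#endif /* CONFIG_IPIPE */

To produce the patch itself: keep a pristine copy of the file and run
diff -u msi.c.orig msi.c > msi-ack.patch, then post the result inline.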

As for the bitmasked varieties, I need to be careful here. First I'll
have a look at the details of MSI with maskbits. Some of this stuff has
actually been devised to allow deferral of IRQ acknowledgement. I
wouldn't want to break that feature.

Anyway, with this simple fix, I'm finally able to use my Dell GX270 without IRQ sharing for the first time :-).


Jeroen.