Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 09:58:34PM +0200, Jan Kiszka wrote:
> Sorry if I wasn't clear enough. I can only recommend to re-read the
> Xenomai code, specifically xnarch_switch_to (hint: clts, stts...), and
> then compare to the upstream kernel again and the commit I cited. It's
> probably best you figure this out yourself.

> >> Commit 304bceda6a upstream:
> >>
> >> x86, fpu: use non-lazy fpu restore for processors supporting xsave
> >> 
> >> Fundamental model of the current Linux kernel is to lazily init and
> >> restore FPU instead of restoring the task state during context switch.
> >> This changes that fundamental lazy model to the non-lazy model for
> >> the processors supporting xsave feature.

I do not think you were unclear. The commit message is clear enough
and in line with how I have seen Linux switch FPU context on x86
for years: it switched out the FPU context during the context
switch to avoid SMP races, and switched it back in upon FPU faults
(at the very beginning at least, with refinements over time, such
as a counter that moved to eager restoring after 5 consecutive
context switches where the FPU had been used), hence "lazily
restore FPU" in the commit message; that was lazy switching.
Xenomai, by contrast, saves and restores the FPU context
unconditionally (provided the XNFPU bit is set) at every context
switch.

Look at Xenomai 2.1 code:
https://git.xenomai.org/xenomai-2.0.git/tree/nucleus/pod.c#n2139

This is __xnpod_switch_fpu, the function which switches the FPU
context.

And this function is called here:
https://git.xenomai.org/xenomai-2.0.git/tree/nucleus/pod.c#n2446

That is, in the middle of the context switch. No fault. Nothing
lazy. Ergo eager.

The particular instructions used (clts, stts) have nothing to do
with whether the switching is eager or not. Yes, clts and stts may
be slow on new high-end hardware, but as I said, we do not care
about new high-end hardware, because it has no valid reason
whatsoever to be running a dual kernel.

-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
https://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 09:04:55PM +0200, Jan Kiszka wrote:
> On 2016-08-01 19:33, Gilles Chanteperdrix wrote:
> > On Mon, Aug 01, 2016 at 07:18:36PM +0200, Jan Kiszka wrote:
> >> On 2016-08-01 16:05, Gilles Chanteperdrix wrote:
> >>> On Mon, Aug 01, 2016 at 02:58:54PM +0200, Jan Kiszka wrote:
> >>>> On 2016-08-01 14:35, Gilles Chanteperdrix wrote:
> >>>>> On Mon, Aug 01, 2016 at 01:29:46PM +0200, Henning Schild wrote:
> >>>>>> Hey Gilles,
> >>>>>>
> >>>>>> i just checked out the new release, which came as a surprise. Thanks
> >>>>>> for publishing that!
> >>>>>>
> >>>>>> Some of the patches prepare for kernel 4.0+ but one specifically makes
> >>>>>> sure the combination 4.0+ and 2.6.5 wont work.
> >>>>>>
> >>>>>> Am Sat, 9 Jul 2016 15:29:49 +0200
> >>>>>> schrieb Gilles Chanteperdrix :
> >>>>>>
> >>>>>> ...
> >>>>>>>   hal/x86: forbid compilation with Linux 4.0+
> >>>>>> ...
> >>>>>>
> >>>>>> Could you please provide details on how the FPU support is broken. I am
> >>>>>> successfully using xenomai 2.6 with 4.1.18 for some time now. I am not
> >>>>>> sure whether the applications on top use the FPU and if so, if there
> >>>>>> are multiple FPU-users per core.
> >>>>>
> >>>>> The FPU support is broken in the way it detects that Linux was using
> >>>>> FPU in kernel-space (for RAID, or memcpy on oldish AMD processors,
> >>>>> geode, K6, etc...) when Linux gets preempted. We can no longer rely
> >>>>> on checking the bit TS in CR0, and need instead to use an accessor
> >>>>> that was added in the I-pipe patch to know that. For details, see
> >>>>> the changes that were made to FPU support for x86 in Xenomai 3.x.
> >>>>>
> >>>>
> >>>> Are we doing eager switching there already? Would allow to use things
> >>>> as-is (i.e. without having to trap FPU accesses) on CPUs that are recent
> >>>> enough to do this switching lazily in hardware.
> >>>
> >>> The problem has nothing to do with trapping FPU accesses or eager
> >>> switching. Xenomai has always done eager switching. Xenomai 3 traps
> >>> fpu access in order to arm the XNFPU bit on first fpu use and then
> >>> does eager switching as usual.
> >>
> >> Eager means always switch on flipping the context, irrespective of the
> >> previous usage. There is no trapping of FPU usage anymore then. Hardware
> >> does this much faster today when using xsave. Therefore upstream moved
> >> away from the lazy pattern apparently also still used in Xenomai.
> > 
> > I know what eager means and Xenomai has always switched eagerly. But
> > the difference with Linux is that a Xenomai task has an XNFPU bit
> > indicating whether it wants to use the FPU or not. And obviously we
> > do not switch eagerly FPU for tasks which do not have the XNFPU bit.
> > 
> > Now, in Xenomai 2.x the XNFPU bit was systematically set for
> > user-space tasks, so that Xenomai always switched eagerly FPU for
> > user-space tasks. With Xenomai 3, the change I have made for x86 and
> > ARM is that a user-space task starts without the XNFPU bit, and if
> > it uses the FPU once, it gets the trap, the XNFPU is set, then it
> > gets eager switches forever after. So, it only gets a trap once.
> > Now, that means that you have to pay the price of a fault, once. So,
> > to do even better, Philippe has proposed to add the XNFPU bit to
> > pthread_set_mode_np/rt_task_set_mode so that a user-space task can
> > forcibly set the XNFPU bit.
> > 
> > But clearly, Xenomai switches FPU eagerly, and always has.
> > 
> > I am surprised to have to explain all this to you, I thought this
> > was common knowledge.
> > 
> 
> Commit 304bceda6a upstream:
> 
> x86, fpu: use non-lazy fpu restore for processors supporting xsave
> 
> Fundamental model of the current Linux kernel is to lazily init and
> restore FPU instead of restoring the task state during context switch.
> This changes that fundamental lazy model to the non-lazy model for
> the processors supporting xsave feature.
> 
> Reasons driving this model change are:
> 
> i. Newer processors supp

Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 07:18:36PM +0200, Jan Kiszka wrote:
> On 2016-08-01 16:05, Gilles Chanteperdrix wrote:
> > On Mon, Aug 01, 2016 at 02:58:54PM +0200, Jan Kiszka wrote:
> >> On 2016-08-01 14:35, Gilles Chanteperdrix wrote:
> >>> On Mon, Aug 01, 2016 at 01:29:46PM +0200, Henning Schild wrote:
> >>>> Hey Gilles,
> >>>>
> >>>> i just checked out the new release, which came as a surprise. Thanks
> >>>> for publishing that!
> >>>>
> >>>> Some of the patches prepare for kernel 4.0+ but one specifically makes
> >>>> sure the combination 4.0+ and 2.6.5 wont work.
> >>>>
> >>>> Am Sat, 9 Jul 2016 15:29:49 +0200
> >>>> schrieb Gilles Chanteperdrix :
> >>>>
> >>>> ...
> >>>>>   hal/x86: forbid compilation with Linux 4.0+
> >>>> ...
> >>>>
> >>>> Could you please provide details on how the FPU support is broken. I am
> >>>> successfully using xenomai 2.6 with 4.1.18 for some time now. I am not
> >>>> sure whether the applications on top use the FPU and if so, if there
> >>>> are multiple FPU-users per core.
> >>>
> >>> The FPU support is broken in the way it detects that Linux was using
> >>> FPU in kernel-space (for RAID, or memcpy on oldish AMD processors,
> >>> geode, K6, etc...) when Linux gets preempted. We can no longer rely
> >>> on checking the bit TS in CR0, and need instead to use an accessor
> >>> that was added in the I-pipe patch to know that. For details, see
> >>> the changes that were made to FPU support for x86 in Xenomai 3.x.
> >>>
> >>
> >> Are we doing eager switching there already? Would allow to use things
> >> as-is (i.e. without having to trap FPU accesses) on CPUs that are recent
> >> enough to do this switching lazily in hardware.
> > 
> > The problem has nothing to do with trapping FPU accesses or eager
> > switching. Xenomai has always done eager switching. Xenomai 3 traps
> > fpu access in order to arm the XNFPU bit on first fpu use and then
> > does eager switching as usual.
> 
> Eager means always switch on flipping the context, irrespective of the
> previous usage. There is no trapping of FPU usage anymore then. Hardware
> does this much faster today when using xsave. Therefore upstream moved
> away from the lazy pattern apparently also still used in Xenomai.

Besides, optimizing for the high-end case does not really make
sense for Xenomai. We prefer to optimize for the low-end case, even
if that has a small cost on high-end hardware. We target a good
worst case, remember?

-- 
Gilles.



Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 07:18:36PM +0200, Jan Kiszka wrote:
> On 2016-08-01 16:05, Gilles Chanteperdrix wrote:
> > On Mon, Aug 01, 2016 at 02:58:54PM +0200, Jan Kiszka wrote:
> >> On 2016-08-01 14:35, Gilles Chanteperdrix wrote:
> >>> On Mon, Aug 01, 2016 at 01:29:46PM +0200, Henning Schild wrote:
> >>>> Hey Gilles,
> >>>>
> >>>> i just checked out the new release, which came as a surprise. Thanks
> >>>> for publishing that!
> >>>>
> >>>> Some of the patches prepare for kernel 4.0+ but one specifically makes
> >>>> sure the combination 4.0+ and 2.6.5 wont work.
> >>>>
> >>>> Am Sat, 9 Jul 2016 15:29:49 +0200
> >>>> schrieb Gilles Chanteperdrix :
> >>>>
> >>>> ...
> >>>>>   hal/x86: forbid compilation with Linux 4.0+
> >>>> ...
> >>>>
> >>>> Could you please provide details on how the FPU support is broken. I am
> >>>> successfully using xenomai 2.6 with 4.1.18 for some time now. I am not
> >>>> sure whether the applications on top use the FPU and if so, if there
> >>>> are multiple FPU-users per core.
> >>>
> >>> The FPU support is broken in the way it detects that Linux was using
> >>> FPU in kernel-space (for RAID, or memcpy on oldish AMD processors,
> >>> geode, K6, etc...) when Linux gets preempted. We can no longer rely
> >>> on checking the bit TS in CR0, and need instead to use an accessor
> >>> that was added in the I-pipe patch to know that. For details, see
> >>> the changes that were made to FPU support for x86 in Xenomai 3.x.
> >>>
> >>
> >> Are we doing eager switching there already? Would allow to use things
> >> as-is (i.e. without having to trap FPU accesses) on CPUs that are recent
> >> enough to do this switching lazily in hardware.
> > 
> > The problem has nothing to do with trapping FPU accesses or eager
> > switching. Xenomai has always done eager switching. Xenomai 3 traps
> > fpu access in order to arm the XNFPU bit on first fpu use and then
> > does eager switching as usual.
> 
> Eager means always switch on flipping the context, irrespective of the
> previous usage. There is no trapping of FPU usage anymore then. Hardware
> does this much faster today when using xsave. Therefore upstream moved
> away from the lazy pattern apparently also still used in Xenomai.

I know what eager means, and Xenomai has always switched eagerly.
But the difference with Linux is that a Xenomai task has an XNFPU
bit indicating whether it wants to use the FPU or not. And
obviously we do not eagerly switch the FPU for tasks which do not
have the XNFPU bit set.

Now, in Xenomai 2.x the XNFPU bit was systematically set for
user-space tasks, so Xenomai always switched the FPU eagerly for
user-space tasks. With Xenomai 3, the change I made for x86 and
ARM is that a user-space task starts without the XNFPU bit; if it
uses the FPU once, it takes the trap, XNFPU is set, and it gets
eager switches forever after. So it only takes one trap, which
means you pay the price of a fault exactly once. To do even better,
Philippe has proposed adding the XNFPU bit to
pthread_set_mode_np/rt_task_set_mode so that a user-space task can
forcibly set it.

But clearly, Xenomai switches FPU eagerly, and always has.

I am surprised to have to explain all this to you, I thought this
was common knowledge.

-- 
Gilles.



Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 02:58:54PM +0200, Jan Kiszka wrote:
> On 2016-08-01 14:35, Gilles Chanteperdrix wrote:
> > On Mon, Aug 01, 2016 at 01:29:46PM +0200, Henning Schild wrote:
> >> Hey Gilles,
> >>
> >> i just checked out the new release, which came as a surprise. Thanks
> >> for publishing that!
> >>
> >> Some of the patches prepare for kernel 4.0+ but one specifically makes
> >> sure the combination 4.0+ and 2.6.5 wont work.
> >>
> >> Am Sat, 9 Jul 2016 15:29:49 +0200
> >> schrieb Gilles Chanteperdrix :
> >>
> >> ...
> >>>   hal/x86: forbid compilation with Linux 4.0+
> >> ...
> >>
> >> Could you please provide details on how the FPU support is broken. I am
> >> successfully using xenomai 2.6 with 4.1.18 for some time now. I am not
> >> sure whether the applications on top use the FPU and if so, if there
> >> are multiple FPU-users per core.
> > 
> > The FPU support is broken in the way it detects that Linux was using
> > FPU in kernel-space (for RAID, or memcpy on oldish AMD processors,
> > geode, K6, etc...) when Linux gets preempted. We can no longer rely
> > on checking the bit TS in CR0, and need instead to use an accessor
> > that was added in the I-pipe patch to know that. For details, see
> > the changes that were made to FPU support for x86 in Xenomai 3.x.
> > 
> 
> Are we doing eager switching there already? Would allow to use things
> as-is (i.e. without having to trap FPU accesses) on CPUs that are recent
> enough to do this switching lazily in hardware.

The problem has nothing to do with trapping FPU accesses or eager
switching. Xenomai has always done eager switching. Xenomai 3 traps
FPU access in order to arm the XNFPU bit on first FPU use, and then
does eager switching as usual.

-- 
Gilles.



Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 01:29:46PM +0200, Henning Schild wrote:
> Hey Gilles,
> 
> i just checked out the new release, which came as a surprise. Thanks
> for publishing that!
> 
> Some of the patches prepare for kernel 4.0+ but one specifically makes
> sure the combination 4.0+ and 2.6.5 wont work.
> 
> Am Sat, 9 Jul 2016 15:29:49 +0200
> schrieb Gilles Chanteperdrix :
> 
> ...
> >   hal/x86: forbid compilation with Linux 4.0+

Actually, the commit message is prefixed with hal/x86, so it only
makes sure that the combination of 4.0+ and 2.6.5 does not work on
x86. It does not prevent that combination from working on other
architectures.

-- 
Gilles.



Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 02:03:51PM +0200, Henning Schild wrote:
> Am Mon, 1 Aug 2016 13:29:46 +0200
> schrieb Henning Schild :
> 
> > Hey Gilles,
> > 
> > i just checked out the new release, which came as a surprise. Thanks
> > for publishing that!
> > 
> > Some of the patches prepare for kernel 4.0+ but one specifically makes
> > sure the combination 4.0+ and 2.6.5 wont work.
> > 
> > Am Sat, 9 Jul 2016 15:29:49 +0200
> > schrieb Gilles Chanteperdrix :
> > 
> > ...
> > >   hal/x86: forbid compilation with Linux 4.0+  
> > ...
> > 
> > Could you please provide details on how the FPU support is broken. I
> > am successfully using xenomai 2.6 with 4.1.18 for some time now. I am
> > not sure whether the applications on top use the FPU and if so, if
> > there are multiple FPU-users per core.
> 
> In continous integration the switchtest FPU test never triggered, not
> in 64 nor in 32-bit mode.

That is probably because you have enabled one of the two options
known to cause Linux to use the FPU in kernel-space (RAID, or a K6
processor), which prevents the switchtest program from itself using
the FPU in Linux kernel-space and testing that we can preempt the
kernel in the middle of such usage. If that is the case, the kernel
logs of your continuous integration runs should contain:

Warning: Linux is compiled to use FPU in kernel-space.
For this reason, switchtest can not test using FPU in Linux
kernel-space.

The kernels used for continuous integration should be compiled
without RAID or X86_USE_3DNOW support, so that switchtest can test
preempting Linux in the middle of kernel-space FPU use; if that
test passes, it should normally be safe to enable RAID or
X86_USE_3DNOW in the production kernel.

If the test does not pass, and you enable, say, RAID in your
production kernel and actually use it, you can expect data
corruption.

So, all in all, having that thing broken is better handled with a
#error than with silent data corruption.

There may be other workarounds, though, such as forcing Linux to
use the integer implementation of the RAID code rather than the
FPU-based one.

-- 
Gilles.



Re: [Xenomai] [Announce] Xenomai 2.6.5

2016-08-01 Thread Gilles Chanteperdrix
On Mon, Aug 01, 2016 at 01:29:46PM +0200, Henning Schild wrote:
> Hey Gilles,
> 
> i just checked out the new release, which came as a surprise. Thanks
> for publishing that!
> 
> Some of the patches prepare for kernel 4.0+ but one specifically makes
> sure the combination 4.0+ and 2.6.5 wont work.
> 
> Am Sat, 9 Jul 2016 15:29:49 +0200
> schrieb Gilles Chanteperdrix :
> 
> ...
> >   hal/x86: forbid compilation with Linux 4.0+
> ...
> 
> Could you please provide details on how the FPU support is broken. I am
> successfully using xenomai 2.6 with 4.1.18 for some time now. I am not
> sure whether the applications on top use the FPU and if so, if there
> are multiple FPU-users per core.

The FPU support is broken in the way it detects that Linux was using
FPU in kernel-space (for RAID, or memcpy on oldish AMD processors,
geode, K6, etc...) when Linux gets preempted. We can no longer rely
on checking the bit TS in CR0, and need instead to use an accessor
that was added in the I-pipe patch to know that. For details, see
the changes that were made to FPU support for x86 in Xenomai 3.x.

-- 
Gilles.



Re: [Xenomai] 'select' for RTCAN sockets

2016-07-22 Thread Gilles Chanteperdrix
On Fri, Jul 22, 2016 at 09:25:26AM +, Alexey Gerasev wrote:
> On Thu, Jul 21, 2016 at 3:47 PM Gilles Chanteperdrix 
> wrote:
> 
> > On Thu, Jul 21, 2016 at 09:37:28AM +, Alexey Gerasev wrote:
> > > Thank you for your reply!
> > > I've tried to answer your questions and modify the patch.
> > >
> > > Sorry for the formatting of the message - I've copied it from the archive
> > > and formatted it manually because I've not received it in my mail.
> > >
> > > On Mon, Jul 18, 2016 at 11:39:47, Gilles Chanteperdrix wrote:
> > > > On Mon, Jul 18, 2016 at 09:28:50AM +, Alexey Gerasev wrote:
> > > > > Hello, everybody!
> > > > >
> > > > > I am trying to port existing program from Linux to Xenomai. This
> > program
> > > > > works with CAN devices via CAN raw sockets in single thread, and
> > > 'select'
> > > > > syscall is used for read/write multiplexing.
> > > > > I use Xenomai-3 branch 'stable-3.0.x'. When I try to use 'select'
> > > syscall
> > > > > on RTCAN sockets, it returns -1 and 'errno' is set to 19 - "No such
> > > > > device". I've found that there is no handler for 'select' event in
> > RTCAN
> > > > > driver in 'xenomai-3/kernel/drivers/can/rtcan_raw.c' in
> > 'rtcan_driver'
> > > at
> > > > > line 978.
> > > > > Does some fundamental reason for absence of 'rtcan_driver.select'
> > > handler
> > > > > for RTCAN sockets exist? Or it wasn't implemented just because
> > 'select'
> > > > > syscall is rarely used?
> > > > >
> > > > > I have also tried to implement 'rtcan_driver.select' handler by
> > myself.
> > > The
> > > > > patch is attached to this message.
> > > > > For XNSELECT_READ I use 'rtdm_sem_select' on socket semaphore like
> > it is
> > > > > used in RTNet TCP driver in
> > > > > 'xenomai-3/kernel/drivers/net/stack/ipv4/tcp/tcp.c' at line 2083. For
> > > > > XNSELECT_WRITE I also use 'rtdm_sem_select' on device TX semaphore
> > like
> > > in
> > > > > 'rtcan_raw_sendmsg' function.
> > > > > I have tested this implementation and it seems to be working.
> > > > > Please review it.
> > > > >
> > > > > Best regards,
> > > > > Alexey
> > > > > -- next part --
> > > > > A non-text attachment was scrubbed...
> > > > > Name: rtcan-select.patch
> > > >
> > > > Ok, please, next time, could you put the patch inline in your
> > > > e-mail? Anyway, I just have a comment:
> > > >
> > > > diff --git a/kernel/drivers/can/rtcan_raw.c
> > > b/kernel/drivers/can/rtcan_raw.c
> > > > index 693b927..8ecb5e7 100644
> > > > --- a/kernel/drivers/can/rtcan_raw.c
> > > > +++ b/kernel/drivers/can/rtcan_raw.c
> > > > @@ -964,6 +964,38 @@ ssize_t rtcan_raw_sendmsg(struct rtdm_fd *fd,
> > > >  return ret;
> > > >  }
> > > >
> > > > +/***
> > > > + *  rtcan_raw_select
> > > > + */
> > > > +static int rtcan_raw_select(struct rtdm_fd *fd,
> > > > +rtdm_selector_t *selector,
> > > > +enum rtdm_selecttype type,
> > > > +unsigned fd_index)
> > > > +{
> > > > +struct rtcan_socket *sock = rtdm_fd_to_private(fd);
> > > > +
> > > > +switch (type) {
> > > > +case XNSELECT_READ:
> > > > +return rtdm_sem_select(&sock->recv_sem, selector,
> > XNSELECT_READ,
> > > fd_index);
> > > > +case XNSELECT_WRITE:
> > > > +{
> > > > +struct rtcan_device *dev;
> > > > +int ifindex = 0;
> > > > +
> > > > +if (!(ifindex = atomic_read(&sock->ifindex)))
> > > > +return -ENXIO;
> > > >
> > > > Why do we need that? I mean, either the fd reference count is
> > > > sufficient for the interface to be linked unequivocally with the file
> > > > descriptor, and this dance is useless, or not, and the fact that you
> > > > use atomic_read will not make the race go away: ifindex may well
> > > > change between the time you read it, albeit atomically, and the time
> > > > you use it.

Re: [Xenomai] 'select' for RTCAN sockets

2016-07-21 Thread Gilles Chanteperdrix
On Thu, Jul 21, 2016 at 09:37:28AM +, Alexey Gerasev wrote:
> Thank you for your reply!
> I've tried to answer your questions and modify the patch.
> 
> Sorry for the formatting of the message - I've copied it from the archive and
> formatted it manually because I've not received it in my mail.
> 
> On Mon, Jul 18, 2016 at 11:39:47, Gilles Chanteperdrix wrote:
> > On Mon, Jul 18, 2016 at 09:28:50AM +, Alexey Gerasev wrote:
> > > Hello, everybody!
> > >
> > > I am trying to port existing program from Linux to Xenomai. This program
> > > works with CAN devices via CAN raw sockets in single thread, and
> 'select'
> > > syscall is used for read/write multiplexing.
> > > I use Xenomai-3 branch 'stable-3.0.x'. When I try to use 'select'
> syscall
> > > on RTCAN sockets, it returns -1 and 'errno' is set to 19 - "No such
> > > device". I've found that there is no handler for 'select' event in RTCAN
> > > driver in 'xenomai-3/kernel/drivers/can/rtcan_raw.c' in 'rtcan_driver'
> at
> > > line 978.
> > > Does some fundamental reason for absence of 'rtcan_driver.select'
> handler
> > > for RTCAN sockets exist? Or it wasn't implemented just because 'select'
> > > syscall is rarely used?
> > >
> > > I have also tried to implement 'rtcan_driver.select' handler by myself.
> The
> > > patch is attached to this message.
> > > For XNSELECT_READ I use 'rtdm_sem_select' on socket semaphore like it is
> > > used in RTNet TCP driver in
> > > 'xenomai-3/kernel/drivers/net/stack/ipv4/tcp/tcp.c' at line 2083. For
> > > XNSELECT_WRITE I also use 'rtdm_sem_select' on device TX semaphore like
> in
> > > 'rtcan_raw_sendmsg' function.
> > > I have tested this implementation and it seems to be working.
> > > Please review it.
> > >
> > > Best regards,
> > > Alexey
> > > -- next part --
> > > A non-text attachment was scrubbed...
> > > Name: rtcan-select.patch
> >
> > Ok, please, next time, could you put the patch inline in your
> > e-mail? Anyway, I just have a comment:
> >
> > diff --git a/kernel/drivers/can/rtcan_raw.c
> b/kernel/drivers/can/rtcan_raw.c
> > index 693b927..8ecb5e7 100644
> > --- a/kernel/drivers/can/rtcan_raw.c
> > +++ b/kernel/drivers/can/rtcan_raw.c
> > @@ -964,6 +964,38 @@ ssize_t rtcan_raw_sendmsg(struct rtdm_fd *fd,
> >  return ret;
> >  }
> >
> > +/***
> > + *  rtcan_raw_select
> > + */
> > +static int rtcan_raw_select(struct rtdm_fd *fd,
> > +rtdm_selector_t *selector,
> > +enum rtdm_selecttype type,
> > +unsigned fd_index)
> > +{
> > +struct rtcan_socket *sock = rtdm_fd_to_private(fd);
> > +
> > +switch (type) {
> > +case XNSELECT_READ:
> > +return rtdm_sem_select(&sock->recv_sem, selector, XNSELECT_READ,
> fd_index);
> > +case XNSELECT_WRITE:
> > +{
> > +struct rtcan_device *dev;
> > +int ifindex = 0;
> > +
> > +if (!(ifindex = atomic_read(&sock->ifindex)))
> > +return -ENXIO;
> >
> > Why do we need that? I mean, either the fd reference count is
> > sufficient for the interface to be linked unequivocally with the file
> > descriptor, and this dance is useless, or not, and the fact that you
> > use atomic_read will not make the race go away: ifindex may well
> > change between the time you read it, albeit atomically, and the time
> > you use it.
> 
> Sorry, I hardly understand. Does 'dance' mean atomic_read? If so, it's put
> here not to eliminate the race, but just to read int value from atomic_t.
> 
> I've got this way to access the device by socket from rtcan_raw.c
> in rtcan_raw_sendmsg. At line 802 I saw such comment:
> 
> /* We only want a consistent value here, a spin lock would be
> * overkill. Nevertheless, the binding could change till we have
> * the chance to send. Blame the user, though. */
> ifindex = atomic_read(&sock->ifindex);
> 
> As I understand in rtcan_raw_sendmsg there's no protection from
> socket rebinding, just 'blame the user'. So, in rtcan_raw_select
> such protection is unnecessary too, I think.

Indeed.

> 
> 
> > +
> > +if ((dev = rtcan_dev_get_by_index(ifindex)) == NULL)
> > +return -ENXIO;

Re: [Xenomai] [RFC] [PATCH] Add C_CAN/D_CAN driver

2016-07-20 Thread Gilles Chanteperdrix
On Wed, Jul 20, 2016 at 05:23:14PM +, Andy Haydon wrote:
> Gilles Chanteperdrix  click-hack.org> writes:
> 
> > 
> > On Wed, Jul 20, 2016 at 02:10:27PM +, Haydon, Andrew wrote:
> > > Hi,
> > > 
> > > I have updated the C_CAN patch submitted by Stephen Battazzo:
> > > http://xenomai.org/pipermail/xenomai/2015-July/034690.html
> > > 
> > > This update is for Xenomai 3.0.x and also updates the driver to the one
> > > used in Linux v3.18.20. I've also backported a couple of bug fixes from
> > > later kernel versions. I haven't ported the PCI support because I have 
> no
> > > way of testing this.
> > > The RTDM port is largely the same as in Stephen's patch.
> > > 
> > > I'm not sure whether my use of "rtdm_task_busy_sleep" is correct. The
> > > Linux driver uses udelay here.
> > 
> > It is if not called in the middle of a section protected by a mutex,
> > spinlock or with irqs off. The fact that the hardware has large
> > latencies may be unavoidable, the fact that it causes the rest of
> > the system to experience high latencies is not acceptable however.
> > 
> 
> OK
> 
> > > 
> > > At the moment I have only done very limited testing on this driver in
> > > Xenomai, but it seems to work OK as far as I have tested it so far. I do
> > > know that the Linux driver works well for me using the D_CAN interface 
> on
> > > the Cyclone V.
> > 
> > Your patch is not indented correctly, as such it can not be
> > accepted, and it makes the review uselessly hard. So, please resend
> > a correctly indented patch.
> > 
> 
> Sorry - Outlook removed all the tabs
> Below is the patch with indentation:

I am afraid your MUA added line breaks, which makes the patch
unusable. Could you use git send-email instead?

-- 
Gilles.



Re: [Xenomai] [RFC] [PATCH] Add C_CAN/D_CAN driver

2016-07-20 Thread Gilles Chanteperdrix
On Wed, Jul 20, 2016 at 02:10:27PM +, Haydon, Andrew wrote:
> Hi,
> 
> I have updated the C_CAN patch submitted by Stephen Battazzo:
> http://xenomai.org/pipermail/xenomai/2015-July/034690.html
> 
> This update is for Xenomai 3.0.x and also updates the driver to the one
> used in Linux v3.18.20. I've also backported a couple of bug fixes from
> later kernel versions. I haven't ported the PCI support because I have no
> way of testing this.
> The RTDM port is largely the same as in Stephen's patch.
> 
> I'm not sure whether my use of "rtdm_task_busy_sleep" is correct. The
> Linux driver uses udelay here.

It is, provided it is not called in the middle of a section
protected by a mutex or spinlock, or with irqs off. The hardware
having large latencies may be unavoidable; causing the rest of the
system to experience high latencies is not acceptable, however.

> 
> At the moment I have only done very limited testing on this driver in
> Xenomai, but it seems to work OK as far as I have tested it so far. I do
> know that the Linux driver works well for me using the D_CAN interface on
> the Cyclone V.

Your patch is not indented correctly; as such it cannot be
accepted, and the wrong indentation makes review needlessly hard.
So, please resend a correctly indented patch.

-- 
Gilles.



Re: [Xenomai] 'select' for RTCAN sockets

2016-07-18 Thread Gilles Chanteperdrix
On Mon, Jul 18, 2016 at 09:28:50AM +, Alexey Gerasev wrote:
> Hello, everybody!
> 
> I am trying to port existing program from Linux to Xenomai. This program
> works with CAN devices via CAN raw sockets in single thread, and 'select'
> syscall is used for read/write multiplexing.
> I use Xenomai-3 branch 'stable-3.0.x'. When I try to use 'select' syscall
> on RTCAN sockets, it returns -1 and 'errno' is set to 19 - "No such
> device". I've found that there is no handler for 'select' event in RTCAN
> driver in 'xenomai-3/kernel/drivers/can/rtcan_raw.c' in 'rtcan_driver' at
> line 978.
> Does some fundamental reason for absence of 'rtcan_driver.select' handler
> for RTCAN sockets exist? Or it wasn't implemented just because 'select'
> syscall is rarely used?
> 
> I have also tried to implement 'rtcan_driver.select' handler by myself. The
> patch is attached to this message.
> For XNSELECT_READ I use 'rtdm_sem_select' on socket semaphore like it is
> used in RTNet TCP driver in
> 'xenomai-3/kernel/drivers/net/stack/ipv4/tcp/tcp.c' at line 2083. For
> XNSELECT_WRITE I also use 'rtdm_sem_select' on device TX semaphore like in
> 'rtcan_raw_sendmsg' function.
> I have tested this implementation and it seems to be working.
> Please review it.
> 
> Best regards,
> Alexey
> -- next part --
> A non-text attachment was scrubbed...
> Name: rtcan-select.patch

Ok, please, next time, could you put the patch inline in your
e-mail? Anyway, I just have a comment:

diff --git a/kernel/drivers/can/rtcan_raw.c b/kernel/drivers/can/rtcan_raw.c
index 693b927..8ecb5e7 100644
--- a/kernel/drivers/can/rtcan_raw.c
+++ b/kernel/drivers/can/rtcan_raw.c
@@ -964,6 +964,38 @@ ssize_t rtcan_raw_sendmsg(struct rtdm_fd *fd,
 return ret;
 }
 
+/***
+ *  rtcan_raw_select
+ */
+static int rtcan_raw_select(struct rtdm_fd *fd,
+   rtdm_selector_t *selector,
+   enum rtdm_selecttype type,
+   unsigned fd_index)
+{
+struct rtcan_socket *sock = rtdm_fd_to_private(fd);
+
+switch (type) {
+   case XNSELECT_READ:
+	return rtdm_sem_select(&sock->recv_sem, selector, XNSELECT_READ, fd_index);
+   case XNSELECT_WRITE:
+   {
+   struct rtcan_device *dev;
+   int ifindex = 0;
+
+   if (!(ifindex = atomic_read(&sock->ifindex)))
+   return -ENXIO;

Why do we need that? I mean, either the fd reference count is
sufficient for the interface to be linked unequivocally with the file
descriptor, and this dance is useless, or not, and the fact that you
use atomic_read will not make the race go away: ifindex may well
change between the time you read it, albeit atomically, and the time
you use it.


+
+   if ((dev = rtcan_dev_get_by_index(ifindex)) == NULL)
+   return -ENXIO;

I believe you have taken a reference on the device here; you need
to release it after the call to rtdm_sem_select.

-- 
Gilles.

___
Xenomai mailing list
Xenomai@xenomai.org
https://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] [RFC] Switchless RTDM calls from SCHED_WEAK threads

2016-07-10 Thread Gilles Chanteperdrix
On Sun, Jul 10, 2016 at 11:47:44AM +0200, Philippe Gerum wrote:
> 
> This RFC is a follow up to
> http://xenomai.org/pipermail/xenomai/2016-May/036253.html.
> 
> To sum up the issue, calling Cobalt's ioctl() implementation on a file
> descriptor pointing at a RTDM named device, or pointing at a regular
> character device, may create overhead for SCHED_WEAK threads, when/as
> the ioctl request should be processed from the ioctl_nrt handler. This
> is due to extra mode switches, illustrated as follows (Cobalt syscalls
> are enclosed by __RT(), glibc services by __STD()):
> 
> [Secondary mode]   <== mode switch ==>   [Primary mode]
> 
> app:__RT(ioctl(fd, ...))
>  |
>+-> driver:ioctl_rt
>  |
>  returns -ENOSYS |
>  |
> driver:ioctl_nrt <-+
> 
> Since SCHED_WEAK threads normally run in the Linux domain, and Cobalt
> always starts probing the ioctl_rt() routine for handling the request,
> a useless double mode switch happens in those particular cases.
> 
> The rationale behind probing the ioctl_rt() handler first, is that
> SCHED_WEAK threads are supposed to wait for events from, or share
> non-critical resources with real-time Cobalt threads and/or devices,
> and that normally happens with the help of the Cobalt scheduler, which
> requires the caller to run in primary mode. In short, this is the
> runtime scenario Cobalt favors, because this is the reason for running
> SCHED_WEAK threads in the first place.
> 
> This leads to some clarification: SCHED_WEAK is not meant to run plain
> regular POSIX threads that don't interface with the real-time
> sub-system; for such use case, one should call the regular
> __STD(pthread_create()) service to create the thread, not libcobalt's
> __RT(pthread_create()). Such thread would be able to issue RTDM
> ioctl() calls as well, ending up into the driver's ioctl_nrt() handler
> directly.
> 
> Back to the issue, we have three options for fixing the overhead
> described above for threads that actually need to run in the
> SCHED_WEAK class:
> 
> 1- Cobalt could figure out whether the incoming file descriptor is
>   actually managed by RTDM, before switching to primary mode if so.
>   This way, regular file descriptors would be rejected early, before
>   any mode switch is attempted, and libcobalt could hand over the
>   request to the (g)libc in such an event. The ioctl request that has
>   to be processed from a plain Linux context could then be handled by
>   a simple chardev driver.
> 
> 2- We could allow the application to tell Cobalt that RTDM I/O calls
>   issued on a given file descriptor are primarily directed to the
>   non-rt handler in the driver. Typically, an open mode flag such as
>   "O_WEAK" could do the job; such flag would affect requests issued
>   from SCHED_WEAK or regular threads unconditionally, or from
>   real-time threads provided no rt handler is defined by the
>   driver. In other cases, it would be ignored. This way, we would not
>   allow the application to shoot itself in the foot by bypassing the
>   RT handler inadvertently.
> 
>   e.g.:
> 
> fd = open(some_rtdm_device, O_RDWR | O_WEAK);
> ret = ioctl(fd, SOMEDEV_RTIOC_FOO, &opt_arg);
> 
>   Pros: Applicable to all RTDM I/O requests (ioctl/read/write).
> 
>   Cons: Breaks the ABI with the introduction of the O_WEAK open flag.
>   Complex semantics, given that SCHED_WEAK threads may behave
>   differently than members of real-time scheduling classes for the
>   same ioctl request on the same file descriptor.  Leaves the decision
>   about the best mode to run a request implemented by a driver to the
>   application, which seems odd.
> 
> 3- We could introduce a special tag for composing the RTIOC code of an
>   ioctl request, that a driver would use to state the preference for
> running the request in relaxed mode. The existing adaptive switch
>   (ENOSYS) would still be available for handling requests for which no
>   preference has been defined.
> 
>   e.g.
> 
>   #define _IOC_RELAX  15U
>   #define _IOWRX(type, nr, size)  _IOWR(type, nr | (1U << _IOC_RELAX), size)
> 
>   #define SOMEDEV_RTIOC_FOO   _IOWRX(RTDM_CLASS_BAR, 10, &some_arg)
> 
>   Pros: Semantics is easy to grasp: the decision about the best mode
>   to run any given request is left to the driver. It also implies the
>   best practice of assigning an exclusive handler to each ioctl
>   request, i.e. either _rt or _nrt, but not both.
> 
>   Cons: Breaks the ABI, by consuming a bit normally used to encode the
>   ioctl number for the tag, restricting the namespace to 2^15 codes.
>   Would not enable the mechanism for other RTDM I/O calls such as
>   read/write(_nrt).
> 
> An implementation of option #1 is available from wip/handover in the

[Xenomai] [Announce] Xenomai 2.6.5

2016-07-09 Thread Gilles Chanteperdrix
Hi,

you will find the latest release in Xenomai 2.6 branch here:
https://xenomai.org/downloads/xenomai/stable/xenomai-2.6.5.tar.bz2

It contains fixes for known bugs in the 2.6.4 release, now almost
two years old, notably:
- a scheduler bug, fixed by commit b03e3d08379f236eb75b34c1b705e127c6e3b2e5
https://git.xenomai.org/xenomai-2.6.git/commit/?id=b03e3d08379f236eb75b34c1b705e127c6e3b2e5
- a bug in ARM VFP support, fixed by commit d4e755b2a9909afb7bbd0a522ff1d97718494cd7
https://git.xenomai.org/xenomai-2.6.git/commit/?id=d4e755b2a9909afb7bbd0a522ff1d97718494cd7

Users of the Xenomai 2.6 branch are encouraged to upgrade. It is
expected to be the last release in the 2.6 branch. It supports all
I-pipe patches up to those for Linux version 3.18. The short log
follows.

Thanks to all contributors.

Gilles Chanteperdrix (45):
  debian: also include the config directory in the xenomai-kernel-source package
  hal: fixups for kernel 3.16
  hal/arm: fixup for Linux 3.16
  posix: fix user-space interrupt syscalls
  hal/arm: simplify fpu handling
  nucleus/registry: initialize vfile structure to 0
  hal/xnlock: also indicate file, line and function in unlock debugging
  can/flexcan: avoid dereferencing clocks twice
  nucleus/timers: fix the timers situation
  posix/clock_host_realtime: fix error handling
  posix: avoid dereferencing user-space address
  autotools: regenerate
  sigdebug: add SIGDEBUG_RESCNT_IMBALANCE
  posix/mutex: handle recursion count completely in user-space
  posix: fix pthread_once
  posix/mutex: avoid warnings
  posix/once: cosmetic fixes
  build: add .gitignore
  nucleus: replace do_munmap with vm_munmap
  nucleus/shadow: avoid tasklist_lock for find_task_by_pid
  hal/powerpc64: fix compilation with I-pipe patches >= 3.18
  nucleus/shadow: fix crash with debugs enabled
  testsuite/cond-torture: increase sleep duration
  testsuite/mutex-torture: increase sleep duration
  testsuite/mutex-torture: avoid race-condition
  nucleus/Kconfig: warn about CONFIG_MIGRATION
  hal/arm: fix VFP support
  testsuite/switchtest: detect more FPU errors.
  x86/fptest: do not use the same value for all the registers
  switchtest: set the registers before switching mode
  hal/x86: forbid compilation with Linux 4.0+
  switchtest: clarify the mode switch
  x86/fptest: add missing asm constraints
  switchtest: align kernel-space fpu register values
  posix/mutex,cond: allow static initializer
  doc: adapt to different installations of asciidoc and doxygen
  doc: fix generate-doc script after removal of docbook documentation
  posix/clock: remove documentation of internal function
  doc/sched: move sched documentation under nucleus section
  bump version number and bootstrap
  hal/arm: update patches
  hal/blackfin: update I-pipe patches
  hal/powerpc: update I-pipe patches
  hal/x86: update I-pipe patches
  doc: regenerate

Jeffrey Melville (2):
  posix/doc: Fix sched_get_priority(min|max) doc
  posix: fix pthread_mutex_timedlock for recursive mutexes.

Jorge Ramirez-Ortiz (2):
  drivers/analogy: fix detach logic
  drivers/analogy: remove unnecessary spinlock

Matthew Lindner (1):
  drivers/can: Properly initialize bittime

Philippe Gerum (14):
  native: fix user-space interrupt syscalls
  hal/arm: add specific calibration value for imx6q
  can/flexcan: fixup for kernel release >= 3.11
  arm/hal: silence C89 warning
  nucleus/pipe: care for spurious wakeups waiting for events
  nucleus: add default calibration value for imx7
  nucleus: fix calibration return for imx7
  nucleus/pod: fix missed rescheduling in SMP
  rtipc/bufp: fix wrong TX timeout
  hal/generic: add backward wrappers for legacy cpu mask ops
  scripts/prepare-kernel: allow building over 4.x kernel series
  ksrc, include: cope with introduction of user_msghdr
  testsuite/xeno-test-run: requesting /bin/sh is enough
  cobalt/x86: fix missing early clobber in asm

Stefan Roese (1):
  hal/arm: Add Zynq v3.14.17 patches

-- 
Gilles.



Re: [Xenomai] Compiling testsuite and utils

2016-07-08 Thread Gilles Chanteperdrix
On Fri, Jul 08, 2016 at 09:04:23PM -0300, Eder Alves de Moura wrote:
> Running:
> 
> xenomai-3-3.0.2/

The sources may be found here:
https://xenomai.org//downloads/xenomai/stable/xenomai-3.0.2.tar.bz2

When you extract them, the directory should be named
"xenomai-3.0.2".

You do not need to run autoreconf.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Compiling testsuite and utils

2016-07-08 Thread Gilles Chanteperdrix
On Fri, Jul 08, 2016 at 05:47:40PM -0300, Eder Alves de Moura wrote:
> - do you really want to compile a 32 bits version of Xenomai?
> 
> no. I am not want to compile it to a specific 32 bits.

Then run configure without --host, CFLAGS, or LDFLAGS. The
defaults should be fine.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Compiling testsuite and utils

2016-07-08 Thread Gilles Chanteperdrix
On Fri, Jul 08, 2016 at 10:34:05PM +0200, Gilles Chanteperdrix wrote:
> On Fri, Jul 08, 2016 at 05:28:48PM -0300, Eder Alves de Moura wrote:
> > Dear Gilles,
> 
> Dear Eder,
> 
> next private mail will not be answered.
> 
> > 
> > I just trying to run it for the first time, I was just following the
> > tutorial (https://xenomai.org/installing-xenomai-3-x/) on the section
> > "Examples of building the Xenomai libraries and tools" and  after some
> > trials I unziped again the xenomai folder again and I  ran the command
> > "$ ../configure --with-core=cobalt --enable-smp --enable-pshared
> > --host=i686" on build directory inside the xenomai and it returned to
> > me:

> 
> Ok. This is a different issue. You are not answering my questions
> though. So I am going to repeat them:
> - do you really want to compile a 32 bits version of Xenomai?
> - if yes, did you check that you have everything in place to do it,
> by compiling a C language hello world command with gcc -m32?

a C language hello world program, I mean.

> 
> Until you have answered these questions, any later attempt at
> compiling xenomai is useless.

I mean, please answer these questions, the right arguments to pass
to configure depend on the answer.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Compiling testsuite and utils

2016-07-08 Thread Gilles Chanteperdrix
On Fri, Jul 08, 2016 at 05:28:48PM -0300, Eder Alves de Moura wrote:
> Dear Gilles,

Dear Eder,

next private mail will not be answered.

> 
> I just trying to run it for the first time, I was just following the
> tutorial (https://xenomai.org/installing-xenomai-3-x/) on the section
> "Examples of building the Xenomai libraries and tools" and  after some
> trials I unziped again the xenomai folder again and I  ran the command
> "$ ../configure --with-core=cobalt --enable-smp --enable-pshared
> --host=i686" on build directory inside the xenomai and it returned to
> me:

Ok. This is a different issue. You are not answering my questions
though. So I am going to repeat them:
- do you really want to compile a 32 bits version of Xenomai?
- if yes, did you check that you have everything in place to do it,
by compiling a C language hello world command with gcc -m32?

Until you have answered these questions, any later attempt at
compiling xenomai is useless.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Compiling testsuite and utils

2016-07-08 Thread Gilles Chanteperdrix
On Fri, Jul 08, 2016 at 04:30:17PM -0300, Eder Alves de Moura wrote:
> Dears,
> 
> I am starting with Xenomai and rt systems and I want to compile it for
> a raspberry pi 2/3 and beaglebone black but first I compiled it to a
> PC and, apparently it is working fine.
> 
> $ dmesg | grep -i xenomai
> [3.224489] [Xenomai] scheduling class idle registered.
> [3.224491] [Xenomai] scheduling class rt registered.
> [3.224515] [Xenomai] disabling automatic C1E state promotion on
> Intel processor
> [3.224521] [Xenomai] SMI-enabled chipset found, but SMI workaround 
> disabled
>  (see xenomai.smi parameter). You might encounter
> [3.224592] I-pipe: head domain Xenomai registered.
> [3.224916] [Xenomai] Cobalt v3.0.2 (Exact Zero)
> 
> 
> 
> But after install I could not compile the tools for xenomai, running
> 
> $ mkdir $build_root && cd $build_root
> $ $xenomai_root/configure --with-core=cobalt --enable-smp --enable-pshared \
>   --host=i686-linux CFLAGS="-m32 -O2" LDFLAGS="-m32"
> 
> I am receiving the following message
> 
> checking whether we build for Cobalt or Mercury core... cobalt
> checking build system type... x86_64-pc-linux-gnu
> checking host system type... i686-pc-linux-gnu
> checking for a BSD-compatible install... /usr/bin/install -c
> checking for i686-linux-gcc... no
> checking for gcc... gcc
> checking whether the C compiler works... no
> configure: error: in `/home/pc/tools/xenomai-3-3.0.2':
> configure: error: C compiler cannot create executables
> See `config.log' for more details

Are you sure you have everything installed for gcc -m32 to work? I
mean, did you try to compile a "hello world" application with gcc
-m32 to see if the compilation works? Also, what is the point of
compiling in 32-bit mode, why not use a 64-bit kernel and
user-space?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] unable to patch

2016-07-08 Thread Gilles Chanteperdrix
On Fri, Jul 08, 2016 at 10:29:43AM +, praneet...@yahoo.com wrote:
> Hi, 
> Having failed to patch the kernel for arm,I tried with Linux kernel 
> 3.0.43,3.14,3.18 4.20 with xenomai 2.6.4's adeos ipipe as well as ipipe-core. 
> Unfortunately most of them ended with the following last
> messages. unable to patch the kernel 3.0.43 with adeos -ipipe-3.0.43  

I guess you do not get a message about kernel 3.0.43 when trying to
apply a patch on 3.14. Anyway, this message means that you are
trying to apply a patch to a kernel which is not the kernel it was
made for. 

Basically, this is explained in the file ksrc/arch/arm/patches/README
in the source tree:

- the patches directly in this directory apply to mainline kernels,
the ones from kernel.org, with the exact same version as the version
in the I-pipe patch name (so for instance,
ipipe-core-3.14.17-arm-4.patch applies to the vanilla kernel
3.14.17, probably not to 3.14.0 or 3.14.73)

- for the case of patches to the mainline kernel you are encouraged to
use more recent versions from https://xenomai.org//downloads/ipipe
area, but only for versions of the kernel supported by Xenomai, so
for instance  ipipe-core-3.14.44-arm-17.patch is a better choice
than ipipe-core-3.14.17-arm-4.patch (which ships with Xenomai
2.6.4), but you can not use ipipe-core-3.18.20-arm-10.patch or 
ipipe-core-4.1.18-arm-5.patch, because Xenomai does not support
Linux versions after Linux 3.14. But any version in the stable
branch (3.14.44 for instance) should be OK.

- the patches in the sub-directories (beaglebone, mxc, raspberry,
zynq) apply to specific vendor forks; exactly which fork should be
documented in the README file. Sometimes they are pre or post
patches, meaning that you have to apply them before or after a patch
for a mainline kernel; the details are given in the README file.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Serial port problem

2016-07-07 Thread Gilles Chanteperdrix
On Thu, Jul 07, 2016 at 02:40:03PM +0200, marc favereau wrote:
> when i read the driver code source (16550A.c) i think there is a problem
> with 'rtdm_irq_request' function. It's like the IRQ was not shared between
> the 4 ports of the card !!!
> 
> I don't understand why !

Do you have CONFIG_XENO_OPT_SHIRQ enabled in the kernel configuration?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTnet - List corruption when creating packet socket two consecutive times

2016-07-04 Thread Gilles Chanteperdrix
On Mon, Jul 04, 2016 at 05:00:38PM +0200, Geoffrey BONNEVILLE wrote:
> Hi,
> 
> I can confirm that your patch fixes the problem. Thank you !

Without loading a driver which has rtskb_map/unmap, I knew that.
The question is whether it causes issues on igb and e1000e,
the two drivers which define these functions.
-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Pb with 4 serial ports Moxa card

2016-07-04 Thread Gilles Chanteperdrix
On Fri, Jul 01, 2016 at 02:30:49PM +0200, marc favereau wrote:
> I don't understand why the driver is on /sys/bus/pci/drivers and not on
> /sys/bus/pci_express/drivers  ???

Because that is the way Linux works. This has nothing to do with
Xenomai.

> ?!?!?!?!
> 
> Why 5 rtser instead of 1 ???
> 
> and why no error ???

I guess the answers to all those questions are in the driver code.
Xenomai is free software, you are free to read, and even modify its
code.
-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTnet - List corruption when creating packet socket two consecutive times

2016-07-02 Thread Gilles Chanteperdrix
On Fri, Jul 01, 2016 at 04:31:58PM +0200, Geoffrey BONNEVILLE wrote:
>  
> 
> Hi, 
> 
> Tested with Xenomai 3.0.2 (Cobalt) on a 4.1 kernel. Both on a
> 32-bits ARM and a 32 bits x86 platform. 
> 
> I get a list corruption
> warning when I try to open()/close() one packet socket, two (or more)
> consecutive times. 
> 
> I think the problem is not limited to packet
> sockets. 
> 
> Steps to reproduce: 
> 
> - Use a kernel with CONFIG_DEBUG_LIST=y
> 
> 
> - Load rtnet.ko and rtpacket.ko 
> 
> - Execute a program which open() and
> close() a packet socket. (see test_socket.c below)

Hi,

I finally had a look. First of all, this test does not correspond to a
realistic use case, at least with RTnet: users are not expected to
destroy sockets before they have installed network drivers.
Obviously, a socket with such a short lifetime is not very
useful. And this problem does not happen if you load a driver which
has "rtskb_map/rtskb_unmap" methods, that is, one that does DMA mapping
of rtskbs.

However, only two drivers have these methods, e1000e and igb, and
all others do not have them, so the problem you report could happen in
realistic situations. The patch at the end of the mail seems to fix it.
Unfortunately, I did not test it with e1000e and igb drivers to see
if it does not break them, so I am not going to merge it for the
time being.

Regards.

diff --git a/kernel/drivers/net/stack/include/rtskb.h b/kernel/drivers/net/stack/include/rtskb.h
index 7489aa6..fdb6294 100644
--- a/kernel/drivers/net/stack/include/rtskb.h
+++ b/kernel/drivers/net/stack/include/rtskb.h
@@ -145,6 +145,7 @@ cap_rtmac_stamp field now contains valid data.
 #define RTSKB_CAP_RTMAC_STAMP   2   /* cap_rtmac_stamp is valid */
 
 #define RTSKB_UNMAPPED  0
+#define RTSKB_UNMAPPED_INLIST  1
 
 struct rtskb_queue;
 struct rtsocket;
diff --git a/kernel/drivers/net/stack/rtdev.c b/kernel/drivers/net/stack/rtdev.c
index 5eb73ce..940b8ff 100644
--- a/kernel/drivers/net/stack/rtdev.c
+++ b/kernel/drivers/net/stack/rtdev.c
@@ -379,6 +379,7 @@ static int rtskb_map(struct rtnet_device *rtdev, struct rtskb *skb)
return -ENOMEM;
 
 if (skb->buf_dma_addr != RTSKB_UNMAPPED &&
+   skb->buf_dma_addr != RTSKB_UNMAPPED_INLIST &&
addr != skb->buf_dma_addr) {
printk("RTnet: device %s maps skb differently than others. "
   "Different IOMMU domain?\nThis is not supported.\n",
@@ -412,8 +413,11 @@ int rtdev_map_rtskb(struct rtskb *skb)
}
 }
 
-if (!err)
+if (!err) {
+   if (skb->buf_dma_addr == RTSKB_UNMAPPED)
+   skb->buf_dma_addr = RTSKB_UNMAPPED_INLIST;
list_add(&skb->entry, &rtskb_list);
+}
 
 mutex_unlock(&rtnet_devices_nrt_lock);
 
@@ -453,13 +457,14 @@ void rtdev_unmap_rtskb(struct rtskb *skb)
 
 list_del(&skb->entry);
 
-for (i = 0; i < MAX_RT_DEVICES; i++) {
-   rtdev = rtnet_devices[i];
-   if (rtdev && rtdev->unmap_rtskb) {
-   rtdev->unmap_rtskb(rtdev, skb);
+if (skb->buf_dma_addr != RTSKB_UNMAPPED_INLIST) {
+   for (i = 0; i < MAX_RT_DEVICES; i++) {
+   rtdev = rtnet_devices[i];
+   if (rtdev && rtdev->unmap_rtskb) {
+   rtdev->unmap_rtskb(rtdev, skb);
+   }
}
 }
-
 skb->buf_dma_addr = RTSKB_UNMAPPED;
 
 mutex_unlock(&rtnet_devices_nrt_lock);


-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTnet - List corruption when creating packet socket two consecutive times

2016-07-01 Thread Gilles Chanteperdrix
On Fri, Jul 01, 2016 at 04:31:58PM +0200, Geoffrey BONNEVILLE wrote:
>  
> 
> Hi, 
> 
> Tested with Xenomai 3.0.2 (Cobalt) on a 4.1 kernel. Both on a
> 32-bits ARM and a 32 bits x86 platform. 

I guess the important question is: what driver do you use? Maybe the
fact that the driver uses DMA makes a difference. 

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTnet - List corruption when creating packet socket two consecutive times

2016-07-01 Thread Gilles Chanteperdrix
On Fri, Jul 01, 2016 at 04:31:58PM +0200, Geoffrey BONNEVILLE wrote:
>  
> 
> Hi, 
> 
> Tested with Xenomai 3.0.2 (Cobalt) on a 4.1 kernel. Both on a
> 32-bits ARM and a 32 bits x86 platform. 
> 
> I get a list corruption
> warning when I try to open()/close() one packet socket, two (or more)
> consecutive times. 
> 
> I think the problem is not limited to packet
> sockets. 
> 
> Steps to reproduce: 
> 
> - Use a kernel with CONFIG_DEBUG_LIST=y
> 
> 
> - Load rtnet.ko and rtpacket.ko 
> 
> - Execute a program which open() and
> close() a packet socket. (see test_socket.c below)

I am afraid I still have not had time to work on RTnet. So, if you
have a patch that fixes the issue, I will merge it, but I cannot do
more for some time.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Pb with 4 serial ports Moxa card

2016-07-01 Thread Gilles Chanteperdrix
On Fri, Jul 01, 2016 at 12:25:07PM +0200, marc favereau wrote:
> Hi,
> 
> First i'm sorry for my poor level in english !!!
> 
> I will try to explain my problem.
> 
> I have to use a MOXA CP-134 card to communicate with a device (1 port for
> writing and 1 port for reading) in RS422.
> I'm using Xenomai 2.6.3 on a kernel linux 3.8.13 installed on a Xeon
> E5-2430 multicore desktop.
> I plug the card on a PCIe slot, that's why i configure kernel options like
> this :
> 
> Real-time sub-system->Drivers->Serial Drivers->16550A UART driver as a
> Module
> Real-time sub-system->Drivers->Serial Drivers->Hardware acces mode = Any
> acces mode
> Real-time sub-system->Drivers->Serial Drivers->PCI board support *
> Real-time sub-system->Drivers->Serial Drivers->Moxa PCI boards *
> 
> 
> I show you my commands :
> 
> 
> 
> [root@BedyCalculo1 favereau]# ll /proc/xenomai/rtdm/
> total 0
> dr-xr-xr-x. 2 root root 0  1 juil. 11:08 drvIT400_xeno
> -r--r--r--. 1 root root 0  1 juil. 11:08 fildes
> -r--r--r--. 1 root root 0  1 juil. 11:08 named_devices
> -rw-r--r--. 1 root root 0  1 juil. 11:08 open_fildes
> -r--r--r--. 1 root root 0  1 juil. 11:08 protocol_devices
> dr-xr-xr-x. 2 root root 0  1 juil. 11:08 rtipc
> dr-xr-xr-x. 2 root root 0  1 juil. 11:08 rttest-switchtest0
> dr-xr-xr-x. 2 root root 0  1 juil. 11:08 rttest-timerbench0
> 
> That's OK, i don't see rtser...
> 
> 
> [root@BedyCalculo1 favereau]# lsmod | grep -i -a3 mxser
> ptp18413  1 igb
> pps_core   18854  1 ptp
> dca14601  1 igb
> mxser  43739  0
> e1000e240573  0
> parport40425  2 ppdev,parport_pc
> i2c_i801   18136  0
> 
> That's OK, I have a correct linux driver for serial ports
> 
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS0
> /dev/ttyS0, UART: unknown, Port: 0x03f8, IRQ: 4
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS1
> /dev/ttyS1, UART: unknown, Port: 0x02f8, IRQ: 3
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS2
> /dev/ttyS2, UART: unknown, Port: 0x03e8, IRQ: 4
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS3
> /dev/ttyS3, UART: unknown, Port: 0x02e8, IRQ: 3
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS4
> /dev/ttyS4, UART: 16550A, Port: 0x90a0, IRQ: 17
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS5
> /dev/ttyS5, UART: 16650V2, Port: 0x8030, IRQ: 40
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS6
> /dev/ttyS6, UART: 16650V2, Port: 0x8020, IRQ: 44
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS7
> /dev/ttyS7, UART: unknown, Port: 0x, IRQ: 0
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS8
> /dev/ttyS8, UART: unknown, Port: 0x, IRQ: 0
> [root@BedyCalculo1 favereau]# setserial /dev/ttyS9
> /dev/ttyS9, UART: unknown, Port: 0x, IRQ: 0
> 
> That's OK, I have my system serial ports
> 
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI0
> /dev/ttyMI0, UART: 16550A, Port: 0x6100, IRQ: 19
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI1
> /dev/ttyMI1, UART: 16550A, Port: 0x6108, IRQ: 19
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI2
> /dev/ttyMI2, UART: 16550A, Port: 0x6110, IRQ: 19
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI3
> /dev/ttyMI3, UART: 16550A, Port: 0x6118, IRQ: 19
> 
> That's OK, I have the 4 MOXA card ports, with irq and adresses.
> 
> Now, i try to install xenomai driver :
> 
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI0 uart none
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI1 uart none
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI2 uart none
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI3 uart none
> 
> 
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI0
> /dev/ttyMI0, UART: unknown, Port: 0x6100, IRQ: 19
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI1
> /dev/ttyMI1, UART: unknown, Port: 0x6108, IRQ: 19
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI2
> /dev/ttyMI2, UART: unknown, Port: 0x6110, IRQ: 19
> [root@BedyCalculo1 favereau]# setserial /dev/ttyMI3
> /dev/ttyMI3, UART: unknown, Port: 0x6118, IRQ: 19
> 
> 
> 
> [root@BedyCalculo1 favereau]# modprobe xeno_16550A io=6100,6108,6110,6118
> irq=19,19,19,19 baud_base=921600,921600,921600,921600
> modprobe: ERROR: could not insert 'xeno_16550A': Device or resource busy
> 
> 
> ??
> I don't understand why i have this error.
> So i try to configure only 1 port :
> 
> 
> [root@BedyCalculo1 favereau]# modprobe xeno_16550A io=6100 irq=19
> baud_base=921600
> 
> [root@BedyCalculo1 favereau]# ll /proc/xenomai/rtdm/
> total 0
> dr-xr-xr-x. 2 root root 0  1 juil. 11:52 drvIT400_xeno
> -r--r--r--. 1 root root 0  1 juil. 11:52 fildes
> -r--r--r--. 1 root root 0  1 juil. 11:52 named_devices
> -rw-r--r--. 1 root root 0  1 juil. 11:52 open_fildes
> -r--r--r--. 1 root root 0  1 juil. 11:52 protocol_devices
> dr-xr-xr-x. 2 root root 0  1 juil. 11:52 rtipc
> dr-xr-xr-x. 2 root root 0  1 juil. 11:52 rtser0
> dr-xr-xr-x. 2 root root 0  1 juil. 11:52 rttest-switchtest0
> dr-

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-30 Thread Gilles Chanteperdrix
On Thu, Jun 30, 2016 at 11:17:59AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 16:42 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 04:32:17PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 14:01 schrieb Gilles Chanteperdrix:
> >>> On Tue, Jun 28, 2016 at 01:55:27PM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-28 um 12:39 schrieb Gilles Chanteperdrix:
> >>>>> On Tue, Jun 28, 2016 at 12:31:42PM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-28 um 12:19 schrieb Gilles Chanteperdrix:
> >>>>>> min: 10, max: 677, avg: 10.5048 -> 0.0265273 us
> >>>>>>
> >>>>>> Here are the output for Kernel 3.0.43 and Xenomai 2.6.2.1
> >>>>>>
> >>>>>> #> ./tsc
> >>>>>> min: 10, max: 667, avg: 11.5755 -> 0.029231 us
> >>>>> Ok. So, first it confirms that the two configurations are running
> >>>>> the processor at the same frequency. But we seem to see a pattern,
> >>>>> the maxima in the case of the new kernel seems consistently higher.
> >>>>> Which would suggest that there is some difference in the cache. What
> >>>>> is the status of the two configurations with regard to the L2 cache
> >>>>> write allocate policy?
> >>>> Do you mean the configuration we checked in this request
> >>>> https://xenomai.org/pipermail/xenomai/2016-June/036390.html
> >>> This answer is based on a kernel message, which may happen before
> >>> or after the I-pipe patch has changed the value passed to the
> >>> register, so, essentially, it is useless. I would not call that
> >>> checking the L2 cache configuration differences.
> >>>
> >> I readed the values from the auxiliary control register,
> >> when the system is up and running.
> >> I get the same values like I see in the Kernel log.
> >>
> >> Kernel 3.10.53 [0xa02104]=0x32c5
> >> Kernel 3.0.43[0xa02104]=0x285
> > Ok, so, if I read this correctly both values have 0x80 set,
> > which means "force no allocate", and is what we want. But there are
> > a lot of other questions in my answer which you avoid to answer (and
> > note that that one was only relevant in one of two cases, which I
> > believe is not yours).
> >
> Dear Gilles,

Hi,

> 
> your first intention was correct, that the L2 cache configuration may be
> the reason for our issue.
> I disabled the instruction and data prefetching and my customer application
> is as fast as in our old kernel.

I thought you said the contrary in this mail:
https://xenomai.org/pipermail/xenomai/2016-June/036390.html

Well not exactly the contrary, but that you tried to enable prefetching
with 3.0.43 and that performance did not degrade.

> It was a change in the Kernel file arch/arm/mach-imx/system.c where the 
> prefetching
> was activated.
> 
> We will additional replace the function rt_timer_tsc() by __xn_rdtsc() 
> as you recommended.

No, do not do that. __xn_rdtsc() is a Xenomai internal function. The
"tsc" test uses it because it is a test measuring the execution time
of this function. What I said was to replace __xn_rdtsc() with
rt_timer_tsc() in the "tsc" test, to check whether there is a
performance regression in rt_timer_tsc().

Also, I would be curious to understand why the execution time of
__xn_rdtsc() changed. The difference is just one processor cycle, so
it should not matter to applications, but still, I do not see what
change could cause it.
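The measurement being discussed can be mimicked with a small harness that reports the min/max/average cost of any timestamp-reading function. This is a hedged sketch of the methodology only, in Python for brevity: the real "tsc" test is C code calling __xn_rdtsc() (or, as suggested, rt_timer_tsc()); here time.perf_counter_ns merely stands in for the timestamp source.

```python
import time

def benchmark(read_fn, loops=100000):
    """Approximate per-call latency of read_fn in nanoseconds.

    Mirrors the idea of the "tsc" test: call the timestamp source in
    a tight loop and record the min/max/average delta between two
    consecutive reads (which includes a small loop overhead).
    """
    deltas = []
    prev = read_fn()
    for _ in range(loops):
        now = read_fn()
        deltas.append(now - prev)
        prev = now
    return min(deltas), max(deltas), sum(deltas) / len(deltas)

if __name__ == "__main__":
    # time.perf_counter_ns stands in for __xn_rdtsc()/rt_timer_tsc()
    lo, hi, avg = benchmark(time.perf_counter_ns)
    print(f"min: {lo}, max: {hi}, avg: {avg:.4f} ns")
```

In the real experiment, building the C test twice, once per timestamp function, and comparing the two min/max/avg triples is what would confirm or rule out a regression in rt_timer_tsc().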

> 
> In our customer applications every millisecond the xenomai task with 
> priority 95
> is called and works on different objects that are located on different 
> memory locations.
> When the objects are finished we leave the xenomai domain and let work 
> Linux.
> 
> Do you have any additional hints for me what configrations L2 cache or 
> other that can
> speed up this use case ?

Well, no, I do not know much more about the L2 cache configuration.
A customer told me that disabling write allocate greatly improved
the latency test results on imx6, and I benchmarked it on OMAP4,
another processor based on the Cortex-A9; you can find the benchmark
here:
https://xenomai.org/2014/08/benchmarks-xenomai-dual-kernel-over-linux-3-14/#For_the_Texas_Instrument_Panda_board_running_a_TI_OMAP4430_processor_at_1_GHz

Since it seemed to improve the performance on all the processors
Xenomai supported at the time with this L2 cache (i.e. really only
omap4 and imx6), we made the change in the I-pipe patch, with a
kernel parameter to turn it off in case someone would prefer to keep
write allocate enabled.

> 
> Thanks a lot for your support and patience.

Well, I am not always patient. But you are welcome.

-- 
Gilles.
https://click-hack.org

___
Xenomai mailing list
Xenomai@xenomai.org
https://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 04:32:17PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 14:01 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 01:55:27PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 12:39 schrieb Gilles Chanteperdrix:
> >>> On Tue, Jun 28, 2016 at 12:31:42PM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-28 um 12:19 schrieb Gilles Chanteperdrix:
> >>>> min: 10, max: 677, avg: 10.5048 -> 0.0265273 us
> >>>>
> >>>> Here are the output for Kernel 3.0.43 and Xenomai 2.6.2.1
> >>>>
> >>>> #> ./tsc
> >>>> min: 10, max: 667, avg: 11.5755 -> 0.029231 us
> >>> Ok. So, first it confirms that the two configurations are running
> >>> the processor at the same frequency. But we seem to see a pattern,
> >>> the maxima in the case of the new kernel seems consistently higher.
> >>> Which would suggest that there is some difference in the cache. What
> >>> is the status of the two configurations with regard to the L2 cache
> >>> write allocate policy?
> >> Do you mean the configuration we checked in this request
> >> https://xenomai.org/pipermail/xenomai/2016-June/036390.html
> > This answer is based on a kernel message, which may happen before
> > or after the I-pipe patch has changed the value passed to the
> > register, so, essentially, it is useless. I would not call that
> > checking the L2 cache configuration differences.
> >
> I read the values from the auxiliary control register
> while the system is up and running.
> I get the same values as I see in the kernel log.
> 
> Kernel 3.10.53 [0xa02104]=0x32c5
> Kernel 3.0.43[0xa02104]=0x285

Ok, so, if I read this correctly, both values have 0x80 set,
which means "force no allocate", and is what we want. But there are
a lot of other questions in my answer which you avoided answering
(and note that that one was only relevant in one of two cases, which
I believe is not yours).
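The bit test being applied here can be written out explicitly. A minimal sketch, assuming (as stated above) that bit 7 (0x80) of the L2 auxiliary control register value read from [0xa02104] is the "force no write allocate" flag:

```python
FORCE_NO_WRITE_ALLOCATE = 0x80  # bit 7, per the discussion above

def forces_no_write_allocate(aux_ctrl_value):
    """True if the L2 aux control value has the force-no-allocate bit set."""
    return bool(aux_ctrl_value & FORCE_NO_WRITE_ALLOCATE)

# Values read back from [0xa02104] on the two kernels
readings = {"3.10.53": 0x32C5, "3.0.43": 0x285}
for kernel, value in readings.items():
    print(f"kernel {kernel}: {value:#x} -> "
          f"force no allocate: {forces_no_write_allocate(value)}")
```

Both readings have the bit set, which is why the write-allocate policy can be ruled out here as the cause of the latency difference.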

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 01:55:27PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 12:39 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 12:31:42PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 12:19 schrieb Gilles Chanteperdrix:
> >> min: 10, max: 677, avg: 10.5048 -> 0.0265273 us
> >>
> >> Here are the output for Kernel 3.0.43 and Xenomai 2.6.2.1
> >>
> >> #> ./tsc
> >> min: 10, max: 667, avg: 11.5755 -> 0.029231 us
> > Ok. So, first it confirms that the two configurations are running
> > the processor at the same frequency. But we seem to see a pattern,
> > the maxima in the case of the new kernel seems consistently higher.
> > Which would suggest that there is some difference in the cache. What
> > is the status of the two configurations with regard to the L2 cache
> > write allocate policy?
> Do you mean the configuration we checked in this request
> https://xenomai.org/pipermail/xenomai/2016-June/036390.html

This answer is based on a kernel message, which may happen before
or after the I-pipe patch has changed the value passed to the
register, so, essentially, it is useless. I would not call that
checking the L2 cache configuration differences.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 01:45:59PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 12:39 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 12:31:42PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 12:19 schrieb Gilles Chanteperdrix:
> >> min: 10, max: 677, avg: 10.5048 -> 0.0265273 us
> >>
> >> Here are the output for Kernel 3.0.43 and Xenomai 2.6.2.1
> >>
> >> #> ./tsc
> >> min: 10, max: 667, avg: 11.5755 -> 0.029231 us
> > Ok. So, first it confirms that the two configurations are running
> > the processor at the same frequency. But we seem to see a pattern,
> > the maxima in the case of the new kernel seems consistently higher.
> > Which would suggest that there is some difference in the cache. What
> > is the status of the two configurations with regard to the L2 cache
> > write allocate policy? Could you show us the tsc results of Xenomai
> > 2.6.4 with the 3.0 kernel ?
> As requested I created a Kernel 3.0.43 with Xenomai 2.6.4
> #> dmesg | grep "Linux version"
> Linux version 3.0.43 (netwol@DSWUB001) (gcc version 4.7.2 (GCC) ) #186 
> SMP PREEMPT Tue Jun 28 13:28:40 CEST 2016
> 
> #> dmesg | grep "Xenomai"
> [0.844697] I-pipe: Domain Xenomai registered.
> [0.849188] Xenomai: hal/arm started.
> [0.853189] Xenomai: scheduling class idle registered.
> [0.858350] Xenomai: scheduling class rt registered.
> [0.882246] Xenomai: real-time nucleus v2.6.4 (Jumpin' Out) loaded.
> [0.888926] Xenomai: starting native API services.
> [0.893811] Xenomai: starting RTDM services.
> 
> #> dmesg | grep I-pipe
> [0.00] I-pipe 1.18-13: pipeline enabled.
> [0.331174] I-pipe, 396.000 MHz timer
> [0.334900] I-pipe, 396.000 MHz clocksource
> [0.844697] I-pipe: Domain Xenomai registered.
> 
> 
> Here the output of tsc
> min: 10, max: 345, avg: 10.5 -> 0.0265152 us

Ok, so 3.0.43 with 2.6.4 has the same consistent behaviour with
regard to the __xn_rdtsc() latency as 3.0.43 with 2.6.2.1. So:

- if you find 2.6.4 with 3.0.43 slower than 3.0.43 with 2.6.2.1, you
can remove the kernel version change from the mix and do your tests
from now on exclusively with the kernel 3.0.43, and the tsc latency
is unlikely to be the cause of the performance difference. To really
make sure of that, you can replace __xn_rdtsc() in tsc.c with a call
to rt_timer_tsc(), recompile and rerun on the two remaining
configurations.

- if you find 2.6.4 with 3.0.43 is as fast as 3.0.43 with 2.6.2.1,
you can remove the Xenomai version change from the mix and do your
tests from now on exclusively with Xenomai 2.6.4. In that case, the
question about differences in cache configuration remains.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 12:31:42PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 12:19 schrieb Gilles Chanteperdrix:
> min: 10, max: 677, avg: 10.5048 -> 0.0265273 us
> 
> Here are the output for Kernel 3.0.43 and Xenomai 2.6.2.1
> 
> #> ./tsc
> min: 10, max: 667, avg: 11.5755 -> 0.029231 us

Ok. So, first, it confirms that the two configurations are running
the processor at the same frequency. But we seem to see a pattern:
the maxima in the case of the new kernel seem consistently higher,
which would suggest that there is some difference in the cache. What
is the status of the two configurations with regard to the L2 cache
write allocate policy? Could you show us the tsc results of Xenomai
2.6.4 with the 3.0 kernel?
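The averages quoted in these reports convert between timer ticks and microseconds using the I-pipe timer frequency printed at boot (396 MHz on this i.MX6DL, per the dmesg output elsewhere in the thread). A small sketch of that conversion, assuming avg_us = avg_ticks / freq_mhz:

```python
def ticks_to_us(ticks, freq_mhz=396.0):
    """Convert timer ticks to microseconds at the given timer frequency.

    At 396 MHz one tick lasts 1/396 us, so the reported averages of
    roughly 10-12 ticks correspond to a few hundredths of a microsecond.
    """
    return ticks / freq_mhz

# The two averages quoted in this thread
for avg in (10.5048, 11.5755):
    print(f"avg: {avg} ticks -> {ticks_to_us(avg):.7f} us")
```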

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 12:10:06PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 11:55 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 11:51:39AM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 11:29 schrieb Gilles Chanteperdrix:
> >>> On Tue, Jun 28, 2016 at 11:28:19AM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-28 um 11:17 schrieb Gilles Chanteperdrix:
> >>>>> On Tue, Jun 28, 2016 at 11:15:14AM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-28 um 10:34 schrieb Gilles Chanteperdrix:
> >>>>>>> On Tue, Jun 28, 2016 at 10:31:00AM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Am 2016-06-27 um 18:46 schrieb Gilles Chanteperdrix:
> >>>>>>>>> On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>> Am 2016-06-07 um 19:00 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>> On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>> On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>>>> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>>>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal 
> >>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>>>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal 
> >>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>>>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal 
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>> Dear all,
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + 
> >>>>>>>>>>>>>>>>>>>> Linux 3.0.43" to
> >>>>>>>>>>>>>>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an 
> >>>>>>>>>>>>>>>>>>>> i.MX6DL. The system
> >>>>>>>>>>>>>>>>>>>> is now up and running and works stable. Unfortunately we 
> >>>>>>>>>>>>>>>>>>>> see a
> >>>>>>>>>>>>>>>>>>>> difference in the performance. Our old combination 
> >>>>>>>>>>>>>>>>>>>> (XENOMAI 2.6.2.1 +
> >>>>>>>>>>>>>>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>>>>>>>>>>>>>> xnpod_schedule_handler much more often then XENOMAI 
> >>>>>>>>>>>>>>>>>>>> 2.6.2.1 in our old
> >>>>>>>>>>>>>>>>>>>> system.  Every call of xnpod_schedule_handler interrupts 
> >>>>>>>>>>>>>>>>>>>> our main
> >>>>>>>>>>>>>>>>>>>> XENOMAI task with priority = 95.
> >>>>>>>>>>>> As I wrote above, I get interrupts 1037 handled by 
> >>>>>>>>>>>> rthal_apc_handler()
> >>>>>>>>>>>> and 1038 handled by xnpod_schedule_handler() while my realtime 
> >>>>>>>>>>>> task
> >>>>>>>>>>>> is running on kernel 3.10.53 with Xenomai 2.6.4.
> >>>>

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 11:51:39AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 11:29 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 11:28:19AM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 11:17 schrieb Gilles Chanteperdrix:
> >>> On Tue, Jun 28, 2016 at 11:15:14AM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-28 um 10:34 schrieb Gilles Chanteperdrix:
> >>>>> On Tue, Jun 28, 2016 at 10:31:00AM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-27 um 18:46 schrieb Gilles Chanteperdrix:
> >>>>>>> On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Am 2016-06-07 um 19:00 schrieb Gilles Chanteperdrix:
> >>>>>>>>> On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>> On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal 
> >>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal 
> >>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>> Dear all,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + 
> >>>>>>>>>>>>>>>>>> Linux 3.0.43" to
> >>>>>>>>>>>>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an 
> >>>>>>>>>>>>>>>>>> i.MX6DL. The system
> >>>>>>>>>>>>>>>>>> is now up and running and works stable. Unfortunately we 
> >>>>>>>>>>>>>>>>>> see a
> >>>>>>>>>>>>>>>>>> difference in the performance. Our old combination 
> >>>>>>>>>>>>>>>>>> (XENOMAI 2.6.2.1 +
> >>>>>>>>>>>>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>>>>>>>>>>>> xnpod_schedule_handler much more often then XENOMAI 
> >>>>>>>>>>>>>>>>>> 2.6.2.1 in our old
> >>>>>>>>>>>>>>>>>> system.  Every call of xnpod_schedule_handler interrupts 
> >>>>>>>>>>>>>>>>>> our main
> >>>>>>>>>>>>>>>>>> XENOMAI task with priority = 95.
> >>>>>>>>>> As I wrote above, I get interrupts 1037 handled by 
> >>>>>>>>>> rthal_apc_handler()
> >>>>>>>>>> and 1038 handled by xnpod_schedule_handler() while my realtime task
> >>>>>>>>>> is running on kernel 3.10.53 with Xenomai 2.6.4.
> >>>>>>>>>> On kernel 3.0.43 with Xenomai 2.6.4 there are no interrupts, 
> >>>>>>>>>> except the
> >>>>>>>>>> once that are send by my board using GPIOs, but this virtual 
> >>>>>>>>>> interrupts
> >>>>>>>>>> are assigned to Xenomai and Linux as well but I didn't see a 
> >>>>>>>>>> handler
> >>>>>>>>>> installed.
> >>>>>>>>>> I'm pretty sure that 

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 11:28:19AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 11:17 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 11:15:14AM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-28 um 10:34 schrieb Gilles Chanteperdrix:
> >>> On Tue, Jun 28, 2016 at 10:31:00AM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-27 um 18:46 schrieb Gilles Chanteperdrix:
> >>>>> On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-07 um 19:00 schrieb Gilles Chanteperdrix:
> >>>>>>> On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> >>>>>>>>> On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal 
> >>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>> Dear all,
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 
> >>>>>>>>>>>>>>>> 3.0.43" to
> >>>>>>>>>>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. 
> >>>>>>>>>>>>>>>> The system
> >>>>>>>>>>>>>>>> is now up and running and works stable. Unfortunately we see 
> >>>>>>>>>>>>>>>> a
> >>>>>>>>>>>>>>>> difference in the performance. Our old combination (XENOMAI 
> >>>>>>>>>>>>>>>> 2.6.2.1 +
> >>>>>>>>>>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>>>>>>>>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 
> >>>>>>>>>>>>>>>> in our old
> >>>>>>>>>>>>>>>> system.  Every call of xnpod_schedule_handler interrupts our 
> >>>>>>>>>>>>>>>> main
> >>>>>>>>>>>>>>>> XENOMAI task with priority = 95.
> >>>>>>>> As I wrote above, I get interrupts 1037 handled by 
> >>>>>>>> rthal_apc_handler()
> >>>>>>>> and 1038 handled by xnpod_schedule_handler() while my realtime task
> >>>>>>>> is running on kernel 3.10.53 with Xenomai 2.6.4.
> >>>>>>>> On kernel 3.0.43 with Xenomai 2.6.4 there are no interrupts, except 
> >>>>>>>> the
> >>>>>>>> once that are send by my board using GPIOs, but this virtual 
> >>>>>>>> interrupts
> >>>>>>>> are assigned to Xenomai and Linux as well but I didn't see a handler
> >>>>>>>> installed.
> >>>>>>>> I'm pretty sure that these interrupts are slowing down my system, but
> >>>>>>>> where do they come from ?
> >>>>>>>> why didn't I see them on Kernel 3.0.43 with Xenomai 2.6.4 ?
> >>>>>>>> how long do they need to process ?
> >>>>>>> How do you mean you do not see them? If you are talking about the
> >>>>>>> rescheduling API, it used no to be bound to a virq (so, it would
> >>>>>>> have a different irq number on cortex A9, something between 0 and 31
> >>>>>>> tha

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 11:15:14AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-28 um 10:34 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 28, 2016 at 10:31:00AM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-27 um 18:46 schrieb Gilles Chanteperdrix:
> >>> On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-07 um 19:00 schrieb Gilles Chanteperdrix:
> >>>>> On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> >>>>>>> On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>>>>>>>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>>>> Dear all,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 
> >>>>>>>>>>>>>> 3.0.43" to
> >>>>>>>>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. 
> >>>>>>>>>>>>>> The system
> >>>>>>>>>>>>>> is now up and running and works stable. Unfortunately we see a
> >>>>>>>>>>>>>> difference in the performance. Our old combination (XENOMAI 
> >>>>>>>>>>>>>> 2.6.2.1 +
> >>>>>>>>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>>>>>>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in 
> >>>>>>>>>>>>>> our old
> >>>>>>>>>>>>>> system.  Every call of xnpod_schedule_handler interrupts our 
> >>>>>>>>>>>>>> main
> >>>>>>>>>>>>>> XENOMAI task with priority = 95.
> >>>>>> As I wrote above, I get interrupts 1037 handled by rthal_apc_handler()
> >>>>>> and 1038 handled by xnpod_schedule_handler() while my realtime task
> >>>>>> is running on kernel 3.10.53 with Xenomai 2.6.4.
> >>>>>> On kernel 3.0.43 with Xenomai 2.6.4 there are no interrupts, except the
> >>>>>> once that are send by my board using GPIOs, but this virtual interrupts
> >>>>>> are assigned to Xenomai and Linux as well but I didn't see a handler
> >>>>>> installed.
> >>>>>> I'm pretty sure that these interrupts are slowing down my system, but
> >>>>>> where do they come from ?
> >>>>>> why didn't I see them on Kernel 3.0.43 with Xenomai 2.6.4 ?
> >>>>>> how long do they need to process ?
> >>>>> How do you mean you do not see them? If you are talking about the
> >>>>> rescheduling API, it used no to be bound to a virq (so, it would
> >>>>> have a different irq number on cortex A9, something between 0 and 31
> >>>>> that would not show in the usual /proc files), I wonder if 3.0 is
> >>>>> before or after that. You do not see them in /proc, or you see them
> >>>>> and their count does not increase?
> >>>> Sorry for the long delay, we ran a lot of tests to find out what could
> >>>> be the reason for
> >>>> the performance difference.
> >>>>
> >>>> If I call cat /proc/ipipe/Xenomai I dont see the IRQ handler assigned to
> >>>> the virtual
> >>>> IRQ on Kernel 3.0.43, but it looks like thats an issue of the Kernel
> >>>>> As for where they come from, this is not a mystery, the reschedule
> >>>

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-28 Thread Gilles Chanteperdrix
On Tue, Jun 28, 2016 at 10:31:00AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-27 um 18:46 schrieb Gilles Chanteperdrix:
> > On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-07 um 19:00 schrieb Gilles Chanteperdrix:
> >>> On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> >>>>> On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>>>>>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>>>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>>>> Dear all,
> >>>>>>>>>>>>
> >>>>>>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 
> >>>>>>>>>>>> 3.0.43" to
> >>>>>>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The 
> >>>>>>>>>>>> system
> >>>>>>>>>>>> is now up and running and works stable. Unfortunately we see a
> >>>>>>>>>>>> difference in the performance. Our old combination (XENOMAI 
> >>>>>>>>>>>> 2.6.2.1 +
> >>>>>>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>>>>>
> >>>>>>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>>>>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in 
> >>>>>>>>>>>> our old
> >>>>>>>>>>>> system.  Every call of xnpod_schedule_handler interrupts our main
> >>>>>>>>>>>> XENOMAI task with priority = 95.
> >>>> As I wrote above, I get interrupts 1037 handled by rthal_apc_handler()
> >>>> and 1038 handled by xnpod_schedule_handler() while my realtime task
> >>>> is running on kernel 3.10.53 with Xenomai 2.6.4.
> >>>> On kernel 3.0.43 with Xenomai 2.6.4 there are no interrupts, except the
> >>>> once that are send by my board using GPIOs, but this virtual interrupts
> >>>> are assigned to Xenomai and Linux as well but I didn't see a handler
> >>>> installed.
> >>>> I'm pretty sure that these interrupts are slowing down my system, but
> >>>> where do they come from ?
> >>>> why didn't I see them on Kernel 3.0.43 with Xenomai 2.6.4 ?
> >>>> how long do they need to process ?
> >>> How do you mean you do not see them? If you are talking about the
> >>> rescheduling API, it used no to be bound to a virq (so, it would
> >>> have a different irq number on cortex A9, something between 0 and 31
> >>> that would not show in the usual /proc files), I wonder if 3.0 is
> >>> before or after that. You do not see them in /proc, or you see them
> >>> and their count does not increase?
> >> Sorry for the long delay, we ran a lot of tests to find out what could
> >> be the reason for
> >> the performance difference.
> >>
> >> If I call cat /proc/ipipe/Xenomai I dont see the IRQ handler assigned to
> >> the virtual
> >> IRQ on Kernel 3.0.43, but it looks like thats an issue of the Kernel
> >>> As for where they come from, this is not a mystery, the reschedule
> >>> IPI is triggered when code on one cpu changes the scheduler state
> >>> (wakes up a thread for instance) on another cpu. If you want to
> >>> avoid it, do not do that. That means, do not share mutex between
> >>> threads running on different cpus, pay attention for timers to be
> >>> running on the same cpu as the thread they signal, etc...
> >>>
> >>> The APC virq is used to multiplex several services, which you can
> >>> find by grepping the sources for rthal_apc_alloc:
> >>> ./ksrc/skins/posix/apc.c:   pse51_lo

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-27 Thread Gilles Chanteperdrix
On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-07 um 19:00 schrieb Gilles Chanteperdrix:
> > On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> >>> On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>>>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>>>> Dear all,
> >>>>>>>>>>
> >>>>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 
> >>>>>>>>>> 3.0.43" to
> >>>>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The 
> >>>>>>>>>> system
> >>>>>>>>>> is now up and running and works stable. Unfortunately we see a
> >>>>>>>>>> difference in the performance. Our old combination (XENOMAI 
> >>>>>>>>>> 2.6.2.1 +
> >>>>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>>>
> >>>>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in our 
> >>>>>>>>>> old
> >>>>>>>>>> system.  Every call of xnpod_schedule_handler interrupts our main
> >>>>>>>>>> XENOMAI task with priority = 95.
> >> As I wrote above, I get interrupts 1037 handled by rthal_apc_handler()
> >> and 1038 handled by xnpod_schedule_handler() while my realtime task
> >> is running on kernel 3.10.53 with Xenomai 2.6.4.
> >> On kernel 3.0.43 with Xenomai 2.6.4 there are no interrupts, except the
> >> once that are send by my board using GPIOs, but this virtual interrupts
> >> are assigned to Xenomai and Linux as well but I didn't see a handler
> >> installed.
> >> I'm pretty sure that these interrupts are slowing down my system, but
> >> where do they come from ?
> >> why didn't I see them on Kernel 3.0.43 with Xenomai 2.6.4 ?
> >> how long do they need to process ?
> > How do you mean you do not see them? If you are talking about the
> > rescheduling API, it used not to be bound to a virq (so, it would
> > have a different irq number on cortex A9, something between 0 and 31
> > that would not show in the usual /proc files); I wonder if 3.0 is
> > before or after that. Do you not see them in /proc, or do you see
> > them and their count does not increase?
> Sorry for the long delay, we ran a lot of tests to find out what could 
> be the reason for
> the performance difference.
> 
> If I call cat /proc/ipipe/Xenomai I don't see the IRQ handler assigned
> to the virtual IRQ on kernel 3.0.43, but it looks like that's an issue
> of the kernel.
> > As for where they come from, this is not a mystery, the reschedule
> > IPI is triggered when code on one cpu changes the scheduler state
> > (wakes up a thread for instance) on another cpu. If you want to
> > avoid it, do not do that. That means, do not share mutex between
> > threads running on different cpus, pay attention for timers to be
> > running on the same cpu as the thread they signal, etc...
> >
> > The APC virq is used to multiplex several services, which you can
> > find by grepping the sources for rthal_apc_alloc:
> > ./ksrc/skins/posix/apc.c:   pse51_lostage_apc = 
> > rthal_apc_alloc("pse51_lostage_handler",
> > ./ksrc/skins/rtdm/device.c: rtdm_apc = rthal_apc_alloc("deferred RTDM 
> > close", rtdm_apc_handler,
> > ./ksrc/nucleus/registry.c:  rthal_apc_alloc("registry_export", 
> > ®istry_proc_schedule, NULL);
> > ./ksrc/nucleus/pipe.c:  rthal_apc_alloc("pipe_wakeup", 
> > &xnpipe_wakeup_proc, NULL);
> > ./ksrc/nucleus/shadow.c:rthal_apc_alloc("lostage_handler", 
> > &lostage_handler, NULL);
> > ./ksrc/nucleus/select.c:
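The advice quoted above about avoiding the reschedule IPI boils down to keeping a real-time thread and everything that wakes it (mutex owners, timers) on the same CPU. This is a hedged plain-Linux sketch of CPU pinning using os.sched_setaffinity, not the Xenomai API (a Xenomai task would use the native skin's own affinity mechanism); it only illustrates the technique:

```python
import os

def pin_to_cpu(cpu):
    """Pin the calling process to a single CPU and return the old mask.

    Keeping the real-time thread and the code that changes its
    scheduler state (wakeups, timers) on one CPU means no cross-CPU
    reschedule IPI is needed, which is the point made above.
    """
    previous = os.sched_getaffinity(0)  # current allowed-CPU set
    os.sched_setaffinity(0, {cpu})
    return previous
```

Usage: call pin_to_cpu(0) early in the thread that must not be disturbed, and make sure any thread that signals it is pinned to the same CPU.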

Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-27 Thread Gilles Chanteperdrix
On Mon, Jun 27, 2016 at 05:55:12PM +0200, Wolfgang Netbal wrote:
> - Creating kernel 3.0.43 with
>   Xenomai 2.6.4 and copying it to the new system -> still 7% slower

This contradicts what you said here:
https://xenomai.org/pipermail/xenomai/2016-June/036370.html

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] xenomai patch for kernel & omap

2016-06-27 Thread Gilles Chanteperdrix
On Mon, Jun 27, 2016 at 02:37:22PM +0300, Ran Shalit wrote:
> On Mon, Jun 27, 2016 at 2:12 PM, Gilles Chanteperdrix <
> gilles.chanteperd...@xenomai.org> wrote:
> 
> > On Mon, Jun 27, 2016 at 02:05:35PM +0300, Ran Shalit wrote:
> > > On Mon, Jun 27, 2016 at 1:59 PM, Gilles Chanteperdrix <
> > > gilles.chanteperd...@xenomai.org> wrote:
> > >
> > > > On Mon, Jun 27, 2016 at 01:58:21PM +0300, Ran Shalit wrote:
> > > > > Hello,
> > > > >
> > > > > I need to patch a kernel version (which I don't know yet the exact
> > > > version)
> > > > > for OMAP4 processor.
> > > > >
> > > > > I've seen the following post:
> > > > > https://xenomai.org/pipermail/xenomai/2015-March/033649.html
> > > > >
> > > > > which explain that patch x.y.z should support ALL kernel version up
> > to
> > > > x.y.z
> > > > >
> > > > > If I understand correctly,
> > > >
> > > > You did not understand correctly.
> > > >
> > > > --
> > > > Gilles.
> > > > https://click-hack.org
> > > >
> > >
> > > Hi Gilles,
> > >
> > > When I used xenomai before (with zynq) , I worked with the exact kernel
> > > version 3.8.13 as this patch requires (trying to merge older kernel was
> > > too difficult).
> > >
> > > Is it the correct thing to do ?
> > > I mean, when we have some old kernel, i.e. to try to merge it with the
> > > required kernel version as the patch requires ?
> >
> > There is no "correct thing". Do whichever is easiest for you.
> > For me, the simpler solution is to use a kernel for which
> > an I-pipe patch already exists, but YMMV.
> >
> > You have a large choice of them here:
> > https://xenomai.org/downloads/ipipe/v3.x/arm/
> > and here:
> > https://xenomai.org/downloads/ipipe/v4.x/arm/
> >
> > The rule for matching I-pipe versions with Xenomai versions is
> > explained in the mail you quote.
> >
> > --
> > Gilles.
> > https://click-hack.org
> >
> 
> Right,
> I now found the detailed explanation in the reply (I missed that part
> before):
> https://xenomai.org/pipermail/xenomai/2015-March/033648.html
> 
> Another thing if I may:
> There are no "omap" processor pre/post patches. Does that mean the "arm"
> patch is relevant for OMAP?

You will find the list of supported processors here; whether they
need a pre/post patch is also indicated:
https://xenomai.org/embedded-hardware/#ARM

Regards.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] xenomai patch for kernel & omap

2016-06-27 Thread Gilles Chanteperdrix
On Mon, Jun 27, 2016 at 02:05:35PM +0300, Ran Shalit wrote:
> On Mon, Jun 27, 2016 at 1:59 PM, Gilles Chanteperdrix <
> gilles.chanteperd...@xenomai.org> wrote:
> 
> > On Mon, Jun 27, 2016 at 01:58:21PM +0300, Ran Shalit wrote:
> > > Hello,
> > >
> > > I need to patch a kernel version (which I don't know yet the exact
> > version)
> > > for OMAP4 processor.
> > >
> > > I've seen the following post:
> > > https://xenomai.org/pipermail/xenomai/2015-March/033649.html
> > >
> > > which explains that patch x.y.z should support ALL kernel versions up to
> > > x.y.z
> > >
> > > If I understand correctly,
> >
> > You did not understand correctly.
> >
> > --
> > Gilles.
> > https://click-hack.org
> >
> 
> Hi Gilles,
> 
> When I used xenomai before (with zynq) , I worked with the exact kernel
> version 3.8.13 as this patch requires (trying to merge older kernel was
> too difficult).
> 
> Is it the correct thing to do ?
> I mean, when we have some old kernel, i.e. to try to merge it with the
> required kernel version as the patch requires ?

There is no "correct thing". Do the one that is the easiest for you.
For me, the simplest solution is to use a kernel for which
an I-pipe patch already exists, but YMMV.

You have a large choice of them here:
https://xenomai.org/downloads/ipipe/v3.x/arm/
and here:
https://xenomai.org/downloads/ipipe/v4.x/arm/

The rule for matching I-pipe versions with Xenomai versions is
explained in the mail you quote.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] xenomai patch for kernel & omap

2016-06-27 Thread Gilles Chanteperdrix
On Mon, Jun 27, 2016 at 01:58:21PM +0300, Ran Shalit wrote:
> Hello,
> 
> I need to patch a kernel version (which I don't know yet the exact version)
> for OMAP4 processor.
> 
> I've seen the following post:
> https://xenomai.org/pipermail/xenomai/2015-March/033649.html
> 
> which explains that patch x.y.z should support ALL kernel versions up to x.y.z
> 
> If I understand correctly,

You did not understand correctly.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] socket XDDP recvfrom timeout ?

2016-06-24 Thread Gilles Chanteperdrix
On Fri, Jun 24, 2016 at 04:37:12PM +0200, Laurent LEQUIEVRE wrote:
> forgot to specify : Xenomai 3.0.2

Just a simple reminder. The Xenomai mailing list is a "subscriber
only" mailing list. Which means you must be subscribed to be allowed
to post. And when I say you, I mean "one e-mail address". So, if you
want to use a second mail address, you must subscribe this second
mail address (and disable mail delivery if you do not want to
receive the e-mails twice).
-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Xenomai 3 vs 2.6: rtdm_irq_request() calling task context change

2016-06-21 Thread Gilles Chanteperdrix
On Tue, Jun 21, 2016 at 11:22:37AM +0200, Stephane Grosjean wrote:
> Hi,
> 
> https://xenomai.org/migrating-from-xenomai-2-x-to-3-x/ says:
> 
> "rtdm_irq_request/free() and rtdm_irq_enable/disable() call pairs must 
> be called from a Linux task context, which is a restriction that did not 
> exist previously with Xenomai 2.x."
> 
> and, a few lines lower:
> 
> "Since allocating, releasing, enabling or disabling real-time interrupts 
> is most commonly done from driver initialization/cleanup context already,"
> 
> I was not aware of these Linux kernel driver requirements...

These are Xenomai requirements, not Linux kernel driver requirements.

> On the 
> contrary, regarding linux-can drivers (for example), the IRQ is 
> requested only when the device is opened, not when it is probed. And it 
> is freed when the device is closed.

the ->open_rt and ->close_rt callbacks in RTDM drivers have not been
used in in-tree RTDM drivers for some time in 2.x. In 3.x, this is
official: you can no longer implement primary-mode open/close, see:
https://xenomai.org/migrating-from-xenomai-2-x-to-3-x/#Updated_device_operation_descriptor

So, open and close can be considered initialization/cleanup context.

>  In my mind, this was a good choice 
> because it avoids useless chained ISRs during the times the devices are 
> not opened. So, the above assumption is a little rough, IMHO...
> 
> Anyway, this is how new things are... Any advice for us not to change 
> the whole architecture of our driver will be welcome.

I believe you do not need to change the whole architecture of your
driver.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] slackspot

2016-06-20 Thread Gilles Chanteperdrix
On Mon, Jun 20, 2016 at 01:15:38PM -0400, Lowell Gilbert wrote:
> Gilles Chanteperdrix  writes:
> 
> > There is one possibility: if your thread runs with the
> > SCHED_OTHER/SCHED_WEAK scheduling policy, I would not expect the
> > relaxes to trigger anything, as you have been asking for them by
> > choosing to use these policies.
> 
> Yes, I should have mentioned that topic. I have run this under both
> SCHED_FIFO and SCHED_RR.

Ok. Could you post an example allowing us to reproduce the issue?


-- 
Gilles.
https://click-hack.org



Re: [Xenomai] slackspot

2016-06-20 Thread Gilles Chanteperdrix
On Mon, Jun 20, 2016 at 11:19:49AM -0400, Lowell Gilbert wrote:
> Philippe Gerum  writes:
> 
> > On 06/16/2016 04:43 PM, Lowell Gilbert wrote:
> >>  I have not run /bin/relax,
> >> mentioned in the manual page, because I don't have such a program and
> >> can't find any reference to it.
> >> 
> >> I am using 3.0.2.
> >
> >
> > /bin/relax is only the name of a fake executable that would have
> > produced the typical output described in the example, read /foo/bar if
> > that helps.
> 
> That seems obvious now that you've explained it. 
> 
> Maybe something like the following would help:
> --- doc/asciidoc/man1/slackspot.adoc.~1~	2016-02-18 07:17:40.0 -0500
> +++ doc/asciidoc/man1/slackspot.adoc	2016-06-20 11:16:22.130252140 -0400
> @@ -91,7 +91,7 @@
> 
>  In the following scenario, the _target_ system built with the
>  CONFIG_XENO_OPT_DEBUG_TRACE_RELAX feature enabled in the kernel
> -configuration, just ran the _/bin/relax_ program.
> +configuration, just ran the Xenomai-enabled _/bin/relax_ program.
> 
>  This program caused a transition to secondary mode switch of the
>  current task (_Task 2_) as a result of calling +putchar()+. The Cobalt
> 
> > Once TRACE_RELAX is enabled in the kernel, running any application that
> > causes switches to secondary mode should populate
> > /proc/xenomai/debug/relax with event records.
> 
> Hmm. I'm still not getting the records, and the SIGXCPU method isn't
> giving me anything either. But the MSW, CSW, and XSC counts are all
> rising at exactly the rate that the thread wakes up.
> 
> Here's the design: I have a POSIX thread which calls into an RTDM ioctl
> which blocks on an RTDM event, then copies a couple of dozen bytes of
> data back to the POSIX thread. The POSIX thread does some calculations
> on the data, then uses another ioctl to write the results back to the
> hardware. And repeats.
> 
> Is it possible that the thread might be relaxing in the kernel (RTDM
> ioctl) and not receiving signals as a result?
> 
> I have two theories about what is happening.
>1. I have made an error somewhere that is resulting in unnecessary
>   relax events. In that case, getting the stack traces is the key to
>   finding said error. I can create a small example if that helps me
>   get assistance in looking at it.
>2. Having a real-time pthread block in an RTDM ioctl may be an
>   inherently bad idea. I notice that all synchronization primitives
>   are characterized by "might-switch". Perhaps I should be using a
>   device read to block the POSIX thread instead of doing it in an
>   ioctl()?
> 
> My current plan is to use the I-pipe tracer, but if there is an easier
> way to move forward, I would appreciate advice to that effect.

There is one possibility: if your thread runs with the
SCHED_OTHER/SCHED_WEAK scheduling policy, I would not expect the
relaxes to trigger anything, as you have been asking for them by
choosing to use these policies.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-19 Thread Gilles Chanteperdrix
On Sun, Jun 19, 2016 at 07:28:43PM +0200, Philippe Gerum wrote:
> On 06/18/2016 03:37 PM, Gilles Chanteperdrix wrote:
> > On Sat, Jun 18, 2016 at 03:21:40PM +0200, Philippe Gerum wrote:
> >> On 06/18/2016 03:08 PM, Gilles Chanteperdrix wrote:
> >>> Hi Philippe,
> >>>
> >>> it seems since I-pipe commit
> >>> b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
> >>> Revert "ipipe: Register function tracer for direct and exclusive 
> >>> invocation"
> >>> 
> >>> This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.
> >>> 
> >>> We now have an I-pipe-compatible dispatching function for ftrace.
> >>>
> >>> The ftrace dispatching function causes the following warning at
> >>> boot on x86_32 with all warnings/debugs enabled:
> >>> [4.730812] I-pipe: head domain Xenomai registered.
> >>> [4.737967] I-pipe: Detected illicit call from head domain 'Xenomai'
> >>> [4.737967] into a regular Linux service
> >>>
> >>> Because it calls preempt_disable(), which is not safe to call from
> >>> outside the root domain, when running over 2.6.x on an architecture such
> >>> as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.
> >>>
> >>> Should we make the ftrace dispatching function really I-pipe
> >>> compatible by calling ipipe_preempt_disable() in that case instead?
> >>> or should we make the patch revert conditional to !IPIPE_LEGACY or
> >>> IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
> >>> tracer work in that case).
> >>>
> >>
> >> I would go for the change which has the lesser impact on the mainline
> >> code; that would be option #1.
> > 
> > Ok. Something else. Commit fdb5d54d04b8c3b6b6a6ad7ac2b6248cf0b415e0:
> > ipipe: Avoid rescheduling from __ftrace_ops_list_func in illegal 
> > contexts
> > 
> > The ftrace callback dispatcher may be called with hard irqs disabled or
> > even over the head domain. We cannot allow any rescheduling in that
> > case, but also do not have: only non-urgent events are expected to be
> > kicked-off from ftrace callbacks, and those will at latest be handled on
> > the next Linux timer tick.
> > 
> > Added the following code to the ftrace dispatch function:
> > +#ifdef CONFIG_IPIPE
> > +   if (hard_irqs_disabled() || !__ipipe_root_p)
> > +   /*
> > +* Nothing urgent to schedule here. At latest the timer tick
> > +* will pick up whatever the tracing functions kicked off.
> > +*/
> > +   preempt_enable_no_resched_notrace();
> > +   else
> > +#endif
> > +   preempt_enable_notrace();
> > 
> > Shouldn't this go into the generic definition of preempt_enable()? I
> > mean, if in the !LEGACY case it is now legal to call
> > preempt_enable() over non-root contexts or with hard irqs off,
> > does it really make sense to fix all the preempt_enable() spots one
> > by one?
> > 
> 
> That would add a useless branch to hundreds of code paths, which can
> never run with hard irqs off and/or over the head domain. Besides, this
> might prevent the context check in schedule_debug() to run, papering
> over a bug. On the other end, specifically annotating the very few code
> spots affected by the pipeline, which must not reschedule after
> re-enabling preemption seems better maintenance-wise.
> 
> I would just use a specific annotation folding all the lengthy code
> block above into a single statement.

Yeah, well, the above code does not work anyway. See other mails
about ftrace.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] need help to Xenomai Cobalt RTnet socket UDP recvfrom non blocking

2016-06-18 Thread Gilles Chanteperdrix
On Fri, Jun 17, 2016 at 03:04:54PM +0200, laurent LEQUIEVRE wrote:
> Hi Gilles,
> 
> It works fine with:
> int64_t timeout = 3e09;  // 3 seconds to test
> int return_ioctl = ioctl(udp_socket,RTNET_RTIOC_TIMEOUT,&timeout);
> 
> but I needed to add these 'includes' to compile:
> #include 
> #include 
> #define RTIOC_TYPE_NETWORK  RTDM_CLASS_NETWORK
> #define RTNET_RTIOC_TIMEOUT _IOW(RTIOC_TYPE_NETWORK,  0x11, int64_t)
> 
> I read in the documentation that RTnet is included in Xenomai 3, so why
> is there no rtnet.h file installed with the Xenomai include files?

I simply forgot that part. I have used the headers for programs in
Xenomai sources only, where the headers are available. This needs a
cleanup, as the headers mix kernel and user declarations, and in
3.x, the RTDM drivers headers no longer start with rt. So rtnet.h
would probably become rtdm/net.h, and some part of it would have to
move to rtdm/uapi/net.h.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-18 Thread Gilles Chanteperdrix
On Sat, Jun 18, 2016 at 08:20:31PM +0200, Gilles Chanteperdrix wrote:
> On Sat, Jun 18, 2016 at 07:20:41PM +0200, Gilles Chanteperdrix wrote:
> > On Sat, Jun 18, 2016 at 03:52:35PM +0200, Gilles Chanteperdrix wrote:
> > > On Sat, Jun 18, 2016 at 03:21:40PM +0200, Philippe Gerum wrote:
> > > > On 06/18/2016 03:08 PM, Gilles Chanteperdrix wrote:
> > > > > Hi Philippe,
> > > > > 
> > > > > it seems since I-pipe commit
> > > > > b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
> > > > > Revert "ipipe: Register function tracer for direct and exclusive 
> > > > > invocation"
> > > > > 
> > > > > This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.
> > > > > 
> > > > > We now have an I-pipe-compatible dispatching function for ftrace.
> > > > > 
> > > > > The ftrace dispatching function causes the following warning at
> > > > > boot on x86_32 with all warnings/debugs enabled:
> > > > > [4.730812] I-pipe: head domain Xenomai registered.
> > > > > [4.737967] I-pipe: Detected illicit call from head domain 
> > > > > 'Xenomai'
> > > > > [4.737967] into a regular Linux service
> > > > > 
> > > > > Because it calls preempt_disable(), which is not safe to call from
> > > > > outside the root domain, when running over 2.6.x on an architecture such
> > > > > as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.
> > > > > 
> > > > > Should we make the ftrace dispatching function really I-pipe
> > > > > compatible by calling ipipe_preempt_disable() in that case instead?
> > > > > or should we make the patch revert conditional to !IPIPE_LEGACY or
> > > > > IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
> > > > > tracer work in that case).
> > > > > 
> > > > 
> > > > I would go for the change which has the lesser impact on the mainline
> > > > code; that would be option #1.
> > > 
> > > Ok, now I get:
> > > 
> > > [   12.958818] I-pipe: Detected stalled head domain, probably caused by a
> > > bug.
> > > [   12.958818] A critical section may have been left unterminated.
> > > 
> > > I guess in the following sequence:
> > > 
> > > #define hard_preempt_disable()\
> > >   ({  \
> > >   unsigned long __flags__;\
> > >   __flags__ = hard_local_irq_save();  \
> > >   if (__ipipe_root_p) \
> > >   preempt_disable();  \
> > >   __flags__;  \
> > >   })
> > > 
> > > preempt_disable() gets called with the head domain stalled, while
> > > current domain is root, and ipipe_root_only() treats this as an
> > > error.
> > 
> > So, I have disabled context checking around the call to
> > preempt_disable(), see the following patch, but the tracer overhead
> > becomes huge.
> 
> We get a lot of recursion, this is the probable cause of the
> overhead.

In summary: I think the shortest solution for now is to just revert
b115c4094d734e19fa7a96be1bf3958b3d244b8b

Using ftrace with I-pipe currently does not work. First there is the
problem of preempt_disable/preempt_enable with legacy, but there is
also the problem of recursion: if ftrace gets called again (for
instance if there is an irq) while in __ftrace_ops_list_func,
recursion is detected and nothing gets recorded. To convince
yourself that it does, in fact, happen a lot, you can try the
following patch:

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index f8b9472..35f6d0f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4894,9 +4894,16 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op;
int bit;
 
+   if (ftrace_disabled)
+   return;
+
bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
-   if (bit < 0)
+   if (bit < 0) {
+   ftrace_disabled = 1;
+   printk("Recursion at %pF/%pF\n", (void *)ip, (void *)parent_ip);
+   ftrace_disabled = 0;
return;
+   }
 
/*
 * Some of the ops may be dynamically allocated,

To fix this, we would probably need some more bits in
trace_test_and_set_recursion, and some instrumentation of
the I-pipe core to detect the additional contexts (like say, "head 
domain in interrupt").

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-18 Thread Gilles Chanteperdrix
On Sat, Jun 18, 2016 at 07:20:41PM +0200, Gilles Chanteperdrix wrote:
> On Sat, Jun 18, 2016 at 03:52:35PM +0200, Gilles Chanteperdrix wrote:
> > On Sat, Jun 18, 2016 at 03:21:40PM +0200, Philippe Gerum wrote:
> > > On 06/18/2016 03:08 PM, Gilles Chanteperdrix wrote:
> > > > Hi Philippe,
> > > > 
> > > > it seems since I-pipe commit
> > > > b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
> > > > Revert "ipipe: Register function tracer for direct and exclusive 
> > > > invocation"
> > > > 
> > > > This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.
> > > > 
> > > > We now have an I-pipe-compatible dispatching function for ftrace.
> > > > 
> > > > The ftrace dispatching function causes the following warning at
> > > > boot on x86_32 with all warnings/debugs enabled:
> > > > [4.730812] I-pipe: head domain Xenomai registered.
> > > > [4.737967] I-pipe: Detected illicit call from head domain 'Xenomai'
> > > > [4.737967] into a regular Linux service
> > > > 
> > > > Because it calls preempt_disable(), which is not safe to call from
> > > > outside the root domain, when running over 2.6.x on an architecture such
> > > > as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.
> > > > 
> > > > Should we make the ftrace dispatching function really I-pipe
> > > > compatible by calling ipipe_preempt_disable() in that case instead?
> > > > or should we make the patch revert conditional to !IPIPE_LEGACY or
> > > > IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
> > > > tracer work in that case).
> > > > 
> > > 
> > > I would go for the change which has the lesser impact on the mainline
> > > code; that would be option #1.
> > 
> > Ok, now I get:
> > 
> > [   12.958818] I-pipe: Detected stalled head domain, probably caused by a
> > bug.
> > [   12.958818] A critical section may have been left unterminated.
> > 
> > I guess in the following sequence:
> > 
> > #define hard_preempt_disable()  \
> > ({  \
> > unsigned long __flags__;\
> > __flags__ = hard_local_irq_save();  \
> > if (__ipipe_root_p) \
> > preempt_disable();  \
> > __flags__;  \
> > })
> > 
> > preempt_disable() gets called with the head domain stalled, while
> > current domain is root, and ipipe_root_only() treats this as an
> > error.
> 
> So, I have disabled context checking around the call to
> preempt_disable(), see the following patch, but the tracer overhead
> becomes huge.

We get a lot of recursion, this is the probable cause of the
overhead.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-18 Thread Gilles Chanteperdrix
On Sat, Jun 18, 2016 at 03:52:35PM +0200, Gilles Chanteperdrix wrote:
> On Sat, Jun 18, 2016 at 03:21:40PM +0200, Philippe Gerum wrote:
> > On 06/18/2016 03:08 PM, Gilles Chanteperdrix wrote:
> > > Hi Philippe,
> > > 
> > > it seems since I-pipe commit
> > > b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
> > > Revert "ipipe: Register function tracer for direct and exclusive 
> > > invocation"
> > > 
> > > This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.
> > > 
> > > We now have an I-pipe-compatible dispatching function for ftrace.
> > > 
> > > The ftrace dispatching function causes the following warning at
> > > boot on x86_32 with all warnings/debugs enabled:
> > > [4.730812] I-pipe: head domain Xenomai registered.
> > > [4.737967] I-pipe: Detected illicit call from head domain 'Xenomai'
> > > [4.737967] into a regular Linux service
> > > 
> > > Because it calls preempt_disable(), which is not safe to call from
> > > outside the root domain, when running over 2.6.x on an architecture such
> > > as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.
> > > 
> > > Should we make the ftrace dispatching function really I-pipe
> > > compatible by calling ipipe_preempt_disable() in that case instead?
> > > or should we make the patch revert conditional to !IPIPE_LEGACY or
> > > IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
> > > tracer work in that case).
> > > 
> > 
> > I would go for the change which has the lesser impact on the mainline
> > code; that would be option #1.
> 
> Ok, now I get:
> 
> [   12.958818] I-pipe: Detected stalled head domain, probably caused by a bug.
> [   12.958818] A critical section may have been left unterminated.
> 
> I guess in the following sequence:
> 
> #define hard_preempt_disable()\
>   ({  \
>   unsigned long __flags__;\
>   __flags__ = hard_local_irq_save();  \
>   if (__ipipe_root_p) \
>   preempt_disable();  \
>   __flags__;  \
>   })
> 
> preempt_disable() gets called with the head domain stalled, while
> current domain is root, and ipipe_root_only() treats this as an
> error.

So, I have disabled context checking around the call to
preempt_disable(), see the following patch, but the tracer overhead
becomes huge.

diff --git a/include/linux/ipipe_base.h b/include/linux/ipipe_base.h
index a37358c..4ea7499 100644
--- a/include/linux/ipipe_base.h
+++ b/include/linux/ipipe_base.h
@@ -311,6 +311,8 @@ static inline void __ipipe_report_cleanup(struct mm_struct *mm) { }
 
 static inline void __ipipe_init_taskinfo(struct task_struct *p) { }
 
+#define hard_preempt_disable_notrace() ({ preempt_disable_notrace(); 0; })
+#define hard_preempt_enable_notrace(flags) ({ preempt_enable_notrace(); (void)(flags); })
 #define hard_preempt_disable() ({ preempt_disable(); 0; })
 #define hard_preempt_enable(flags) ({ preempt_enable(); (void)(flags); })
 
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 7fdf00b..216c0c9 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -126,23 +126,52 @@ do { \
 #endif /* CONFIG_PREEMPT_COUNT */
 
 #ifdef CONFIG_IPIPE
-#define hard_preempt_disable() \
-   ({  \
-   unsigned long __flags__;\
-   __flags__ = hard_local_irq_save();  \
-   if (__ipipe_root_p) \
-   preempt_disable();  \
-   __flags__;  \
+#define hard_preempt_disable_notrace() \
+   ({  \
+   unsigned long __flags__;\
+   __flags__ = hard_local_irq_save_notrace();  \
+   if (__ipipe_root_p) {   \
+   int __state__ = ipipe_disable_context_check();  \
+   preempt_disable_notrace();  \
+   ipipe_restore_context_check(__state__); \
+   }   \
+   __flags__;  \
})
 
-#define hard_preempt_enable

Re: [Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-18 Thread Gilles Chanteperdrix
On Sat, Jun 18, 2016 at 03:21:40PM +0200, Philippe Gerum wrote:
> On 06/18/2016 03:08 PM, Gilles Chanteperdrix wrote:
> > Hi Philippe,
> > 
> > it seems since I-pipe commit
> > b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
> > Revert "ipipe: Register function tracer for direct and exclusive 
> > invocation"
> > 
> > This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.
> > 
> > We now have an I-pipe-compatible dispatching function for ftrace.
> > 
> > The ftrace dispatching function causes the following warning at
> > boot on x86_32 with all warnings/debugs enabled:
> > [4.730812] I-pipe: head domain Xenomai registered.
> > [4.737967] I-pipe: Detected illicit call from head domain 'Xenomai'
> > [4.737967] into a regular Linux service
> > 
> > Because it calls preempt_disable(), which is not safe to call from
> > outside the root domain, when running over 2.6.x on an architecture such
> > as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.
> > 
> > Should we make the ftrace dispatching function really I-pipe
> > compatible by calling ipipe_preempt_disable() in that case instead?
> > or should we make the patch revert conditional to !IPIPE_LEGACY or
> > IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
> > tracer work in that case).
> > 
> 
> I would go for the change which has the lesser impact on the mainline
> code; that would be option #1.

Ok, now I get:

[   12.958818] I-pipe: Detected stalled head domain, probably caused by a bug.
[   12.958818] A critical section may have been left unterminated.

I guess in the following sequence:

#define hard_preempt_disable()  \
({  \
unsigned long __flags__;\
__flags__ = hard_local_irq_save();  \
if (__ipipe_root_p) \
preempt_disable();  \
__flags__;  \
})

preempt_disable() gets called with the head domain stalled, while
current domain is root, and ipipe_root_only() treats this as an
error.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-18 Thread Gilles Chanteperdrix
On Sat, Jun 18, 2016 at 03:21:40PM +0200, Philippe Gerum wrote:
> On 06/18/2016 03:08 PM, Gilles Chanteperdrix wrote:
> > Hi Philippe,
> > 
> > it seems since I-pipe commit
> > b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
> > Revert "ipipe: Register function tracer for direct and exclusive 
> > invocation"
> > 
> > This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.
> > 
> > We now have an I-pipe-compatible dispatching function for ftrace.
> > 
> > The ftrace dispatching function causes the following warning at
> > boot on x86_32 with all warnings/debugs enabled:
> > [4.730812] I-pipe: head domain Xenomai registered.
> > [4.737967] I-pipe: Detected illicit call from head domain 'Xenomai'
> > [4.737967] into a regular Linux service
> > 
> > Because it calls preempt_disable(), which is not safe to call from
> > outside the root domain, when running over 2.6.x on an architecture such
> > as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.
> > 
> > Should we make the ftrace dispatching function really I-pipe
> > compatible by calling ipipe_preempt_disable() in that case instead?
> > or should we make the patch revert conditional to !IPIPE_LEGACY or
> > IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
> > tracer work in that case).
> > 
> 
> I would go for the change which has the lesser impact on the mainline
> code; that would be option #1.

Ok. Something else. Commit fdb5d54d04b8c3b6b6a6ad7ac2b6248cf0b415e0:
ipipe: Avoid rescheduling from __ftrace_ops_list_func in illegal contexts

The ftrace callback dispatcher may be called with hard irqs disabled or
even over the head domain. We cannot allow any rescheduling in that
case, but also do not have to: only non-urgent events are expected to be
kicked-off from ftrace callbacks, and those will at latest be handled on
the next Linux timer tick.

Added the following code to the ftrace dispatch function:
+#ifdef CONFIG_IPIPE
+   if (hard_irqs_disabled() || !__ipipe_root_p)
+   /*
+* Nothing urgent to schedule here. At latest the timer tick
+* will pick up whatever the tracing functions kicked off.
+*/
+   preempt_enable_no_resched_notrace();
+   else
+#endif
+   preempt_enable_notrace();

Shouldn't this go into the generic definition of preempt_enable()? I
mean, if in the !LEGACY case it is now legal to call
preempt_enable() over non-root contexts or with hard irqs off,
does it really make sense to fix all the preempt_enable() spots one
by one?

-- 
Gilles.
https://click-hack.org



[Xenomai] ftrace dispatch function broken on 2.6.5

2016-06-18 Thread Gilles Chanteperdrix
Hi Philippe,

it seems since I-pipe commit
b115c4094d734e19fa7a96be1bf3958b3d244b8b on the ipipe-3.18 branch:
Revert "ipipe: Register function tracer for direct and exclusive invocation"

This reverts commit e00888b4aae45d9b84698a62079dde14c9be5fd3.

We now have an I-pipe-compatible dispatching function for ftrace.

The ftrace dispatching function causes the following warning at
boot on x86_32 with all warnings/debugs enabled:
[4.730812] I-pipe: head domain Xenomai registered.
[4.737967] I-pipe: Detected illicit call from head domain 'Xenomai'
[4.737967] into a regular Linux service

Because it calls preempt_disable(), which is not safe to call from
outside the root domain, when running over 2.6.x on an architecture such
as x86_32, which does not have IPIPE_HAVE_SAFE_THREAD_INFO.

Should we make the ftrace dispatching function really I-pipe
compatible by calling ipipe_preempt_disable() in that case instead?
Or should we make the patch revert conditional on !IPIPE_LEGACY or
IPIPE_HAVE_SAFE_THREAD_INFO (but that would make only the I-pipe
tracer work in that case)?

Regards.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] need help to Xenomai Cobalt RTnet socket UDP recvfrom non blocking

2016-06-17 Thread Gilles Chanteperdrix
On Fri, Jun 17, 2016 at 12:07:28PM +0200, Gilles Chanteperdrix wrote:
> On Fri, Jun 17, 2016 at 11:55:20AM +0200, Laurent LEQUIEVRE wrote:
> > Hello,
> > 
> > I installed Xenomai 3.0.2 and am trying to test some RTnet features needed
> > for communication with my Kuka robot arm.
> > 
> > I should specify that I want to work with the POSIX skin only.
> > 
> > I need to create a UDP socket with a timeout on the 'recvfrom' function 
> > (Non blocking recvfrom).
> > 
> > I tested this first code :
> > 
> > int udp_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
> > bind(udp_socket, ...)
> > struct timeval timeout;
> > timeout.tv_sec = 1;
> > timeout.tv_usec = 0;
> > 
> > setsockopt(udp_socket, SOL_SOCKET, SOCKET_SO_RCVTIMEO, &timeout, 
> > sizeof(timeout));
> > --> this function returns -1, errno=92, Protocol not available ???
> 
> This socket option is not implemented in RTnet sockets. You can use
> the RTNET_RTIOC_TIMEOUT ioctl instead.
> 
> 
> > 
> > recvfrom(udp_socket, ) --> blocked ??
> > 
> > I tested this second code :
> > 
> > int udp_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
> > bind(udp_socket, ...)
> > fcntl(udp_socket, F_SETFL, O_NONBLOCK);
> > --> this function return 0
> > 
> > recvfrom(udp_socket, ) --> blocked ??
> 
> RTnet UDP recvfrom does not check the O_NONBLOCK flag, you can use
> recvmsg with MSG_DONTWAIT instead.

You can also use the MSG_DONTWAIT flag in recvfrom.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] need help to Xenomai Cobalt RTnet socket UDP recvfrom non blocking

2016-06-17 Thread Gilles Chanteperdrix
On Fri, Jun 17, 2016 at 11:55:20AM +0200, Laurent LEQUIEVRE wrote:
> Hello,
> 
> I installed xenomai 3.0.2 and try to test some RTnet features needed for 
> the communication with my Kuka robot arm.
> 
> I specify that I want to work with the skin posix only.
> 
> I need to create a UDP socket with a timeout on the 'recvfrom' function 
> (Non blocking recvfrom).
> 
> I tested this first code :
> 
> int udp_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
> bind(udp_socket, ...)
> struct timeval timeout;
> timeout.tv_sec = 1;
> timeout.tv_usec = 0;
> 
> setsockopt(udp_socket, SOL_SOCKET, SOCKET_SO_RCVTIMEO, &timeout, 
> sizeof(timeout));
> --> this function return -1, errno=92, Protocol not available ???

This socket option is not implemented in RTnet sockets. You can use
the RTNET_RTIOC_TIMEOUT ioctl instead.


> 
> recvfrom(udp_socket, ) --> blocked ??
> 
> I tested this second code :
> 
> int udp_socket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
> bind(udp_socket, ...)
> fcntl(udp_socket, F_SETFL, O_NONBLOCK);
> --> this function return 0
> 
> recvfrom(udp_socket, ) --> blocked ??

RTnet UDP recvfrom does not check the O_NONBLOCK flag, you can use
recvmsg with MSG_DONTWAIT instead.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Xenomai/Cobalt on i7-3770S CPU

2016-06-16 Thread Gilles Chanteperdrix
On Thu, Jun 16, 2016 at 03:51:17PM +, Heinick, J Michael wrote:
> problem, the stripped down driver would hang the core2 computer
> just like the i7 computer.

Are you running a graphical system? If so, have you tried plugging
in a serial console or a netconsole to retrieve the kernel console
output when the hang happens? If the hang is in fact a kernel oops,
the oops message may indicate what the problem is.
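For reference, a minimal netconsole setup might look like this (the
addresses, ports, interface name and MAC address are placeholders to
adapt to your network):

```shell
# On the hanging box: send kernel messages to 192.168.0.2:6666 via eth0.
# Either as a boot parameter:
#   netconsole=6665@192.168.0.1/eth0,6666@192.168.0.2/00:11:22:33:44:55
# or loaded at runtime:
modprobe netconsole netconsole=6665@192.168.0.1/eth0,6666@192.168.0.2/00:11:22:33:44:55

# On the receiving box: capture the messages, including a possible oops.
nc -u -l 6666
```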

> Eventually, I noticed that only nrt
> handlers were specified in the ops structure. After moving our
> ioctl functions to .ioctl_rt, both the stripped down driver and
> the full driver run on both the core2 and i7 computers.

You cannot call "sleeping" services from an _nrt handler, if that
is what you were doing.

> Why the core2 computers appeared to work with only the ioctl_nrt
> handler specified in the full driver and hang the i7 computer is
> still a mystery to us.

Did the two machines run with the same kernel? Or were there
differences in the kernel configuration? Like I-pipe checks
disabled/enabled?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] SIGDEBUG_RESCNT_IMBALANCE with recursive mutex

2016-06-16 Thread Gilles Chanteperdrix
On Wed, Jun 01, 2016 at 12:58:20PM -0400, Jeffrey Melville wrote:
> On 6/1/2016 12:45 PM, Gilles Chanteperdrix wrote:
> > On Wed, Jun 01, 2016 at 11:07:35AM -0400, Jeffrey Melville wrote:
> >> Hi,
> >>
> >> Setup: Xenomai 2.6.4 (actually 2.6 git rev 4f349cf0553, with a99426
> >> cherry-picked) with kernel 3.14.17 on a Zynq and the POSIX skin using
> >> the ipipe patches included with the specified git rev)
> >>
> >> We've noticed that SIGDEBUG_RESCNT_IMBALANCE is generated when a
> >> (Xenomai) mutex is taken recursively by an NRT thread. The snippet at
> >> the bottom of this message will reproduce the issue. I omitted most of
> >> the error-checking for brevity.
> >>
> >> A couple previous threads have discussed slightly similar problems, but
> >> I never saw final resolutions:
> >> http://www.xenomai.org/pipermail/xenomai/2012-January/025278.html
> >> http://www.xenomai.org/pipermail/xenomai/2014-October/031919.html
> >>
> >> As far as "why are we doing this?", the problem area occurs in a test
> >> suite where some tests have to run as NRT threads because they don't
> >> have to run real-time and will get killed by the watchdog if they run as
> >> RT threads. Removing the Xenomai wrappers would also be complicated for
> >> reasons that are outside of the scope of this email.
> > 
> > Yeah well, this is an issue that has been known and fixed for so
> > long that I forgot we knew it:
> > 
> > https://git.xenomai.org/xenomai-3.git/commit/?id=79f0dd1cdc408b22afe301fa03805349a4a9f151
> > 
> > I will try and backport this change to 2.6 in 2.6.5. The change
> > should be easy to backport since it was made prior to most of the
> > cleanup of mutex and condvars. In 3.x, the kernel does not handle at
> > all the recursive mutex recursion count, this makes things much
> > simpler, but the code very different.
> > 
> Ok thanks. You can disregard my other email then. I'll keep an eye out
> for the fix and we will avoid that test case for the time being.
> 
> At some point after 2.6.5 I know we'll have to migrate to 3.x...

Hi,

I have pushed a fix for this issue:
https://git.xenomai.org/xenomai-2.6.git/commit/?id=8047147aff9dee9529f5561ecd7afc29c48d14db

Could you test it on your side?

Regards.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Resources for Installation of Xenomai 2.6 and 3 on iMx6 Sabre Lite Dev Board

2016-06-15 Thread Gilles Chanteperdrix
On Wed, Jun 15, 2016 at 12:54:35PM +, Sripath Roy Koganti wrote:
> I tried to install the following way
> 
> 
> Added this repo
> 
> add this repo
> deb [arch=armhf] http://repos.rcn-ee.com/debian/ jessie main
> 
> sudo apt-get install linux-image-3.8.13-xenomai-r78 
> linux-headers-3.8.13-xenomai-r78 linux-firmware-image-3.8.13-xenomai-r78
> 
> wget https://xenomai.org/downloads/xenomai/stable/xenomai-
> 2.6.4.tar.bz2
> tar xvjf xenomai-2.6.4.tar.bz2
> cd xenomai-2.6.4
> ./configure
> make
> sudo make install
> 
> 
> after that i tried to run apps from /usr/xenomai/bin  i got this
> 
> 
> Xenomai: native skin or CONFIG_XENO_OPT_PERVASIVE disabled.
> (modprobe xeno_native?)

So, you changed the procedure again? Can you not finish just once
what you started? I am sorry, but I do not know that Debian
repository or what it contains exactly; maybe it has a website with
some information?

Other than that, that error is documented in the troubleshooting
guide:
https://xenomai.org/troubleshooting-a-dual-kernel-configuration/#native_skin_or_CONFIG_XENO_OPT_PERVASIVE_disabled

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Reg: Reg: Linux assertion with debug info

2016-06-15 Thread Gilles Chanteperdrix
On Wed, Jun 15, 2016 at 12:57:54PM +0530, Sureshvs wrote:
>  Hi Gilles
> 
>
>We have enabled the debug info and attached the Assertion Log
>files for your reference.We are getting this issue during
>Ethernet operation(ssh form remote system,ethernet socket
>operation etc.).
> 
> We understood your concern Linux and xenomai version is old but
> our system is in the final stage of testing.so kindly helps us to
> resolve the issue as soon as possible.

I am afraid I can not. Since you are using such an old version,
chances are very high that that issue has been fixed already. I
suggest you browse the I-pipe git, in order to see when and how.
Using the mailing list archives may be of help too.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTDM syscalls & switching

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 06:59:48PM +0200, Jan Kiszka wrote:
> On 2016-06-14 18:42, Gilles Chanteperdrix wrote:
> > The original point of my mail was that your assertion that "Linux
> > I/O syscalls cause ping pong" is false. How Linux syscalls work, I/O
> > or otherwise, with Xenomai, has always been the same.
> 
> Yes, s/syscall/library call/ if you want to be that precise. No one
> calls syscalls directly from the application, but the standard POSIX
> function we are talking about are not doing much more than that. That's
> what matters.

Yes, and s/Linux/Xenomai/, because these library calls are
implemented in Xenomai libraries, not in glibc. Whereas glibc
library calls branch almost unambiguously to Linux syscalls, Xenomai
library calls can branch either to a Xenomai syscall or to a Linux
syscall. So calling Xenomai library calls "Linux I/O syscalls" is
very misleading.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : cobalt/kernel: Allow to restart clock_nanosleep and select after signal processing

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 09:43:09PM +0200, Jan Kiszka wrote:
> On 2016-06-14 21:24, Gilles Chanteperdrix wrote:
> > On Tue, Jun 14, 2016 at 09:09:22PM +0200, Jan Kiszka wrote:
> >>> This, of course, means an ABI change, so would only go in 3.1.x, but
> >>> is not changing the behaviour of nanosleep with signals an ABI
> >>> change already?
> >>
> >> An ABI fix - to comply with POSIX again.
> > 
> > The point is: if an application relies on the current behaviour, we
> > are breaking it, and this should not happen in the current stable
> > branch.
> 
> If an application relies on a POSIX violation, primarily while that
> application is being debugged or otherwise externally stopped? Err, well...
> 
> Anyway, I don't mind that much if it goes to stable, helping people with
> gdb, or if we keep it here until we move on to 3.1.
> 
> > 
> >>>>> consequences with that. Also the case you handle is a corner case,
> >>>>> and with your patch, handling that corner case ends-up taking the
> >>>>> majority of the nanosleep code. And finally, maybe some changes
> >>>>> could be moved in the I-pipe patch, if that helps (reading the
> >>>>> commit message, I believe it could help).
> >>>>>
> >>>>> So, all and all, I do not think this patch is acceptable as is.
> >>>>>
> >>>>
> >>>> Again, I'm open for better, concrete, less invasive, whatever design
> >>>> proposals.
> >>>>
> >>>> The code solves a generic problem that we share with Linux in a way that
> >>>> is very similar to Linux, even reuses a lot of Linux so that our I-pipe
> >>>> patch doesn't grow, and that even per arch. It may not look very
> >>>> friendly, but neither does the Linux code.
> >>>
> >>> For such a core, generic feature as syscall restarting, I see no
> >>> problem with moving some stuff to the I-pipe patch.
> >>
> >> I do: patch maintenance.
> > 
> > Well ok, but having almost the same code that does almost the same
> > thing but not really in two different syscalls can be a pain to
> > maintain too.
> 
> There are subtle differences (even less than under Linux), and it's only
> two syscalls with a pretty high probability that they won't get more
> (see Linux).

That is not the issue. The issue is that if the implementation you
propose has some flaw (which, as the Xenomai project history shows,
is often the case, and I am not pointing any finger here, this is
true for the implementations I propose too), we will have to fix it
in two places, taking the "subtle differences" into account, that
is, if we remember that the same thing is implemented in two places.
A recipe for trouble. Up to now, we have managed to avoid
duplicating code; I would prefer to continue doing so.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTDM syscalls & switching

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 10:11:30PM +0200, Philippe Gerum wrote:
> On 06/14/2016 07:13 PM, Jan Kiszka wrote:
> > On 2016-06-14 17:23, Philippe Gerum wrote:
> >> Restoring the original behavior unconditionally would not be a fix but
> >> only a work-around for your own issue. Finding a better way acceptable
> >> to all parties is on my todo list for the upcoming 3.0.3.
> > 
> > An alternative design to a plain revert of the current->conforming
> > switch could be to enhance conforming to take the scheduling class into
> > account: SCHED_WEAK and SCHED_OTHER should have a NRT as conforming
> > domain while real-time scheduling classes obviously target RT. But I
> > didn't check yet what side effects that may have, nor if there could be
> > relevant impact on syscall performance (unlikely, though).
> > 
> 
> That would not make more sense: SCHED_WEAK/OTHER is about having
> _Xenomai_ threads interfacing with the _Xenomai_ system, without
> competing with real-time threads priority-wise. But this is still about
> running Xenomai services, for synchronizing on real-time events or
> receiving messages from the real-time side. Basically, this requires to
> run from primary mode.
> 
> We need a scheme that:
> 
> - does not allow user-space to select the RTDM handler being called only
> by manipulating its runtime mode, because this would certainly lead to a
> massive mess for the application.
> 
> - does not ask the driver to deal with mode detection on each and every
> (ioctl) request it implements. With the conforming mode applicable to
> all RTDM I/O calls, only the _rt side should return ENOSYS to hand over
> some requests to the nrt side. When a request cannot be processed from
> nrt, then we know the request is wrong/invalid, returning ENOSYS makes
> no sense.
> 
> - possibly restrict the former "current" behavior to some ioctl() calls,
> as specified by the driver, not decided by the application.

I am not sure whether I have already proposed this, but I am going
to try, sorry if this is the second time. Since RTDM file
descriptors are plain Linux file descriptors, could not the ioctl
callback of these plain Linux file descriptors give access to the
ioctl_nrt RTDM callback? This way, in the case which is a problem
for Jan, he could call __real_ioctl and be done with it.

The fact that __real is not portable is not really an issue: you can
wrap it in a macro that adds __real (or not), like the __RT macro
does.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : cobalt/kernel: Allow to restart clock_nanosleep and select after signal processing

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 09:09:22PM +0200, Jan Kiszka wrote:
> > This, of course, means an ABI change, so would only go in 3.1.x, but
> > is not changing the behaviour of nanosleep with signals an ABI
> > change already?
> 
> An ABI fix - to comply with POSIX again.

The point is: if an application relies on the current behaviour, we
are breaking it, and this should not happen in the current stable
branch.

> >>> consequences with that. Also the case you handle is a corner case,
> >>> and with your patch, handling that corner case ends-up taking the
> >>> majority of the nanosleep code. And finally, maybe some changes
> >>> could be moved in the I-pipe patch, if that helps (reading the
> >>> commit message, I believe it could help).
> >>>
> >>> So, all and all, I do not think this patch is acceptable as is.
> >>>
> >>
> >> Again, I'm open for better, concrete, less invasive, whatever design
> >> proposals.
> >>
> >> The code solves a generic problem that we share with Linux in a way that
> >> is very similar to Linux, even reuses a lot of Linux so that our I-pipe
> >> patch doesn't grow, and that even per arch. It may not look very
> >> friendly, but neither does the Linux code.
> > 
> > For such a core, generic feature as syscall restarting, I see no
> > problem with moving some stuff to the I-pipe patch.
> 
> I do: patch maintenance.

Well ok, but having almost the same code that does almost the same
thing but not really in two different syscalls can be a pain to
maintain too.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : cobalt/kernel: Allow to restart clock_nanosleep and select after signal processing

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 07:28:22PM +0200, Jan Kiszka wrote:
> On 2016-06-14 19:04, Gilles Chanteperdrix wrote:
> > On Tue, Jun 14, 2016 at 06:11:19PM +0200, Jan Kiszka wrote:
> >> On 2016-06-14 17:57, Gilles Chanteperdrix wrote:
> >>> On Fri, May 27, 2016 at 08:36:43AM +0200, git repository hosting wrote:
> >>>> diff --git a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h 
> >>>> b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> >>>> index 060ce85..0f9ab14 100644
> >>>> --- a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> >>>> +++ b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> >>>> @@ -133,4 +133,10 @@ devm_hwmon_device_register_with_groups(struct 
> >>>> device *dev, const char *name,
> >>>>  #error "Xenomai/cobalt requires Linux kernel 3.10 or above"
> >>>>  #endif /* < 3.10 */
> >>>>  
> >>>> +#if LINUX_VERSION_CODE < KERNEL_VERSION(4,0,0)
> >>>> +#define cobalt_get_restart_block(p) 
> >>>> (&task_thread_info(p)->restart_block)
> >>>> +#else
> >>>> +#define cobalt_get_restart_block(p) (&(p)->restart_block)
> >>>> +#endif
> >>>> +
> >>>
> >>> This is bad. First off as explained in the comment heading
> >>> wrappers.h the wrappers are ordered by kernel version and the most
> >>> recent is first. Second, no other wrapper has a #else clause, the
> >>> idea is that we want to be able to remove some old wrappers from
> >>> time to time, and removing the #if completely should be enough.
> >>> Obviously, if you put a #else, this does not work. I agree that in
> >>> that case it is going to be hard, but please try anyway...
> >>
> >> I'm open for concrete ideas.
> > 
> > There are examples in wrapper.h, with COBALT_BACKPORT. Maybe this
> > can be used? The thing is that wrappers.h is supposed to contain
> > implementation of new services for older kernels;
> > cobalt_get_restart_block does not fit that definition.
> 
> Generally a good pattern, but the problem here is that the location of
> the structure completely changed. If there were an accessor for the
> struct in newer kernels, we could use and warp that. But also upstream
> does direct access, and I had to introduce this particular accessor for
> backporting purposes.

Yes, OK, but for instance, you could put the definition for old
kernels in wrappers.h, then in another header (like, say,
ancillaries.h), include wrappers.h and, if the symbol has not been
defined in wrappers.h, define it in ancillaries.h. This way, it will
continue to work if we carelessly remove the #ifdef in wrappers.h.
This will also make things a bit easier to maintain if the way to
access the restart_block changes again. Basically, constructs like:

#if version > 4.0
definition 1
#elif version > 3.99
definition 2
#else
definition 3
#endif

will not work with the new wrappers.h organization, because we want
one #if per kernel version, so that they can be kept in descending
kernel-version order.


> > Ok, but much of the code you add runs under nklock, I am not sure I
> 
> Just as the code tells you: everything that was under nklock before,
> still is (+ some additional time calculation for nanosleep), and
> everything that was not is also not with the patch.
> 
> What are your concerns?

My concern is this: the XNRESTART bit and the restart block contents
are not going to be modified under the feet of the thread calling
nanosleep; they are set synchronously by that same thread, so there
is no reason, a priori, for that test and the timeout calculation to
be under nklock. And in fact, I do not see why this could not be
done prior to calling nanosleep, by the cobalt core.

Another solution is for the kernel side to only implement an
absolute delay (so only clock_nanosleep(TIMER_ABSTIME)) and have
user space take care of the rest. This way, the syscall can be
restarted without having to recalculate its arguments, and in fact,
the computation of the remaining time in case the syscall is
interrupted by a signal will be more accurate if done in user space.
The same goes for select. Since in 3.x we do not have to support the
POSIX API in kernel space, there is no reason to want to implement
POSIX completely in kernel space.

This, of course, means an ABI change, so would only go in 3.1.x, but
is not changing the behaviour of nanosleep with signals an ABI
change already?

> > see the use for this. Also, this code reuses an existing status bit
> > (XNLBALERT) for a different purpose, I foresee unforeseen
> 
> No, it intro

Re: [Xenomai] [Xenomai-git] Jan Kiszka : cobalt/kernel: Allow to restart clock_nanosleep and select after signal processing

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 06:11:19PM +0200, Jan Kiszka wrote:
> On 2016-06-14 17:57, Gilles Chanteperdrix wrote:
> > On Fri, May 27, 2016 at 08:36:43AM +0200, git repository hosting wrote:
> >> diff --git a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h 
> >> b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> >> index 060ce85..0f9ab14 100644
> >> --- a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> >> +++ b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> >> @@ -133,4 +133,10 @@ devm_hwmon_device_register_with_groups(struct device 
> >> *dev, const char *name,
> >>  #error "Xenomai/cobalt requires Linux kernel 3.10 or above"
> >>  #endif /* < 3.10 */
> >>  
> >> +#if LINUX_VERSION_CODE < KERNEL_VERSION(4,0,0)
> >> +#define cobalt_get_restart_block(p)   
> >> (&task_thread_info(p)->restart_block)
> >> +#else
> >> +#define cobalt_get_restart_block(p)   (&(p)->restart_block)
> >> +#endif
> >> +
> > 
> > This is bad. First off as explained in the comment heading
> > wrappers.h the wrappers are ordered by kernel version and the most
> > recent is first. Second, no other wrapper has a #else clause, the
> > idea is that we want to be able to remove some old wrappers from
> > time to time, and removing the #if completely should be enough.
> > Obviously, if you put a #else, this does not work. I agree that in
> > that case it is going to be hard, but please try anyway...
> 
> I'm open for concrete ideas.

There are examples in wrapper.h, with COBALT_BACKPORT. Maybe this
can be used? The thing is that wrappers.h is supposed to contain
implementation of new services for older kernels;
cobalt_get_restart_block does not fit that definition.

> 
> > 
> > Other than that, I see your patch modifies each syscall handler
> > directly, can not the result be achieved in a different way? For
> > instance by factoring it in the core? Relying more on Linux syscall
> > restart mechanism.
> 
> Linux does it similarly, i.e. requires modifications on a per-syscall
> basis. As the logic is widely syscall-specific, I also don't see a
> generic way under Xenomai either. The good news is that all cases that
> Linux covers (minus futexes which we don't have) are already implemented
> in the patch, thus this shouldn't spread.

Ok, but much of the code you add runs under nklock, I am not sure I
see the use for this. Also, this code reuses an existing status bit
(XNLBALERT) for a different purpose, I foresee unforeseen
consequences with that. Also the case you handle is a corner case,
and with your patch, handling that corner case ends-up taking the
majority of the nanosleep code. And finally, maybe some changes
could be moved in the I-pipe patch, if that helps (reading the
commit message, I believe it could help).

So, all and all, I do not think this patch is acceptable as is.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTDM syscalls & switching

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 06:25:43PM +0200, Jan Kiszka wrote:
> On 2016-06-14 18:12, Gilles Chanteperdrix wrote:
> > On Tue, Jun 14, 2016 at 06:03:20PM +0200, Jan Kiszka wrote:
> >> On 2016-06-14 17:51, Gilles Chanteperdrix wrote:
> >>>> Sorry, there *is*: Shadowed thread (anything created by wrapped
> >>>> pthread_create) calls, say, read() on some Linux file descriptor, read()
> >>>> is wrapped, first probes the call on RTDM, which means migration to RT
> >>>> (for currently relaxed threads, like SCHED_WEAK), no RTDM match in the
> >>>> kernel, and finally the migration to NRT in order to do the Linux read()
> >>>> syscall. That didn't happen with the original design.
> >>>
> >>> The wrapped read does not get ping-pong when calling the Linux I/O
> >>> syscalls. The term syscall means something very precise, and
> >>> __wrap_read is not a syscall. It gets ping-pong because it calls
> >>> RTDM I/O. But if you call directly Linux I/O syscall, with say
> >>> __real_read, you do not get ping-pong. Linux I/O syscall work as
> >>> they have always have: they require xenomai threads to run in
> >>> secondary mode and will cause them to switch to secondary mode to
> >>> handle the syscall.
> >>
> >> __real_* are a non-portable workarounds for special cases. It's not what
> >> Xenomai developers are supposed to use in their applications.
> > 
> > My argument is that you do not get ping-pong with Linux I/O
> > syscalls. And whether __real_* is portable or not does not change
> > that fact.
> 
> ...or by not doing eager migration.

Once again: there is no eager migration to primary mode for Linux
I/O syscalls; that would not make any sense. There is only eager
migration to secondary mode, and it has always been like that;
nothing changed with 3.x.

> > Well, actually, I have been thinking about doing exactly that when
> > not wrapping: when the application has to call __RT(read) to call
> > Xenomai read, there is no reason for this call to fall back to Linux
> > read call. The idea came to late, but who knows, I may use it for a
> > later version.
> 
> Sure, such ideas existed a long time ago already, but then we rather
> moved towards seamless integration with Linux and easy porting to/from
> Xenomai. If basic things like I/O already requires special care, then we
> can indeed go back directly to requiring explicit call tagging. Not
> portable either, though a tag like __RT() can be more easily defined
> away on native real-time.
> 
> That said, such a requirement will not make developers of existing
> large, complex, layered applications happy. Like in our case.

I have some experience porting POSIX applications to Xenomai, and
when I did it I knew precisely at each point in the code whether the
underlying thread was a Xenomai thread, or a Linux thread. It looks
to me like a pre-requisite for programming applications based on
dual-kernels. So, when you say "large, complex, layered
application", I just hear "badly designed". The fallback in the
"wrapped read" is to allow plain Linux threads to be able to
continue to use read, not for Xenomai threads running in secondary
mode to have a high-performance Linux read, this use of read is a
corner case, and getting that corner case to work like it worked in
2.x had really bad side effects. But this is not the point of my
e-mail at all. I will let Philippe work with you on this issue.

The original point of my mail was that your assertion that "Linux
I/O syscalls cause ping pong" is false. How Linux syscalls work, I/O
or otherwise, with Xenomai, has always been the same.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTDM syscalls & switching

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 06:03:20PM +0200, Jan Kiszka wrote:
> On 2016-06-14 17:51, Gilles Chanteperdrix wrote:
> >> Sorry, there *is*: Shadowed thread (anything created by wrapped
> >> pthread_create) calls, say, read() on some Linux file descriptor, read()
> >> is wrapped, first probes the call on RTDM, which means migration to RT
> >> (for currently relaxed threads, like SCHED_WEAK), no RTDM match in the
> >> kernel, and finally the migration to NRT in order to do the Linux read()
> >> syscall. That didn't happen with the original design.
> > 
> > The wrapped read does not get ping-pong when calling the Linux I/O
> > syscalls. The term syscall means something very precise, and
> > __wrap_read is not a syscall. It gets ping-pong because it calls
> > RTDM I/O. But if you call directly Linux I/O syscall, with say
> > __real_read, you do not get ping-pong. Linux I/O syscall work as
> > they have always have: they require xenomai threads to run in
> > secondary mode and will cause them to switch to secondary mode to
> > handle the syscall.
> 
> __real_* are a non-portable workarounds for special cases. It's not what
> Xenomai developers are supposed to use in their applications.

My argument is that you do not get ping-pong with Linux I/O
syscalls. And whether __real_* is portable or not does not change
that fact.

> 
> If it were, we could also give up on wrapping and have rt_task_create
> etc. again, ie. separate APIs for RT and non-RT.

Well, actually, I have been thinking about doing exactly that when
not wrapping: when the application has to call __RT(read) to call
Xenomai read, there is no reason for this call to fall back to Linux
read call. The idea came too late, but who knows, I may use it for a
later version.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : cobalt/kernel: Allow to restart clock_nanosleep and select after signal processing

2016-06-14 Thread Gilles Chanteperdrix
On Fri, May 27, 2016 at 08:36:43AM +0200, git repository hosting wrote:
> diff --git a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h 
> b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> index 060ce85..0f9ab14 100644
> --- a/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> +++ b/kernel/cobalt/include/asm-generic/xenomai/wrappers.h
> @@ -133,4 +133,10 @@ devm_hwmon_device_register_with_groups(struct device 
> *dev, const char *name,
>  #error "Xenomai/cobalt requires Linux kernel 3.10 or above"
>  #endif /* < 3.10 */
>  
> +#if LINUX_VERSION_CODE < KERNEL_VERSION(4,0,0)
> +#define cobalt_get_restart_block(p)  (&task_thread_info(p)->restart_block)
> +#else
> +#define cobalt_get_restart_block(p)  (&(p)->restart_block)
> +#endif
> +

This is bad. First off as explained in the comment heading
wrappers.h the wrappers are ordered by kernel version and the most
recent is first. Second, no other wrapper has a #else clause, the
idea is that we want to be able to remove some old wrappers from
time to time, and removing the #if completely should be enough.
Obviously, if you put a #else, this does not work. I agree that in
that case it is going to be hard, but please try anyway...

Other than that, I see your patch modifies each syscall handler
directly, can not the result be achieved in a different way? For
instance by factoring it in the core? Relying more on Linux syscall
restart mechanism.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] RTDM syscalls & switching

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 05:43:45PM +0200, Jan Kiszka wrote:
> On 2016-06-14 17:38, Gilles Chanteperdrix wrote:
> > On Tue, Jun 14, 2016 at 05:27:54PM +0200, Jan Kiszka wrote:
> >> On 2016-06-14 17:23, Philippe Gerum wrote:
> >>> On 06/14/2016 05:09 PM, Jan Kiszka wrote:
> >>>> On 2016-05-13 17:32, Jan Kiszka wrote:
> >>>>> On 2016-05-13 15:38, Philippe Gerum wrote:
> >>>>>> On 05/13/2016 07:54 AM, Jan Kiszka wrote:
> >>>>>>> On 2016-05-13 00:26, Philippe Gerum wrote:
> >>>>>>>> On 05/12/2016 09:27 PM, Jan Kiszka wrote:
> >>>>>>>>> On 2016-05-12 21:08, Philippe Gerum wrote:
> >>>>>>>>>> On 05/12/2016 08:42 PM, Jan Kiszka wrote:
> >>>>>>>>>>> On 2016-05-12 20:35, Philippe Gerum wrote:
> >>>>>>>>>>>> On 05/12/2016 08:24 PM, Jan Kiszka wrote:
> >>>>>>>>>>>>> On 2016-05-12 20:20, Gilles Chanteperdrix wrote:
> >>>>>>>>>>>>>> On Thu, May 12, 2016 at 07:17:15PM +0200, Jan Kiszka wrote:
> >>>>>>>>>>>>>>> On 2016-05-12 19:12, Gilles Chanteperdrix wrote:
> >>>>>>>>>>>>>>>> On Thu, May 12, 2016 at 06:59:04PM +0200, Gilles 
> >>>>>>>>>>>>>>>> Chanteperdrix wrote:
> >>>>>>>>>>>>>>>>> On Thu, May 12, 2016 at 06:50:03PM +0200, Jan Kiszka wrote:
> >>>>>>>>>>>>>>>>>> On 2016-05-12 18:31, Gilles Chanteperdrix wrote:
> >>>>>>>>>>>>>>>>>>> On Thu, May 12, 2016 at 06:06:16PM +0200, Jan Kiszka 
> >>>>>>>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>>>>>>> Gilles,
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> regarding commit bec5d0dd42 (rtdm: make syscalls 
> >>>>>>>>>>>>>>>>>>>> conforming rather than
> >>>>>>>>>>>>>>>>>>>> current) - I remember a discussion on that topic, but I 
> >>>>>>>>>>>>>>>>>>>> do not find its
> >>>>>>>>>>>>>>>>>>>> traces any more. Do you have a pointer
> >>>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>>> In any case, I'm confronted with a use case for the old 
> >>>>>>>>>>>>>>>>>>>> (Xenomai 2),
> >>>>>>>>>>>>>>>>>>>> lazy switching behaviour: lightweight, performance 
> >>>>>>>>>>>>>>>>>>>> sensitive IOCTL
> >>>>>>>>>>>>>>>>>>>> services that can (and should) be called without any 
> >>>>>>>>>>>>>>>>>>>> switching from both
> >>>>>>>>>>>>>>>>>>>> domains.
> >>>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>> Why not using a plain linux driver? ioctl_nrt callbacks 
> >>>>>>>>>>>>>>>>>>> are
> >>>>>>>>>>>>>>>>>>> redundant with plain linux drivers.
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> Because that enforces the calling layer to either call the 
> >>>>>>>>>>>>>>>>>> same service
> >>>>>>>>>>>>>>>>>> via a plain Linux device if the calling thread is 
> >>>>>>>>>>>>>>>>>> currently relaxed or
> >>>>>>>>>>>>>>>>>> go for the RT device if the caller is in primary. Doable, 
> >>>>>>>>>>>>>>>>>> but I would
> >>

Re: [Xenomai] RTDM syscalls & switching

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 05:27:54PM +0200, Jan Kiszka wrote:
> On 2016-06-14 17:23, Philippe Gerum wrote:
> > On 06/14/2016 05:09 PM, Jan Kiszka wrote:
> >> On 2016-05-13 17:32, Jan Kiszka wrote:
> >>> On 2016-05-13 15:38, Philippe Gerum wrote:
> >>>> On 05/13/2016 07:54 AM, Jan Kiszka wrote:
> >>>>> On 2016-05-13 00:26, Philippe Gerum wrote:
> >>>>>> On 05/12/2016 09:27 PM, Jan Kiszka wrote:
> >>>>>>> On 2016-05-12 21:08, Philippe Gerum wrote:
> >>>>>>>> On 05/12/2016 08:42 PM, Jan Kiszka wrote:
> >>>>>>>>> On 2016-05-12 20:35, Philippe Gerum wrote:
> >>>>>>>>>> On 05/12/2016 08:24 PM, Jan Kiszka wrote:
> >>>>>>>>>>> On 2016-05-12 20:20, Gilles Chanteperdrix wrote:
> >>>>>>>>>>>> On Thu, May 12, 2016 at 07:17:15PM +0200, Jan Kiszka wrote:
> >>>>>>>>>>>>> On 2016-05-12 19:12, Gilles Chanteperdrix wrote:
> >>>>>>>>>>>>>> On Thu, May 12, 2016 at 06:59:04PM +0200, Gilles Chanteperdrix 
> >>>>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>>> On Thu, May 12, 2016 at 06:50:03PM +0200, Jan Kiszka wrote:
> >>>>>>>>>>>>>>>> On 2016-05-12 18:31, Gilles Chanteperdrix wrote:
> >>>>>>>>>>>>>>>>> On Thu, May 12, 2016 at 06:06:16PM +0200, Jan Kiszka wrote:
> >>>>>>>>>>>>>>>>>> Gilles,
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> regarding commit bec5d0dd42 (rtdm: make syscalls 
> >>>>>>>>>>>>>>>>>> conforming rather than
> >>>>>>>>>>>>>>>>>> current) - I remember a discussion on that topic, but I do 
> >>>>>>>>>>>>>>>>>> not find its
> >>>>>>>>>>>>>>>>>> traces any more. Do you have a pointer
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> In any case, I'm confronted with a use case for the old 
> >>>>>>>>>>>>>>>>>> (Xenomai 2),
> >>>>>>>>>>>>>>>>>> lazy switching behaviour: lightweight, performance 
> >>>>>>>>>>>>>>>>>> sensitive IOCTL
> >>>>>>>>>>>>>>>>>> services that can (and should) be called without any 
> >>>>>>>>>>>>>>>>>> switching from both
> >>>>>>>>>>>>>>>>>> domains.
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>> Why not using a plain linux driver? ioctl_nrt callbacks are
> >>>>>>>>>>>>>>>>> redundant with plain linux drivers.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> Because that enforces the calling layer to either call the 
> >>>>>>>>>>>>>>>> same service
> >>>>>>>>>>>>>>>> via a plain Linux device if the calling thread is currently 
> >>>>>>>>>>>>>>>> relaxed or
> >>>>>>>>>>>>>>>> go for the RT device if the caller is in primary. Doable, 
> >>>>>>>>>>>>>>>> but I would
> >>>>>>>>>>>>>>>> really like to avoid this pain for the users.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>>>> What were the arguments in favour of migrating threads to 
> >>>>>>>>>>>>>>>>>> real-time first?
> >>>>>>>>>>>>>>>>>>
> >>>>>>

Re: [Xenomai] Resources for Installation of Xenomai 2.6 and 3 on iMx6 Sabre Lite Dev Board

2016-06-14 Thread Gilles Chanteperdrix
On Tue, Jun 14, 2016 at 04:23:26AM +, Sripath Roy Koganti wrote:
> I am done with building a new kernel from BBB. but it took me more
> than 4 hours to complete the kernel compilation. 

Yes, that is the downside of compiling on target. Did you expect a
processor with such low power consumption to have the power of your PC?

> Now i have zImage and Image available in arch/arm/boot/ 
> unable to install the kernel

I have the same remark as before. You tell us something is wrong,
but you do not give us any information. We need to see the exact
error you get in order to be able to help you.

Anyway, what is needed to install the kernel depends on the
distribution you use, the bootloader you use, etc., but the good
news is that installing the kernel with Xenomai included is no
different from installing any custom-built Linux kernel, so the
information you find on your board's website applies.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] PowerPC hardware watchpoints unstable with GDB and Xenomai

2016-06-13 Thread Gilles Chanteperdrix
On Mon, Jun 13, 2016 at 03:14:05PM +, Lassi Niemistö wrote:
> > Whether or not you should contact the gdb mailing list is simple: if you 
> > get the issue without Xenomai, then it is a 
> > gdb issue, if you get the issue only with Xenomai, then it is a Xenomai 
> > issue, and there is no reason to bother the 
> > gdb mailing list.
> 
> Well, I had no idea which side contains the Xenomai support and I wished the 
> GDB guys to provide some debugging hints. But if you see it most probably as 
> a Xenomai issue, let's not cross-post it then.
> 
> 
> > Xenomai 2.6.1 is pretty old, a lot of things have been fixed since then, 
> > including issues with gdb. So, do you get the same issue with Xenomai 2.6.4?
> 
> I will find out if we have the possibility of trying the version
> update with sensible effort. 

The update should be painless: the two versions belong to the same
branch and are ABI compatible. This means that you do not even need
to recompile your applications; you simply need to compile the new
Xenomai version. Also, recompiling the kernel support should be
enough.

> 
> A thought: I have been getting "warning: Unable to find libthread_db matching 
> inferior's thread library, thread debugging will not be available." at GDB 
> startup, but ignored it this far as the threading has worked just fine.. 
> Might this have anything to do with my issue, and does anyone know how to get 
> rid of the message on Xenomai? I know how to specify the library for GDB, but 
> which one should be the correct for Xenomai?

No idea, I have always debugged threaded programs with
libthread_db.so, and it has to come from the toolchain used to
compile the target gdb/gdbserver. And if using gdbserver, the
cross-gdb running on the host and the gdbserver running on the
target should have the same version too.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] PowerPC hardware watchpoints unstable with GDB and Xenomai

2016-06-13 Thread Gilles Chanteperdrix
On Mon, Jun 13, 2016 at 02:01:10PM +, Lassi Niemistö wrote:
> Hello,
> 
> Note: please keep g...@sourceware.org as a recipient - I tried to post on 
> both but xenomai rejected due to missing membership and I do not want to 
> re-post on gdb.. 

Whether or not you should contact the gdb mailing list is simple: if
you get the issue without Xenomai, then it is a gdb issue, if you
get the issue only with Xenomai, then it is a Xenomai issue, and
there is no reason to bother the gdb mailing list.

> 
> I am trying to debug (applications, not kernel) using hardware watchpoints 
> (memory breakpoints) with 
> 
> GDB: 7.10.1 and 7.11 (local mode)
> Processor: PowerPC e300c3 (MPC8309E)
> Xenomai: 2.6.1

Xenomai 2.6.1 is pretty old, a lot of things have been fixed since
then, including issues with gdb. So, do you get the same issue with
Xenomai 2.6.4?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Resources for Installation of Xenomai 2.6 and 3 on iMx6 Sabre Lite Dev Board

2016-06-13 Thread Gilles Chanteperdrix
On Mon, Jun 13, 2016 at 08:52:11AM +, Sripath Roy Koganti wrote:
> Yes
> 
> I tried running directly from BBB. but got an error stating timer
> files not found. Now im trying an alternate way doing from my PC
> and cross compiling.

Do not waste your time doing that. Please report the problem you get
when compiling directly on the board, but clearly more than just
"got an error stating timer files not found": at which point does it
happen, during compilation, at run time? Send the full compilation
logs if this is a compile-time error, or the full kernel logs if
this is a run-time error.

Changing where the compiler runs will probably not change anything.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Resources for Installation of Xenomai 2.6 and 3 on iMx6 Sabre Lite Dev Board

2016-06-13 Thread Gilles Chanteperdrix
On Mon, Jun 13, 2016 at 08:36:39AM +, Sripath Roy Koganti wrote:
> While installing the package in Jessie debian resulted no package found.
> 
> arm-none-linux-gnueabi-gcc

So, you are trying to cross compile?
See:
https://xenomai.org/installing-xenomai-3-x/#Cross-compilation

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Resources for Installation of Xenomai 2.6 and 3 on iMx6 Sabre Lite Dev Board

2016-06-10 Thread Gilles Chanteperdrix
On Fri, Jun 10, 2016 at 07:32:30AM +, Sripath Roy Koganti wrote:
> Should i do this process in a Linux Environment under the target
> board. Basically i am a newbie and its kindof looking complex. It
> would really help if the process to be done can be explained.

The process is explained on the page I sent you. Whether you
cross-compile or not is your choice, the procedure is almost the
same in both cases.

> 
> Let me explain what i am trying to do. I am deploying debian on
> Freescale with imX6 SOC Then im looking for packages in debian (if
> they are precompiled) or else i will follow the proces you
> mentioned from knowledge base. Can you suggest which would be
> better.

Your choice again. We have a page explaining how to build Debian
packages.
https://xenomai.org/2014/06/building-debian-packages/

My opinion on that is that building debian packages is more complex
than simply compiling and installing following the generic
instructions, so there are many more things that can go wrong. So, I
would only do it if I need the benefits of using debian packages:
being able to build the packages once, and deploy them on several
machines. But this is simply my opinion.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Resources for Installation of Xenomai 2.6 and 3 on iMx6 Sabre Lite Dev Board

2016-06-10 Thread Gilles Chanteperdrix
On Fri, Jun 10, 2016 at 06:33:32AM +, Sripath Roy Koganti wrote:
> Hi
> 
> 
> I need help with porting Xenomai on Sabre Lite Board with iMx6
> Quad. Can you specify the resources and process to be done.

There is nothing board-specific in Xenomai, only SOC-specific
support. And since Xenomai has already been ported to i.MX6, you do
not need to port anything, you just need to install Xenomai. We
provide installation instructions here (not i.MX6-specific, but the
installation instructions are the same for any SOC).

For 3.x:
https://xenomai.org/installing-xenomai-3-x/

For 2.x:
https://xenomai.org/installing-xenomai-2-x/
(pay attention to the caution at the beginning; in a nutshell, if
you are starting a new development, there is no reason to start with
2.x).

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Xenomai/Cobalt on i7-3770S CPU

2016-06-09 Thread Gilles Chanteperdrix
On Thu, Jun 09, 2016 at 05:32:51PM +, Heinick, J Michael wrote:
> 
> -Original Message-
> From: Gilles Chanteperdrix [mailto:gilles.chanteperd...@xenomai.org] 
> Sent: Wednesday, June 08, 2016 3:39 PM
> To: Heinick, J Michael
> Cc: xenomai@xenomai.org
> Subject: Re: [Xenomai] Xenomai/Cobalt on i7-3770S CPU
> 
> On Wed, Jun 08, 2016 at 09:42:26AM +, Heinick, J Michael wrote:
> > 
> > We currently have an RTDM driver that is running well on Xenomai/Cobalt 
> > 3.0.1 on 2 Dell computers with Core2 processors, but will hang 
> > (unresponsive mouse and keyboard, no discernable activity) an entire 
> > SuperLogics i7 computer with a Core i7-3770S processor (4 physical cores, 4 
> > logical cores).  The hang occurs in the ioctl function at an rtdm_sem_down 
> > call that waits for the interrupt handler to signal the handling of an 
> > interrupt.  We suspect that we have a problem with our kernel 
> > build/installation configuration options.  We have attempted to configure 
> > the Core i7-3770S system so that Xenomai/Cobalt only uses 2 cores like the 
> > other 2 working Core2 computers, but the system still hangs (more detail on 
> > the results of our attempt is included beow).  Eventually, we would like to 
> > configure Xenomai/Cobalt to run on 4 cores of the i7 computer if possible.
> > 
> > Any suggestions to help us make/install/configure Xenomai/Cobalt to run on 
> > the SuperLogics computer with the i7-3770S processor so that the 
> > rtdm_sem_down call in the RTDM driver does not hang the entire system would 
> > be appreciated.
> 
> This sounds like an irq conflict: a device handled by an RTDM driver can not 
> use the same irq line as a device handled by a plain Linux driver without 
> modifying the plain Linux driver. See FAQ for solutions to that problem.
> 
> -- 
>   Gilles.
> https://click-hack.org
> 
> 
> Thanks for the reply, Gilles.
> 
> Yes, there was a conflict on irq 16. We disabled the conflicting
> component so that now our driver is the only one on irq 16, but
> the i7-3770S system is still hanging. We know the interrupt
> handler is receiving the interrupts and handling them without
> hanging the i7 system. With the interrupts running at 20Hz, the
> handler stores counts of the second and sub-second interrupts that
> we can retrieve with a different ioctl call that does not wait for
> an interrupt. The hang only occurs in the ioctl call at the wait
> with the rtdm_sem_down call. The contents of selected files from
> the /proc directory are included below.


Could you post the simplest driver which generates this issue?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] My Device of xenomai 2.6.3 patch Issues ~!

2016-06-08 Thread Gilles Chanteperdrix
On Wed, Jun 08, 2016 at 08:40:23PM +0900, ���ɵ� wrote:
>  
> 
> Hi

Hi,

> 
>  
> 
> When the xenomai 2.6.3 patch on my device, there are some
> problems.

Xenomai is not a patch.

> The first problem.
> 
> When in use the application program "clock_gettime" function quickly
> (about 5ms), the spinlock occurred in the kernel.
> 
> See kernel message file ("dmesg" command), file name =
> dmesg_spinlock.txt

I do not understand what the problem is, you do not really give us
enough detail, in particular, you say "when I use the application
program clock_gettime", but for me clock_gettime is not a program.
Anyway, do you get the same issues if you use:
- Xenomai 2.6.4
- a mainline Linux kernel

> A Second problem.
> 
> Using a "rt_mutex_create" function in the parent program when generating a
> "MyMutex", performs "rt_mutex_acquire", "rt_mutex_release" quickly, and the
> problem occurs once in 5-7 days.
> 
> Kernel messages of the problem. ("dmesg" command)

Once again: I have not understood your problem. The message you get
on the console is normal if an application exits without having
destroyed a mutex. And the message can be suppressed with the kernel
configuration.

Regards.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Xenomai/Cobalt on i7-3770S CPU

2016-06-08 Thread Gilles Chanteperdrix
On Wed, Jun 08, 2016 at 09:42:26AM +, Heinick, J Michael wrote:
> 
> We currently have an RTDM driver that is running well on Xenomai/Cobalt 3.0.1 
> on 2 Dell computers with Core2 processors, but will hang (unresponsive mouse 
> and keyboard, no discernable activity) an entire SuperLogics i7 computer with 
> a Core i7-3770S processor (4 physical cores, 4 logical cores).  The hang 
> occurs in the ioctl function at an rtdm_sem_down call that waits for the 
> interrupt handler to signal the handling of an interrupt.  We suspect that we 
> have a problem with our kernel build/installation configuration options.  We 
> have attempted to configure the Core i7-3770S system so that Xenomai/Cobalt 
> only uses 2 cores like the other 2 working Core2 computers, but the system 
> still hangs (more detail on the results of our attempt is included beow).  
> Eventually, we would like to configure Xenomai/Cobalt to run on 4 cores of 
> the i7 computer if possible.
> 
> Any suggestions to help us make/install/configure Xenomai/Cobalt to run on 
> the SuperLogics computer with the i7-3770S processor so that the 
> rtdm_sem_down call in the RTDM driver does not hang the entire system would 
> be appreciated.

This sounds like an irq conflict: a device handled by an RTDM driver
can not use the same irq line as a device handled by a plain Linux
driver without modifying the plain Linux driver. See FAQ for
solutions to that problem.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Installing Xenomai step 1 - patching

2016-06-07 Thread Gilles Chanteperdrix
On Wed, Jun 08, 2016 at 12:36:23AM +0200, ons jallouli wrote:
> Hi all,
> 
> I am a beginner in the Xenomai World  and I want to install Xenomai 3
> 
> I'm trying to patch the Linux kernel as described here:
> https://xenomai.org/installing-xenomai-3-x/
> 
> I get this error :
> prepare-kernel.sh: Unable to patch kernel 3.16.7-ckt27 with
> ipipe-core-3.16.7-x86-3.patch.
> 
> Some one can help me, please ?

This question has been answered many times on the mailing list, you
could have found the answer in the archives.

Anyway, the I-pipe patches are released for the mainline Linux
kernel. The 3.16.7-ckt27 kernel is the Linux kernel fork maintained
by the Canonical kernel team; it has differences with the mainline
kernel, and thus the patch does not apply to it. As has also been
repeated many times on this mailing list, using the mainline kernel
rather than forks is advised.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-07 Thread Gilles Chanteperdrix
On Tue, Jun 07, 2016 at 04:13:07PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-06 um 17:35 schrieb Gilles Chanteperdrix:
> > On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> >>> On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>>>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>>>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >>>>>>>> Dear all,
> >>>>>>>>
> >>>>>>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 3.0.43" 
> >>>>>>>> to
> >>>>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The 
> >>>>>>>> system
> >>>>>>>> is now up and running and works stable. Unfortunately we see a
> >>>>>>>> difference in the performance. Our old combination (XENOMAI 2.6.2.1 +
> >>>>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>>>
> >>>>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in our 
> >>>>>>>> old
> >>>>>>>> system.  Every call of xnpod_schedule_handler interrupts our main
> >>>>>>>> XENOMAI task with priority = 95.
> As I wrote above, I get interrupts 1037 handled by rthal_apc_handler()
> and 1038 handled by xnpod_schedule_handler() while my realtime task
> is running on kernel 3.10.53 with Xenomai 2.6.4.
> On kernel 3.0.43 with Xenomai 2.6.4 there are no interrupts, except the
> ones sent by my board using GPIOs; these virtual interrupts are assigned
> to Xenomai and Linux as well, but I didn't see a handler installed.
> I'm pretty sure that these interrupts are slowing down my system, but
> where do they come from ?
> why didn't I see them on Kernel 3.0.43 with Xenomai 2.6.4 ?
> how long do they need to process ?

How do you mean, you do not see them? If you are talking about the
rescheduling IPI, it used not to be bound to a virq (so it would
have a different irq number on Cortex A9, something between 0 and 31
that would not show in the usual /proc files); I wonder whether 3.0
is before or after that change. Do you not see them in /proc, or do
you see them but their count does not increase?

As for where they come from, this is not a mystery: the reschedule
IPI is triggered when code on one cpu changes the scheduler state
(wakes up a thread, for instance) on another cpu. If you want to
avoid it, do not do that. That means: do not share mutexes between
threads running on different cpus, pay attention that timers run on
the same cpu as the thread they signal, etc.

The APC virq is used to multiplex several services, which you can
find by grepping the sources for rthal_apc_alloc:

./ksrc/skins/posix/apc.c:   pse51_lostage_apc = rthal_apc_alloc("pse51_lostage_handler",
./ksrc/skins/rtdm/device.c: rtdm_apc = rthal_apc_alloc("deferred RTDM close", rtdm_apc_handler,
./ksrc/nucleus/registry.c:  rthal_apc_alloc("registry_export", &registry_proc_schedule, NULL);
./ksrc/nucleus/pipe.c:  rthal_apc_alloc("pipe_wakeup", &xnpipe_wakeup_proc, NULL);
./ksrc/nucleus/shadow.c:    rthal_apc_alloc("lostage_handler", &lostage_handler, NULL);
./ksrc/nucleus/select.c:    xnselect_apc = rthal_apc_alloc("xnselectors_destroy",

It would be interesting to know which of these services is triggered
a lot. One possibility I see would be root thread priority
inheritance, in which case it would be caused by mode switches. This
raises the questions: does your application have threads migrating
between primary and secondary mode, do you see the count of mode
switches increase with the kernel change, and do you have root
thread priority inheritance enabled?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-06 Thread Gilles Chanteperdrix
On Mon, Jun 06, 2016 at 09:03:40AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-02 um 10:23 schrieb Gilles Chanteperdrix:
> > On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> >>> On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>>> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>>>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >>>>>> Dear all,
> >>>>>>
> >>>>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 3.0.43" to
> >>>>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The system
> >>>>>> is now up and running and works stable. Unfortunately we see a
> >>>>>> difference in the performance. Our old combination (XENOMAI 2.6.2.1 +
> >>>>>> Linux 3.0.43) was slightly faster.
> >>>>>>
> >>>>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in our old
> >>>>>> system.  Every call of xnpod_schedule_handler interrupts our main
> >>>>>> XENOMAI task with priority = 95.
> >>>>>>
> >>>>>> I have compared the configuration of both XENOMAI versions but did not
> >>>>>> found any difference. I checked the source code (new commits) but did
> >>>>>> also not find a solution.
> >>>>> Have you tried Xenomai 2.6.4 with Linux 3.0.43 ? In order to see
> >>>>> whether it comes from the kernel update or the Xenomai udpate?
> >>>> I've tried Linux 3.0.43 with Xenomai 2.6.4 an there is no difference to
> >>>> Xenomai 2.6.2.1
> >>>> Looks like there is an other reason than Xenomai.
> >>> Ok, one thing to pay attention to on imx6 is the L2 cache write
> >>> allocate policy. You want to disable L2 write allocate on imx6 to
> >>> get low latencies. I do not know which patches exactly you are
> >>> using, so it is difficult to check, but the kernel normally displays
> >>> the value set in the L2 auxiliary configuration register, you can
> >>> check in the datasheet if it means that L2 write allocate is
> >>> disabled or not. And check if you get the same value with 3.0 and
> >>> 3.10.
> >> Thank you for this hint, I looked around in the kernel config, but cant
> >> find
> >> an option sounds like L2 write allocate.
> >> The only option I found was CACHE_L2X0 and that is activated on both
> >> kernels.
> >> Do you have an idea whats the name of this configuration or where in the
> >> kernel sources it should be located, so I can find out whats the name of
> >> the
> >> config flag by searching the sourcecode.
> > I never talked about any kernel configuration option. I am talking
> > checking the value passed to the L2 cache auxiliary configuration
> > register, this is a hardware register. Also, as I said, the value
> > passed to the L2 cache auxiliary register is printed by the kernel
> > during boot.
> >
> >
> Sorry Gilles,
> I found the message in the kernel log, you are right they are different
> Kernel 3.0.43 shows   l2x0: 16 ways, CACHE_ID 0x41c8, AUX_CTRL 
> 0x0285, Cache size: 524288 B
> Kernel 3.10.53 shows l2x0: 16 ways, CACHE_ID 0x41c8, AUX_CTRL 
> 0x32c5, Cache size: 524288 B
> Kernel 3.10.53 additionally sets bits 22 (shared attribute override
> enable), 28 (data prefetch) and 29 (instruction prefetch).
> I used the same settings on kernel 3.0.43 but the performance didn't
> change, so it looks like these configurations didn't slow down my
> system.
> 
> What I have seen while searching the kernel config was that there are a
> few errata that are activated as dependencies in 3.10.53; to be sure
> none of the errata is the source of my performance reduction, I
> activated them on 3.0.43 as well.
> But again no difference to our default configuration.
> 
> To avoid our application is running slower I created a shell-script 
> incrementing a variable
> 10.000 times and measuring the runtime with time
> 
> #!/bin/sh
> var=0
> while [  $var -lt $1 ]; do
>  let var++
> done
> 
>  > time /mnt/drive-C/CpuTime.sh 1
> 
> On this test
> Kernel 3.0.43 Xenomai 2.6.2.1  needs 480 ms
> Kernel 3.10.53  Xenomai 2.6.4  needs 820ms

If you run the

Re: [Xenomai] Migrating a Xenomai 2.6.4 to a 4.6 kernel

2016-06-03 Thread Gilles Chanteperdrix
On Fri, Jun 03, 2016 at 04:20:23PM +0200, Jean-Michel Hautbois wrote:
> I am scared by the compatibility between the existing  2.6 and the
> now-to-be-used 3.x.

As I said, the migration process is documented:
https://xenomai.org/migrating-from-xenomai-2-x-to-3-x/

And it is not all that scary.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Migrating a Xenomai 2.6.4 to a 4.6 kernel

2016-06-03 Thread Gilles Chanteperdrix
On Fri, Jun 03, 2016 at 11:13:39AM +0200, Gilles Chanteperdrix wrote:
> - add to the I-pipe patches the "I-pipe legacy" support, which was
> removed from 4.1

The I-pipe legacy support is still there in the I-pipe patch for
Linux 4.1

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Migrating a Xenomai 2.6.4 to a 4.6 kernel

2016-06-03 Thread Gilles Chanteperdrix
On Fri, Jun 03, 2016 at 09:55:34AM -0400, Lennart Sorensen wrote:
> On Fri, Jun 03, 2016 at 11:13:39AM +0200, Gilles Chanteperdrix wrote:
> > The APIs have not changed much. The cost is not very high. The
> > changes are documented here:
> > https://xenomai.org/migrating-from-xenomai-2-x-to-3-x/
> > 
> > You will have to:
> > - port the I-pipe patch for Linux 4.1 to Linux 4.6
> 
> Would that not be required for xenomai 3.x as well though?
> 
> > - add to the I-pipe patches the "I-pipe legacy" support, which was
> > removed from 4.1
> > - add the missing bits to Xenomai to adapt to the kernel API changes
> > between Linux 3.18 and Linux 4.6, of which there should be a fair
> > amount, given the difference in version numbers.
> > 
> > So, the best strategy is to not migrate xenomai 2.6.4 to a 4.6
> > kernel.
> > 
> > > Would you be interested by the patches once done ?
> > 
> > No. Xenomai 2.6.5 is about to be released, will not support 4.x
> > kernel, and will be the last release in the 2.6 branch. We simply do
> > not have the man power to maintain two branches, and for this
> > reason, want to encourage the users to migrate.
> 
> That does make sense.
> 
> Of course a lot of people are scared of moving xenomai versions.  3 sounds
> quite different than 2.6, and I certainly remember having a fair bit of
> pain going from 2.5 to 2.6 years ago (some of it was probably self inflicted).

As far as I remember there were almost no API changes between 2.5
and 2.6. The main reason to switch from 2.5 to 2.6 was to fix an
issue that could not reasonably be fixed without breaking the ABI.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Migrating a Xenomai 2.6.4 to a 4.6 kernel

2016-06-03 Thread Gilles Chanteperdrix
On Fri, Jun 03, 2016 at 10:06:10AM +0200, Jean-Michel Hautbois wrote:
> Hi !
> 
> I have a 3.18 kernel which is currently built with xenomai 2.6.4.
> I need to keep xenomai 2.6.4, as a lot of applications are using the API,
> and I don't want to migrate, the cost would be very high ATM.

The APIs have not changed much. The cost is not very high. The
changes are documented here:
https://xenomai.org/migrating-from-xenomai-2-x-to-3-x/

> 
> What is the best strategy to migrate xenomai to a 4.6 kernel ?

You will have to:
- port the I-pipe patch for Linux 4.1 to Linux 4.6
- add to the I-pipe patches the "I-pipe legacy" support, which was
removed from 4.1
- add the missing bits to Xenomai to adapt to the kernel API changes
between Linux 3.18 and Linux 4.6, of which there should be a fair
amount, given the difference in version numbers.

So, the best strategy is to not migrate xenomai 2.6.4 to a 4.6
kernel.

> Would you be interested by the patches once done ?

No. Xenomai 2.6.5 is about to be released; it will not support 4.x
kernels, and will be the last release in the 2.6 branch. We simply do
not have the manpower to maintain two branches, and for this
reason want to encourage users to migrate.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-02 Thread Gilles Chanteperdrix
On Thu, Jun 02, 2016 at 10:15:41AM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-06-01 um 16:12 schrieb Gilles Chanteperdrix:
> > On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> >>
> >> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> >>> On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >>>> Dear all,
> >>>>
> >>>> we have moved our application from "XENOMAI 2.6.2.1 + Linux 3.0.43" to
> >>>> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The system
> >>>> is now up and running and works stable. Unfortunately we see a
> >>>> difference in the performance. Our old combination (XENOMAI 2.6.2.1 +
> >>>> Linux 3.0.43) was slightly faster.
> >>>>
> >>>> At the moment it looks like that XENOMAI 2.6.4 calls
> >>>> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in our old
> >>>> system.  Every call of xnpod_schedule_handler interrupts our main
> >>>> XENOMAI task with priority = 95.
> >>>>
> >>>> I have compared the configuration of both XENOMAI versions but did not
> >>>> found any difference. I checked the source code (new commits) but did
> >>>> also not find a solution.
> >>> Have you tried Xenomai 2.6.4 with Linux 3.0.43? In order to see
> >>> whether it comes from the kernel update or the Xenomai update?
> >> I've tried Linux 3.0.43 with Xenomai 2.6.4 an there is no difference to
> >> Xenomai 2.6.2.1
> >> Looks like there is an other reason than Xenomai.
> > Ok, one thing to pay attention to on imx6 is the L2 cache write
> > allocate policy. You want to disable L2 write allocate on imx6 to
> > get low latencies. I do not know which patches exactly you are
> > using, so it is difficult to check, but the kernel normally displays
> > the value set in the L2 auxiliary configuration register, you can
> > check in the datasheet if it means that L2 write allocate is
> > disabled or not. And check if you get the same value with 3.0 and
> > 3.10.
> Thank you for this hint, I looked around in the kernel config, but cant 
> find
> an option sounds like L2 write allocate.
> The only option I found was CACHE_L2X0 and that is activated on both 
> kernels.
> Do you have an idea whats the name of this configuration or where in the
> kernel sources it should be located, so I can find out whats the name of 
> the
> config flag by searching the sourcecode.

I never talked about any kernel configuration option. I am talking
about checking the value passed to the L2 cache auxiliary
configuration register, which is a hardware register. Also, as I
said, the value passed to the L2 cache auxiliary register is
printed by the kernel during boot.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] SIGDEBUG_RESCNT_IMBALANCE with recursive mutex

2016-06-01 Thread Gilles Chanteperdrix
On Wed, Jun 01, 2016 at 11:07:35AM -0400, Jeffrey Melville wrote:
> Hi,
> 
> Setup: Xenomai 2.6.4 (actually 2.6 git rev 4f349cf0553, with a99426
> cherry-picked) with kernel 3.14.17 on a Zynq and the POSIX skin using
> the ipipe patches included with the specified git rev)
> 
> We've noticed that SIGDEBUG_RESCNT_IMBALANCE is generated when a
> (Xenomai) mutex is taken recursively by an NRT thread. The snippet at
> the bottom of this message will reproduce the issue. I omitted most of
> the error-checking for brevity.
> 
> A couple previous threads have discussed slightly similar problems, but
> I never saw final resolutions:
> http://www.xenomai.org/pipermail/xenomai/2012-January/025278.html
> http://www.xenomai.org/pipermail/xenomai/2014-October/031919.html
> 
> As far as "why are we doing this?", the problem area occurs in a test
> suite where some tests have to run as NRT threads because they don't
> have to run real-time and will get killed by the watchdog if they run as
> RT threads. Removing the Xenomai wrappers would also be complicated for
> reasons that are outside of the scope of this email.

Yeah well, this is an issue that has been known and fixed for so
long that I forgot we knew it:

https://git.xenomai.org/xenomai-3.git/commit/?id=79f0dd1cdc408b22afe301fa03805349a4a9f151

I will try and backport this change to 2.6 in 2.6.5. The change
should be easy to backport, since it was made prior to most of the
cleanup of mutex and condvars. In 3.x, the kernel does not handle
the recursive mutex recursion count at all; this makes things much
simpler, but the code very different.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] SIGDEBUG_RESCNT_IMBALANCE with recursive mutex

2016-06-01 Thread Gilles Chanteperdrix
On Wed, Jun 01, 2016 at 11:07:35AM -0400, Jeffrey Melville wrote:
> Hi,
> 
> Setup: Xenomai 2.6.4 (actually 2.6 git rev 4f349cf0553, with a99426
> cherry-picked) with kernel 3.14.17 on a Zynq and the POSIX skin using
> the ipipe patches included with the specified git rev)
> 
> We've noticed that SIGDEBUG_RESCNT_IMBALANCE is generated when a
> (Xenomai) mutex is taken recursively by an NRT thread. The snippet at
> the bottom of this message will reproduce the issue. I omitted most of
> the error-checking for brevity.
> 
> A couple previous threads have discussed slightly similar problems, but
> I never saw final resolutions:
> http://www.xenomai.org/pipermail/xenomai/2012-January/025278.html

This issue is unrelated: setting a thread priority while holding a
mutex is clearly something we consider you should not be doing.

> http://www.xenomai.org/pipermail/xenomai/2014-October/031919.html

In this case, the mail asked the user to provide a test case, and
the user never provided one, it seems.

> 
> As far as "why are we doing this?", the problem area occurs in a test
> suite where some tests have to run as NRT threads because they don't
> have to run real-time and will get killed by the watchdog if they run as
> RT threads. Removing the Xenomai wrappers would also be complicated for
> reasons that are outside of the scope of this email.

Now we have a testcase, it seems. However, a Xenomai mutex used by an
NRT thread only makes sense if the mutex is shared with an RT thread
(otherwise you could use a plain Linux mutex). In that context, it
makes little sense not to enable priority inheritance on the mutex.
So, the question is: do you have the same problem if you enable
priority inheritance?

Also, could you check the function return values, to make sure that
you did not miss any error?

Regards.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-06-01 Thread Gilles Chanteperdrix
On Wed, Jun 01, 2016 at 03:52:06PM +0200, Wolfgang Netbal wrote:
> 
> 
> Am 2016-05-31 um 16:16 schrieb Gilles Chanteperdrix:
> > On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> >> Dear all,
> >>
> >> we have moved our application from "XENOMAI 2.6.2.1 + Linux 3.0.43" to
> >> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The system
> >> is now up and running and works stable. Unfortunately we see a
> >> difference in the performance. Our old combination (XENOMAI 2.6.2.1 +
> >> Linux 3.0.43) was slightly faster.
> >>
> >> At the moment it looks like that XENOMAI 2.6.4 calls
> >> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in our old
> >> system.  Every call of xnpod_schedule_handler interrupts our main
> >> XENOMAI task with priority = 95.
> >>
> >> I have compared the configuration of both XENOMAI versions but did not
> >> found any difference. I checked the source code (new commits) but did
> >> also not find a solution.
> > Have you tried Xenomai 2.6.4 with Linux 3.0.43 ? In order to see
> > whether it comes from the kernel update or the Xenomai update?
> I've tried Linux 3.0.43 with Xenomai 2.6.4 an there is no difference to 
> Xenomai 2.6.2.1
> Looks like there is an other reason than Xenomai.

Ok, one thing to pay attention to on imx6 is the L2 cache write
allocate policy. You want to disable L2 write allocate on imx6 to
get low latencies. I do not know which patches exactly you are
using, so it is difficult to check, but the kernel normally displays
the value set in the L2 auxiliary configuration register, you can
check in the datasheet if it means that L2 write allocate is
disabled or not. And check if you get the same value with 3.0 and
3.10.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] rtdm_timer_start causes switching to secondary mode and Bug

2016-06-01 Thread Gilles Chanteperdrix
On Wed, Jun 01, 2016 at 01:03:18PM +, Ali Umair wrote:
> Hello
> 
> Thanks Gilles and sorry for first incomplete email. The code files and kernel 
> log files are attached with this email

Ok, I see you have not read "Linux Device Drivers" as I advised. I
am not going to tell you everything that is wrong in that driver,
because there are too many things really: beginner's programming
errors with using memset, off-by-one errors in indexes, ignorance
of best practices in error handling (in kernel modules or
otherwise), disregard for concurrency issues, disregard for support
of multiple simultaneous file descriptors.

I do see a lot of things wrong in your code, but I do not see which
one is causing rtdm_timer_start to fail. I am sure, however, that
there is nothing wrong with Xenomai's rtdm_timer_start and that the
problem is in your code. You can see that rtdm_timer_start is
working by running latency -t 1, for instance. If latency -t 1
causes an issue on your platform, then we will have a look.

Regards.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] rtdm_timer_start causes switching to secondary mode and Bug

2016-06-01 Thread Gilles Chanteperdrix
On Wed, Jun 01, 2016 at 12:15:57PM +, Ali Umair wrote:
> Hello
> 
> I am using 3.14.39-xenomai-3.0.1. I am trying following

Please use Xenomai 3.0.2

> 
> Kernel module:
> - Make a buffer i.e. char  buffer[4000][126] and initializing the buffer 
> using memset in the initial thread
> - Filled the buffer in the read thread of module 
> - After filling the buffer start the timer 
> 
> As I start the timer after filling the buffer the kernel displays the message 
> that switching to secondary  mode and A bug. The whole message is end of the 
> email

The data you sent are unreadable. If you want help, and do not want
to give all the details asked by the
https://xenomai.org//asking-for-help/ page, then at least send
properly formatted logs.

Anyway, very likely the timer address you pass to
rtdm_timer_start is not initialized, or the address you pass is
invalid, or becomes invalid because you used an object on the stack.

In such a case, posting the code of the module would help: the
description you sent describes what you think you are doing, the
code shows what you are really doing, and usually, if there is a
bug, it is because there is a difference between the two.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] Performance impact after switching from 2.6.2.1 to 2.6.4

2016-05-31 Thread Gilles Chanteperdrix
On Tue, May 31, 2016 at 04:09:07PM +0200, Wolfgang Netbal wrote:
> Dear all,
> 
> we have moved our application from "XENOMAI 2.6.2.1 + Linux 3.0.43" to 
> "XENOMAI 2.6.4. + Linux 3.10.53". Our target is an i.MX6DL. The system 
> is now up and running and works stable. Unfortunately we see a 
> difference in the performance. Our old combination (XENOMAI 2.6.2.1 + 
> Linux 3.0.43) was slightly faster.
> 
> At the moment it looks like that XENOMAI 2.6.4 calls 
> xnpod_schedule_handler much more often then XENOMAI 2.6.2.1 in our old 
> system.  Every call of xnpod_schedule_handler interrupts our main 
> XENOMAI task with priority = 95.
> 
> I have compared the configuration of both XENOMAI versions but did not 
> found any difference. I checked the source code (new commits) but did 
> also not find a solution.

Have you tried Xenomai 2.6.4 with Linux 3.0.43? In order to see
whether it comes from the kernel update or the Xenomai update?

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : lib/cobalt: Provide RT-capable usleep

2016-05-31 Thread Gilles Chanteperdrix
On Tue, May 31, 2016 at 01:32:16PM +0200, Gilles Chanteperdrix wrote:
> On Tue, May 31, 2016 at 01:06:23PM +0200, Jan Kiszka wrote:
> > On 2016-05-30 17:20, Gilles Chanteperdrix wrote:
> > > On Mon, May 30, 2016 at 04:39:42PM +0200, git repository hosting wrote:
> > >> Module: xenomai-jki
> > >> Branch: for-forge
> > >> Commit: ec9a8c81944d4a3e3f50af314fa58ade68dd28c2
> > >> URL:
> > >> http://git.xenomai.org/?p=xenomai-jki.git;a=commit;h=ec9a8c81944d4a3e3f50af314fa58ade68dd28c2
> > >>
> > >> Author: Jan Kiszka 
> > >> Date:   Mon May 30 14:58:07 2016 +0200
> > >>
> > >> lib/cobalt: Provide RT-capable usleep
> > >>
> > >> User may expect this (probably last) sleeping service to be available
> > >> under Cobalt just like sleep, nanosleep & Co.
> > > 
> > > CONFORMING TO
> > >4.3BSD,  POSIX.1-2001.   POSIX.1-2001  declares this function 
> > > obsolete;
> > >use nanosleep(2) instead.  POSIX.1-2008 removes  the  
> > > specification  of
> > >usleep().
> > 
> > Do you expect people - and specifically Linux - to follow this soon?
> > 
> > The idea here is to reduce the level of surprise, and as long as using
> > usleep in your app doesn't cause an "undefined symbol" error, this risk
> > persists.
> 
> The thing is adding services do not come for free: it incurs a
> maintenance cost. From my point of view, one of the imperatives of
> the Xenomai project is to keep the code base small (and simple) so
> that it remains possible to maintain it with the little work force
> we have. From this point of view, adding a wrapper for a service
> that has been deprecated for 15 years, and removed from the POSIX
> specification for 8 years does not look like a smart move. And every
> time we want to add a service, we should wonder if it is worth the
> maintenance burden it incurs, this is the reason why I questioned
> adding pthread_setschedprio too.
> 
> That being said, I am not opposed to this, and will happily let
> other people decide. I just wanted to make sure that everyone
> realized that usleep has been dead a long time.
> 
> If people use usleep, they will get a switch to secondary mode and
> realize they should not use it, it is not as if it could cause a
> crash or something. We might even document that somewhere.

And replacing usleep with nanosleep in the application code does not
make the application less portable, quite the contrary in fact: it
makes it compatible with POSIX.1-2008. And if the application really
wants to remain non-portable, it can provide a usleep
implementation on its side which calls nanosleep under the hood.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : lib/cobalt: Provide RT-capable usleep

2016-05-31 Thread Gilles Chanteperdrix
On Tue, May 31, 2016 at 01:06:23PM +0200, Jan Kiszka wrote:
> On 2016-05-30 17:20, Gilles Chanteperdrix wrote:
> > On Mon, May 30, 2016 at 04:39:42PM +0200, git repository hosting wrote:
> >> Module: xenomai-jki
> >> Branch: for-forge
> >> Commit: ec9a8c81944d4a3e3f50af314fa58ade68dd28c2
> >> URL:
> >> http://git.xenomai.org/?p=xenomai-jki.git;a=commit;h=ec9a8c81944d4a3e3f50af314fa58ade68dd28c2
> >>
> >> Author: Jan Kiszka 
> >> Date:   Mon May 30 14:58:07 2016 +0200
> >>
> >> lib/cobalt: Provide RT-capable usleep
> >>
> >> User may expect this (probably last) sleeping service to be available
> >> under Cobalt just like sleep, nanosleep & Co.
> > 
> > CONFORMING TO
> >4.3BSD,  POSIX.1-2001.   POSIX.1-2001  declares this function 
> > obsolete;
> >use nanosleep(2) instead.  POSIX.1-2008 removes  the  specification  
> > of
> >usleep().
> 
> Do you expect people - and specifically Linux - to follow this soon?
> 
> The idea here is to reduce the level of surprise, and as long as using
> usleep in your app doesn't cause an "undefined symbol" error, this risk
> persists.

The thing is, adding services does not come for free: it incurs a
maintenance cost. From my point of view, one of the imperatives of
the Xenomai project is to keep the code base small (and simple), so
that it remains possible to maintain it with the little workforce
we have. From this point of view, adding a wrapper for a service
that has been deprecated for 15 years, and removed from the POSIX
specification for 8 years, does not look like a smart move. And every
time we want to add a service, we should wonder if it is worth the
maintenance burden it incurs; this is the reason why I questioned
adding pthread_setschedprio too.

That being said, I am not opposed to this, and will happily let
other people decide. I just wanted to make sure that everyone
realized that usleep has been dead a long time.

If people use usleep, they will get a switch to secondary mode and
realize they should not use it; it is not as if it could cause a
crash or something. We might even document that somewhere.

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] [Xenomai-git] Jan Kiszka : lib/cobalt: Provide RT-capable usleep

2016-05-30 Thread Gilles Chanteperdrix
On Mon, May 30, 2016 at 04:39:42PM +0200, git repository hosting wrote:
> Module: xenomai-jki
> Branch: for-forge
> Commit: ec9a8c81944d4a3e3f50af314fa58ade68dd28c2
> URL:
> http://git.xenomai.org/?p=xenomai-jki.git;a=commit;h=ec9a8c81944d4a3e3f50af314fa58ade68dd28c2
> 
> Author: Jan Kiszka 
> Date:   Mon May 30 14:58:07 2016 +0200
> 
> lib/cobalt: Provide RT-capable usleep
> 
> User may expect this (probably last) sleeping service to be available
> under Cobalt just like sleep, nanosleep & Co.

CONFORMING TO
   4.3BSD,  POSIX.1-2001.   POSIX.1-2001  declares this function obsolete;
   use nanosleep(2) instead.  POSIX.1-2008 removes  the  specification  of
   usleep().

-- 
Gilles.
https://click-hack.org



Re: [Xenomai] "watchdog triggered" error & irq interrupt

2016-05-27 Thread Gilles Chanteperdrix
On Thu, May 26, 2016 at 07:52:20PM +0300, Ran Shalit wrote:
> On Thu, May 26, 2016 at 7:28 PM, Gilles Chanteperdrix
>  wrote:
> > On Thu, May 26, 2016 at 06:57:49PM +0300, Ran Shalit wrote:
> >> Hello,
> >>
> >> On testing interrupt within kernel it seems to work ok.
> >> I then moved to userspace rt thread waiting on events coming from the
> >> rtfm kernel driver (witch signals with rtd_event_signal)
> >>
> >> But then I get the following errors:
> >>
> >> dma0chan1-copy0: #15195: got completion callback, but status is 'in 
> >> progress'
> >> dma0chan3-copy0: #14335: test timed out
> >> dma0chan3-copy0: #14336: got completion callback, but status is 'in 
> >> progress'
> >> Xenomai: watchdog triggered -- signaling runaway thread 'rtdm'
> >> [sched_delayed] sched: RT throttling activated
> >> Xenomai: RTDM: closing file descriptor 0.
> >> CPU time limit exceeded
> >>
> >> I understand that it mean that cpu is probably in busy wait.
> >> The interrupt is done on raising edge, and I configure it as following:
> >> rtdm driver:
> >> 
> >> ret = rtdm_irq_request(&ctx->irq_handle,
> >>  91, PFI_IRQHandler, RTDM_IRQTYPE_EDGE,
> >>  DRVNAM, ctx);
> >>
> >> userspace real-time thread
> >> ===
> >>
> >> while (1) {
> >> rt_printf("11\n");
> >> if (ioctl(file_desc, RTTST_RTIOC_GPIOIRQ_WAIT_IRQ))
> >> {
> >>rt_printf("failed!");

You should break from the loop here, otherwise if the ioctl
constantly fails, you will have an endless loop (and the message
will never be printed).

> >>  };
> >>  rt_printf("22\n");
> >
> > If what follows is what is necessary to acknowledge the irq at the
> > device level, it has to be done in the interrupt handler. Otherwise
> > when the interrupt handler has finished executing, interrupts are
> > enabled, and the interrupt triggers again, leaving no chance to the
> > interrupt handler to run. Alternatively, you may want to disable the
> > interrupt line in the interrupt handler and reenable it after the
> > thread has run. Note however that it increases the time necessary to
> > handle the interrupt and may result in lost interrupts (if they
> > happen while the line is masked).
> 
> That followed code is not for acknoweledging the irq but just for
> reading timer ticks.

Well, Xenomai provides functions for that which will more clearly
indicate your intent to those reading your code.

> The irq handler in rtdm driver handles the raising edge:
> 
> static int PFI_IRQHandler(rtdm_irq_t *irq_handle)
> {
> struct rtdm_test_context *ctx;
> 
> ctx = rtdm_irq_get_arg(irq_handle, struct rtdm_test_context);
>   rtdm_event_signal(&ctx->irq_event);
>return RTDM_IRQ_HANDLED;
> }
> 
> I suppose that cpu (zynq) should
> disable the interrupt becuase it is defined as raising edge in device tree:
> 
> fpga_device_tree@8000 {
> compatible = "xillybus,xillybus_lite_of-1.00.a";
> reg = < 0x8000 0x40 >;
> interrupts = < 0 59 1 >;
> interrupt-parent = <&ps7_scugic_0>;
> } ;
> 
> How can I be sure that the issue is that the interrupts are not acknowledged ?

RTDM itself does not take the device tree settings into account. So,
if Linux does not do it (because you do not call request_irq),
nobody took these settings into account. In order to be sure, I
would check the interrupt controller registers to see if it is
correctly configured in rising-edge mode.

> >> https://drive.google.com/folderview?id=0B22GsWueReZTS3RaV3gzRk9aZzQ&usp=sharing
> >
> > This code is crap. If you want people to review your code, please
> > make it short and remove the cruft that you do not use, and remove
> > the dead code too. As I already told you, you should start from the
> > "skeleton" code, not from the RTDM test driver which is a unit test
> > for the RTDM API.
> 
> That's Right...
> I made many many tests moving from one attitude to another, failing in
> all this trials.
> I've also tried to use interrupts in userspace (user_irq.c
> example),

Don't. Handling interrupts in user-space is a bad idea.

> but could not get any interrupt with that method, so I moved back to
> rtdm driver, although doing it all in userspace, if it worked, could
> be better.

I do not really care; if you want me to read your code, send short
and readable code.

-- 
Gilles.
https://click-hack.org



  1   2   3   4   5   6   7   8   9   10   >