Re: initial porting of IPIPE x86_64 patches onto Linux stable 5.4.52

2020-08-25 Thread Steven Seeger via Xenomai
On Tuesday, August 25, 2020 9:25:00 AM EDT Lennart Sorensen wrote:
> A 5020 is an e5500, which is nothing like an e500.  The e500 is PowerPC
> SPE, while the e5500 (and the e300, e500mc and e6500) are "real" PowerPC
> instead with a somewhat different instruction set.

Sorry e500 was a typo in that case.

> GCC 9 even dropped support for the PPC SPE, while PPC is fully supported.
> 

Yeah, there are e500 boards being made for military, aerospace, and LEO (low
earth orbit) uses. There is concern that gcc 9 dropped SPE support, but it's
up to vendors to support maintenance of it.

> 
> The e500's existence offends me since it fragments the powerpc
> architecture.  Yuck. :)

Well I won't say anything good or bad about it, but it's the one I know the
most since it's what I work with all the time. ;)

The worst thing I ever had to deal with was making changes to a 603
instruction set simulator to make it be a 601. The subtle differences between
POWER and PowerPC ISAs were quite a pain to find at the time. This was years
ago.

But yeah, all my work (and tested support with xenomai) has been on an e500v2
(8548).

Steven



Re: initial porting of IPIPE x86_64 patches onto Linux stable 5.4.52

2020-08-24 Thread Steven Seeger via Xenomai
> On Mon, Aug 24, 2020 at 2:09 AM Jan Kiszka  wrote:
> > Greg, with this baseline available, porting ARM/ARM64 to 5.4 should
> > become possible. Thought I would wait until we have basic confidence on
> > x86 into the generic pieces. Steven, same for powerpc.

I'm definitely interested in doing this for PPC e500 32-bit. I now have my
hands on a 64-bit e500 based board (PPC 5020) but it's a loaner COTS board
from an industry partner. I probably won't be able to do Xenomai on that
without a funding source, because of my other obligations. But I am reasonably
sure that moving to 5.4 for e500 32-bit is something my partners would
support. I get emails occasionally for more modern kernels.

If this follows the old noarch pattern for the baseline stuff, then that
should make it ideal for bringing the rest of it up to date.

Steven



Re: Dovetail <-> PREEMPT_RT hybridization

2020-07-23 Thread Steven Seeger via Xenomai
On Thursday, July 23, 2020 12:23:53 PM EDT Philippe Gerum wrote:
> Two misunderstandings it seems:
> 
> - this work is all about evolving Dovetail, not Xenomai. If such work does
> bring the upsides I'm expecting, then I would surely switch EVL to it. In
> parallel, you would still have the opportunity to keep the current Dovetail
> implementation - currently under validation on top of 5.8 - and maintain it
> for Xenomai, once the latter is rebased over the former. You could also
> stick to the I-pipe for Xenomai, so no issue.

That may be my misunderstanding. I thought Dovetail's ultimate goal is at 
least the performance of IPIPE but being simpler to maintain.

> - you seem to be assuming that every code path of the kernel is
> interruptible with the I-pipe/Dovetail, this is not the case, by far. Some
> key portions run with hard irqs off, just because there is no other way to
> 1) share some code paths between the regular kernel and the real-time core,
> 2) the hardware may require it (as hinted in my introductory post). Some of
> those sections may take ages under cache pressure (switch_to comes to
> mind), tenths of micro-seconds, happening mostly randomly from the
> standpoint of the external observer (i.e. you, me). So much for quantifying
> timings by design.

So with switch_to having hard irqs off, the cache pressure should be
deterministic because there's an upper bound on cache lines, the number of
memory pages that need to be accessed, and the code path is pretty
straightforward if memory serves. I would think that this being well bounded
should support my initial point.
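To make that concrete, a back-of-the-envelope bound could be sketched like this. Every figure below is a made-up placeholder for illustration, not a measurement of any real core:

```python
# Hypothetical worst-case bound for a hard-irqs-off switch_to section:
# a fixed instruction cost plus one worst-case refill per cache line touched.
# All constants are illustrative assumptions.

CACHE_LINES_TOUCHED = 256   # context, stacks, page-table walks (assumed)
MISS_PENALTY_NS = 100       # worst-case DRAM refill per line (assumed)
FIXED_PATH_NS = 2_000       # straight-line cost of the switch code (assumed)

worst_case_ns = FIXED_PATH_NS + CACHE_LINES_TOUCHED * MISS_PENALTY_NS
print(worst_case_ns)        # 27600 with these placeholder numbers
```

The point is only that every term has a hardware-imposed ceiling, so the sum does too.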

> 
> We can only figure out a worst-case value by submitting the system to a
> reckless stress workload, for long enough. This game of sharing the very
> same hardware between GPOS and RTOS activities has been based on a
> probabilistic approach so far, which can be summarized as: do your best to
> keep the interrupts enabled as long as possible, ensure fine-grained
> preemption of tasks, make sure to give the result hell to detect issues,
> and hope for the hardware not to rain on the parade.

I agree that in practice, a reckless stress workload is necessary to quantify 
system latency. However, relying on this is a problem when it comes time to 
convince managers who want to spend tons of money for expensive and proven OS 
solutions instead of using the fun and cool stuff we do. ;)

At some point, if possible, someone should try and actually prove the system 
given the bounds.

1) There's only so many pages of memory
2) There's only so much cache and so many cache lines
3) There's only so many sources of interrupts
4) There's only so many sources of CPU stalls, and the number of stalls
should have a limit in hardware.

I can't really think of anything else, but I don't know why there'd be any 
sort of randomness on top of this.

One thing we might not be on the same page about is that typically
(especially on single-processor systems) when I talk about timing-by-design
calculations I am referring to one single high-priority thing. That could be
a timer interrupt to the first instruction running in that timer interrupt
handler, or it could be to the point where the highest priority thread in the
system resumes.

> 
> Back to the initial point: virtualizing the effect of the local_irq helpers
> you refer to is required when their use is front and center in serializing
> kernel activities. However, in a preempt-rt kernel, most interrupt handlers
> are threaded, regular spinlocks are blocking mutexes in disguise, so what
> remains is:

Yes but this depends on a cooperative model. Other drivers can mess you up, as 
described by you below.

> 
> - sections covered by the raw_spin_lock API, which is primarily a problem
> because we would spin with hard irqs off attempting to acquire the lock.
> There is a proven technical solution to this based on an application of
> interrupt pipelining.

Yes.
 
> - few remaining local_irq disabled sections which may run for too long, but
> could be relaxed enough in order for the real-time core to preempt without
> prejudice. This is where pro-actively tracing the kernel under stress comes
> into play.

This is my problem with preempt-rt. I-pipe forces this preemption by changing
what the macros that Linux devs think turn interrupts off actually do. We
never need to worry about this in the RTOS domain.
 
> Working on these three aspects specifically does not bring less guarantees
> than hoping for no assembly code to create long uninterruptible section
> (therefore not covered by local_irq_* helpers), no driver talking to a GPU
> killing latency with CPU stalls, no shared cache architecture causing all
> sort of insane traffic between cache levels, causing memory access speed to
> sink and overall performances to degrade.

I haven't had a chance to work with these sorts of systems, but we are doing
more with ARM processors with multi-level MMU and I'm very curious about how
this will

Re: Dovetail <-> PREEMPT_RT hybridization

2020-07-23 Thread Steven Seeger via Xenomai
On Tuesday, July 21, 2020 1:18:21 PM EDT Philippe Gerum wrote:
> 
> - identifying and quantifying the longest interrupt-free sections in the
> target preempt-rt kernel under meaningful stress load, with the irqoff
> tracer. I wrote down some information [1] about the stress workloads which
> actually make a difference when benchmarking as far as I can tell. At any
> rate, the results we would get there would be crucial in order to figure
> out where to add the out-of-band synchronization points, and likely of some
> interest upstream too. I'm primarily targeting armv7 and armv8, it would be
> great if you could help with x86.

So from my perspective, one of the beauties of Xenomai with traditional IPIPE
is you can analyze the fast interrupt path and see that by design you have an
upper bound on latency. You can even calculate it. It's based on the number
of CPU cycles at IRQ entry multiplied by the total number of IRQs that could
happen at the same time. Depending on your hardware, maybe you know the
priority of handling the interrupt in question.

The point was the system was analyzable by design.
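A toy version of such a calculation, with every figure an invented placeholder rather than a real hardware number, might look like:

```python
# Worst-case delay before our handler's first instruction, assuming every
# other interrupt source can be taken exactly once ahead of ours.
# All constants are illustrative assumptions.

CYCLES_AT_IRQ_ENTRY = 400   # worst-case cycles in the common IRQ entry path (assumed)
OTHER_IRQ_SOURCES = 8       # IRQs that may be serviced before ours (assumed)
CPU_HZ = 800_000_000        # 800 MHz core (assumed)

worst_case_cycles = CYCLES_AT_IRQ_ENTRY * (OTHER_IRQ_SOURCES + 1)
worst_case_us = worst_case_cycles / CPU_HZ * 1e6
print(worst_case_cycles, round(worst_case_us, 2))   # 3600 cycles, 4.5 us
```

Crude, but every input is a fixed property of the design, which is what makes the bound defensible on paper.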

When you start talking about looking for long critical sections and adding
sync points to them, I think you take away the by-design guarantees for
latency. This might make it less suitable for hard realtime systems.

IMHO this is not any better than Preempt-RT. But maybe I am missing something. 
:)

Steven






Re: [PATCH] powerpc: ipipe: Do full exit checks after __ipipe_call_mayday

2019-12-20 Thread Steven Seeger via Xenomai
Jan,

I took a look at entry-common.S and entry_32.S and I think we have the correct 
check. The flow is a little different, but it seems to work as far as I can 
tell.

This was originally Philippe's code. Maybe he can take a quick look. ;)

Steven






Re: [PATCH] powerpc: ipipe: Do full exit checks after __ipipe_call_mayday

2019-12-20 Thread Steven Seeger via Xenomai
On Thursday, December 19, 2019 11:53:36 AM EST Jan Kiszka wrote:
> > Ho, ho, this is an early X-mas gift.

Jan,

I got "gdb ok" when I removed the block I had in the entry_32.S I sent you 
around the recheck/do_user_signal path. I was pretty sure I didn't need the 
intret there.

Do you remember if you did that as well on your board?

This is the first time the test has passed for me without the extra trace
prints in smokey's gdb.c. In fact I got the latest xenomai next branch and 
tested with that.

Some good news: your uclibc fix worked beautifully and I can now build the 
stock xenomai distro with my board without any of my other patches.

I'm going to look at the mayday/DoSyscall issue you suggested now. I'll be 
checking in a cleaned up entry_32.S as well. Will probably be Monday or so but 
expect 20 more emails from me until then as per my usual pattern. (You cc'd 
the list, so now the list must suffer the consequences of your actions.)

Steven






Re: [PATCH] powerpc: ipipe: Do full exit checks after __ipipe_call_mayday

2019-12-20 Thread Steven Seeger via Xenomai
Tried running with your patch, Jan. I made sure the files I had worked on
were the same ones I sent you. I wound up with a crash in do_user_signal
during the smokey gdb test.

# LD_LIBRARY_PATH=/usr/xenomai/lib /usr/xenomai/bin/smokey --run=gdb
[   10.650031] Unable to handle kernel paging request for instruction fetch
[   10.656762] Faulting instruction address: 0xc0010b00
[   10.661736] Oops: Kernel access of bad area, sig: 11 [#1]
[   10.667124] BE PREEMPT Aitech SP0S100
[   10.670780] Modules linked in: unix
[   10.674268] CPU: 0 PID: 982 Comm: smokey Not tainted 4.19.55-aitech #96
[   10.680871] I-pipe domain: Linux
[   10.684089] NIP:  c0010b00 LR: c0010ad8 CTR: c00a8f1c
[   10.689131] REGS: ee8cbe90 TRAP: 0400   Not tainted  (4.19.55-aitech)
[   10.695559] MSR:  9220   CR: 24000284  XER: 2000
[   10.701998] 
[   10.701998] GPR00:  ee8cbf40 ef156bc0  c0007890 ef156bc0 
 5d084753 
[   10.701998] GPR08: c055b068 0001 c0599a60 00021000 22000282 1004870c 
  
[   10.701998] GPR16:       
 1004 
[   10.701998] GPR24:  b7ffc000 1001e480 1001e4b0 10041198 100425d0 
0002 b9d0 
[   10.736886] NIP [c0010b00] do_user_signal+0x20/0x34
[   10.741755] LR [c0010ad8] recheck+0x48/0x50
[   10.745927] Call Trace:
[   10.748362] Instruction dump:
[   10.751322] 7d400124 48089e21 2c83 4186ffa0 614a8000 7d400124 806100b0 
7061 
[   10.759066] 41820010 bda10044 5463003c 906100b0 <38610010> 7d244b78 
4bff7add b9a10044 
[   10.766985] ---[ end trace d102d53b0f8d6db9 ]---
[   10.771591] 








Re: [PATCH] powerpc: ipipe: Do full exit checks after __ipipe_call_mayday

2019-12-19 Thread Steven Seeger via Xenomai
On Thursday, December 19, 2019 12:14:27 PM EST Jan Kiszka wrote:
> 
> Check ipipe-arm/ipipe/master:arch/arm/kernel/entry-common.S for the call
> to ipipe_call_mayday. I suspect that pattern transfers nicely.
> 
> Jan

Will do. Can you point me to a smokey test that will prove the implementation
is fixed? Preferably if you have something that shows it's currently broken.

Steven






Re: [PATCH] powerpc: ipipe: Do full exit checks after __ipipe_call_mayday

2019-12-19 Thread Steven Seeger via Xenomai
On Thursday, December 19, 2019 11:53:36 AM EST Jan Kiszka wrote:
> > Ho, ho, this is an early X-mas gift.

Thanks for finding this. I'm a terrible PPC maintainer so if anyone else wants 
to volunteer to take my place ;)
 
> Looking at DoSyscall in entry_32.S, it seems we lack such a careful
> check there as well. But that's too much assembly for me ATM.

Thanks for the tip. I should be able to look into this tomorrow for you. Can
you point me to the relevant sections in arm that I should compare to?

Any more details (such as desired check/operation) you can provide would be 
beneficial.

Steven






xenomai in space

2019-05-06 Thread Steven Seeger via Xenomai
List,

Early Saturday morning a SpaceX Dragon launched with STP-H6 on board.
I wrote some drivers and was on the software architecture board for the 
software on the CIB, communications interface bus on that system. There are 
several (9 if I recall) science experiments that all communicate to ISS 
(International Space Station) networks through our common communications 
interface. This is the first time that I've sent Xenomai to space.

The cargo was successfully delivered to the ISS about 12 hours ago.

Because of our use of Xenomai, we were able to reduce the size of buffers in 
the FPGA which freed up block ram for other things (CPU cache among them) and 
made the system perform better overall. Seeing this arrive at the ISS after 
over a year of testing is a great achievement for us at Goddard Space Flight 
Center and for the Xenomai project.

Thanks to all. Especially to Philippe who answers many of my emails privately. 
;) I hope I can continue to contribute to the Xenomai project in my own small 
way.

Steven






Re: I-pipe / Dovetail news

2019-05-03 Thread Steven Seeger via Xenomai
On Thursday, May 2, 2019 12:46:37 PM EDT Philippe Gerum wrote:
> At the end of this process, a Dovetail-based Cobalt core should be
> available for the ARM, ARM64 and x86_64 architectures. The port is made
> in a way that enables the Cobalt core to interface with either the
> I-pipe or Dovetail at build time. ppc32 is likely to follow at some
> point if Steven is ok with this. I could probably help with that, I
> still have my lite52xx workhorse around, and a few 40x and 44x SoCs
> generously offered by Denx Engineering (thanks Wolfgang).

Philippe, I'm ok with helping. Honestly I am thinking that it may be
beneficial for us to move to Dovetail, since an ARINC 653 implementation on
top of EVL might be simpler to certify than one on top of Cobalt. Plus, it
should be more efficient since we'd lose a compatibility layer. I'm going to
be sending an email to my co-workers here shortly about the topic.

I figured the first step would be a Dovetail port. Are you suggesting that,
if I have time to help, you'd rather I prioritize helping with a Cobalt port
over an EVL port on ppc32?

I also have some boards from Wolfgang.

Steven






Re: __ipipe_root_sync

2019-04-26 Thread Steven Seeger via Xenomai
On Friday, April 26, 2019 2:11:30 PM EDT Philippe Gerum wrote:
> 
> However, __ipipe_root_sync() is 100% redundant with sync_root_irqs(),
> which we need in the generic pipelined syscall handling callable from C.
> So the situation is a bit silly ATM. Let's rename sync_root_irqs() to
> __ipipe_root_sync() in -noarch, and drop any arch-local equivalent.

I fully agree with this. I can issue a second patch for 110 once it's done in 
noarch, but there's no real reason to hurry on this.

I can confirm that in the 4.14.110 patch I just pushed for powerpc (for all
the thousands of you working with PPC), the use of __ipipe_root_sync and
sync_root_irqs is equivalent with ARM.

Steven






release spam

2019-04-26 Thread Steven Seeger via Xenomai
Everyone,

Sorry for the release spam. I accidentally pushed all tags to the tag server 
instead of just the one I was working on.

Guess I will buy donuts if I ever make it out to one of the meetings. :)

Steven






__ipipe_root_sync

2019-04-26 Thread Steven Seeger via Xenomai
Why was __ipipe_root_sync moved out of kernel/ipipe/core.c? I see it now in 
arch/arm/kernel/ipipe.c. It is the same exact code I had in the PPC branch in 
kernel/ipipe/core.c. I can move it to the arch-specific code, but was wondering 
why.

Steven





Re: [PATCH] cobalt/kernel: Simplify mayday processing

2018-11-05 Thread Steven Seeger via Xenomai
On Monday, November 5, 2018 7:20:33 AM EST Jan Kiszka wrote:
> 
> I would appreciate if you could test ARM64 and PowerPC for me. Until we
> have QEMU test images for both, it's still tricky for me to do that.

I have something I've got to get done before I can do anything else, but once 
that's done I can take a look at this on a PowerPC board.

Steven






Re: [Xenomai] Does RTNET supports TCP?

2018-10-08 Thread Steven Seeger
On Monday, October 8, 2018 2:49:55 PM EDT Sebastian Smolorz wrote:
> 
> Hm ... then why don't you fetch your files in non-realtime context? This
> would be much easier I suppose.

Yes, I agree. Based on what Phong has said, it would make more sense to fetch
the files in a non-realtime context and then use some realtime primitive
(e.g., pipe) to send any data from those files to a necessary realtime
context.
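As a plain-POSIX sketch of that hand-off pattern (in a real Xenomai application this would be an XDDP/message-pipe pairing rather than an ordinary pipe; the thread name and payload here are illustrative only):

```python
import os
import threading

# One thread plays the "non-realtime" side: it fetches data and pushes it
# down a pipe. The reader plays the "realtime" consumer, blocking until the
# payload arrives. Payload and roles are made up for illustration.

r, w = os.pipe()

def fetcher():
    # non-RT side: pretend we just read a file, then hand over the payload
    os.write(w, b"config-data")
    os.close(w)

t = threading.Thread(target=fetcher)
t.start()
payload = os.read(r, 64)   # "RT" side blocks here until data is available
t.join()
os.close(r)
print(payload.decode())
```

The realtime side never touches the filesystem; it only consumes bytes already staged by the non-realtime side.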

Steven




___
Xenomai mailing list
Xenomai@xenomai.org
https://xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] Does RTNET supports TCP?

2018-10-08 Thread Steven Seeger
On Saturday, October 6, 2018 4:07:49 AM EDT Sebastian Smolorz wrote:
> Do you mean kernel/drivers/net/doc/README.drvporting?

Yes, this was it. Thanks.

Steven






Re: [Xenomai] Does RTNET supports TCP?

2018-10-06 Thread Steven Seeger
On Thursday, October 4, 2018 9:02:46 PM EDT Pham, Phong wrote:
> Hi,
> 
> I noticed starting Xenomai 3.0.1 until now (3.0.7), there is net/ in
> .../kernel/drivers/ and in ../kernel/drivers/net/stack/ipv4/Kconfig
> 
> # source "drivers/xenomai/net/stack/ipv4/tcp/Kconfig"

I don't know why this is commented out. It looks like Gilles's initial import 
of this file in commit 106ffba7b55d506143966ff16158ee79b0007336 had it 
commented out.

I know that UDP is typically used with RTNET. TCP is complicated and has a
lot of timers and dynamic state that makes it less than desirable for
hard-realtime systems. Probably Jan should chime in on this. I think he has
the most experience using RTnet at this point.

> 5)  When I created a socket with rt_dev_socket(AF_INET, SOCK_STREAM, 0);
> and attempting to rt_dev_connect(fd, server_ip_addr,
> sizeof(server_ip_addr)), I get errno = 25 (Inappropriate ioctl for device).
>  Does it mean b/c TCP is not supported in RTnet and I attempt to connect
> via TCP (w/ socket SOCK_STREAM)?

Sorry I can't answer the other questions, as I am not working on rtnet
myself. However, I would suspect that TCP not being compiled in is why
SOCK_STREAM is not working. Try SOCK_DGRAM and see if it works. If not, then
are you sure you have rtnet drivers for your network device compiled and
ready to use? Remember, the driver for your network device must be
realtime-safe. I recall seeing a guide for some simple changes to make to a
Linux ethernet driver to use as a starting point for porting it to RTNET,
but I can't seem to find it. Anyone?
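For reference, the datagram call shape looks like this with ordinary BSD sockets over loopback; with RTnet you would use the rt_dev_* wrappers and an RTnet-managed interface instead, so this is only a shape check, not an RTnet test:

```python
import socket

# Loopback SOCK_DGRAM round trip: bind a receiver to an ephemeral port,
# send one datagram to it, read it back.

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))           # port 0: kernel picks a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ping", ("127.0.0.1", port))

data, _ = rx.recvfrom(64)
print(data.decode())
tx.close()
rx.close()
```

If the RTnet equivalent of this fails too, the problem is in the driver setup rather than the TCP/SOCK_STREAM question.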

Steven






Re: [Xenomai] Reg:Xenomaui support for Powerpc p1022

2018-06-12 Thread Steven Seeger
On Tuesday, June 12, 2018 9:34:21 AM EDT Alexander Voytov wrote:
> What about P2020 and xenomai? Did anyone ported xenomai on fsl p2020?
> Alex

You're asking the wrong questions. p1020 and p2020 are e500 processors, and 
the e500 family is supported in xenomai. I have an 8548 board (also e500) 
which I use for testing. Just no SMP on that one.

Steven






Re: [Xenomai] Reg:Xenomaui support for Powerpc p1022

2018-06-12 Thread Steven Seeger
On Monday, June 11, 2018 7:10:10 AM EDT Sureshvs wrote:
> Hi
> 
> 
> I am using freescale P1022 PowerPC based processor board. Is Xenomai
> support available for p1022? On your website, supported architectures for
> powerpc1022 are not mentioned.
> 
> Kindly share the details.

Hi Suresh. I'm currently the powerpc maintainer for xenomai. I will soon have 
my hands on a p1020 board and will be using it for (among other things) some 
xenomai testing. I will be able to help you at that point if you need it. 
Please stay in touch with your experiences.

Steven






Re: [Xenomai] Introducing Dovetail

2018-06-05 Thread Steven Seeger
On Tuesday, June 5, 2018 6:48:23 AM EDT Philippe Gerum wrote:
> This is a heads-up about an ongoing work I hinted at last year [1]
> codenamed "Dovetail", which has just reached a significant milestone. In
>  short, an overhauled implementation of the interrupt pipeline is now
> capable of delivering short response times reliably on the ARM SoCs I'm
> working on (mainly i.MX(6q|7d) so far), running the recent 4.17 kernel
> release.



Hi Philippe.

Looks like there is just one branch here. Assuming this is the future, is the 
goal here to have a generic branch and arch-specific branches again?

Steven





Re: [Xenomai] Unrecoverable FP Unavailable Exception 801

2018-04-29 Thread Steven Seeger
Hello all. I've created what Philippe thinks is a good patch for this issue as 
well as looked at other cases of clobbered bits in the MSR as it relates to 
vsx, spe, and altivec. I have checked in a patch on our interim 4.14 IPIPE 
work which is not yet public. It looks trivial to backport this to 4.9. I am a 
new maintainer even though I've been a thorn in Philippe's side for well over 
a decade. Once he takes a minute from his busy life to grant me access to push 
a change for 4.9 I will do so.

In the meantime, for anyone who would like to see this issue patched on 4.9.51 
against the latest ipipe-core-4.9.51-powerpc-3.patch this should work:

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ed47cc3..866983a 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -172,6 +172,7 @@ void giveup_fpu(struct task_struct *tsk)
msr_check_and_set(MSR_FP);
__giveup_fpu(tsk);
msr_check_and_clear(MSR_FP);
+flags &= MSR_FP;
hard_cond_local_irq_restore(flags);
 }
 EXPORT_SYMBOL(giveup_fpu);
@@ -204,6 +205,11 @@ void flush_fp_to_thread(struct task_struct *tsk)
 */
BUG_ON(tsk != current);
giveup_fpu(tsk);
+
+   /* giveup_fpu clears the MSR_FP bit from MSR
+* unconditionally
+*/
+   flags &= ~MSR_FP;
}
hard_preempt_enable(flags);
}
@@ -219,6 +225,7 @@ void enable_kernel_fp(void)
flags = hard_cond_local_irq_save();
 
cpumsr = msr_check_and_set(MSR_FP);
+   flags |= MSR_FP; /* must exit this routine with MSR_FP bit set */
 
if (current->thread.regs && (current->thread.regs->msr & MSR_FP)) {
check_if_tm_restore_required(current);
@@ -285,6 +292,7 @@ void enable_kernel_altivec(void)
 
flags = hard_cond_local_irq_save();
cpumsr = msr_check_and_set(MSR_VEC);
+   flags |= MSR_VEC; /* must exit this routine with MSR_VEC set in MSR */
 
if (current->thread.regs && (current->thread.regs->msr & MSR_VEC)) {
check_if_tm_restore_required(current);
@@ -317,6 +325,10 @@ void flush_altivec_to_thread(struct task_struct *tsk)
if (tsk->thread.regs->msr & MSR_VEC) {
BUG_ON(tsk != current);
giveup_altivec(tsk);
+   /* giveup_altivec() clears MSR_VEC
+* unconditionally from MSR
+*/
+   flags &= ~MSR_VEC;
}
hard_preempt_enable(flags);
}
@@ -405,6 +417,10 @@ void flush_vsx_to_thread(struct task_struct *tsk)
if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) {
BUG_ON(tsk != current);
giveup_vsx(tsk);
+   /* giveup_vsx() clears MSR_FP,VEC,VSX unconditionally
+* so clear them in flags
+*/
+   flags &= ~(MSR_FP|MSR_VEC|MSR_VSX);
}
hard_preempt_enable(flags);
}
@@ -436,6 +452,7 @@ void giveup_spe(struct task_struct *tsk)
msr_check_and_set(MSR_SPE);
__giveup_spe(tsk);
msr_check_and_clear(MSR_SPE);
+flags &= MSR_SPE;
hard_cond_local_irq_restore(flags);
 }
 EXPORT_SYMBOL(giveup_spe);
@@ -448,7 +465,8 @@ void enable_kernel_spe(void)
 
flags = hard_cond_local_irq_save();
msr_check_and_set(MSR_SPE);
-
+   /* must exit this routine with MSR_SPE set in MSR */
+   flags |= MSR_SPE;
if (current->thread.regs && (current->thread.regs->msr & MSR_SPE)) {
check_if_tm_restore_required(current);
__giveup_spe(current);
@@ -467,6 +485,10 @@ void flush_spe_to_thread(struct task_struct *tsk)
BUG_ON(tsk != current);
tsk->thread.spefscr = mfspr(SPRN_SPEFSCR);
giveup_spe(tsk);
+   /* giveup_spe clears MSR_SPE from MSR, so must clear
+* it here to exit rouitine properly
+*/
+   flags &= MSR_SPE;
}
hard_preempt_enable(flags);
}
@@ -531,6 +553,7 @@ void giveup_all(struct task_struct *tsk)
 #endif
 
msr_check_and_clear(msr_all_available);
+flags &= ~msr_all_available;
hard_cond_local_irq_restore(flags);
 }
 EXPORT_SYMBOL(giveup_all);
@@ -563,6 +586,7 @@ void restore_math(struct pt_regs *regs)
}
 
msr_check_and_clear(msr_all_available);
+flags &= ~msr_all_available;
hard_cond_local_irq_restore(flags);
 
regs->msr = msr;
@@ -1225,6 +1249,8 @@ struct task_struct *__switch_to(struct task_struct 
*prev,
 
/* Save FPU, Altivec, VSX and SPE state */

Re: [Xenomai] Unrecoverable FP Unavailable Exception 801

2018-04-25 Thread Steven Seeger
On Wednesday, April 25, 2018 12:49:53 PM EDT Philippe Gerum wrote:
> The fix makes a lot of sense, thanks. This bug slipped under the radar
> for years likely because enabling the ppc fpu in kernel context mainly
> happens when fixing up alignment issues, which rt apps tend to avoid in
> the first place for performance reason by only using aligned memory
> accesses (synchronous exceptions are not that cheap latency-wise).
> 
> Regarding the ipipe-4.9.y series, there are several other spots touching
> the msr in this file which may be affected the same way (vsx, altivec,
> spe, anything that involves calling msr_check_and_set/clear_msr() under
> hard masking basically).
> 
> Adding a dedicated irq_restore helper which does not touch any other bit
> aside of MSR_EE would make sense there. The BOOK-E version of
> irq_save/restore specifically uses the wrtee* instructions not to touch
> those bits, so we may assume this would be semantically correct to do
> the same for BOOK3-S.
> 
> PS: CCing Steven who took over the maintenance of the ppc pipeline from
> kernel 4.14 and on.

Hi guys. Thanks for pointing this out, Jouko. I've looked over this briefly
and agree this is a pretty widespread problem. I will take a stab at it
tomorrow.

Regards,
Steven






Re: [Xenomai] Unrecoverable FP Unavailable Exception 801

2018-04-04 Thread Steven Seeger
On Wednesday, April 4, 2018 11:42:08 AM EDT Sagi Maimon wrote:
> Thanks for your reply.
> You are right, but unfortunately in my system it happens on the Linux side:
> The Linux kernel code that fixes the alignment during align exception (600)
> works most of the time, but sometimes during the alignment fix
> "Unrecoverable FP Unavailable Exception" occurs. My suspicion is that
> xenomai still holds the FPU, but I am not familiar enough with the xenomai
> code.
> 
> Anyhow, I am still trying to make it occur on the xenomai side.

Narrow it down to a simple few lines that cause the error, and put it in a 
shadowed xenomai thread and see if the error still occurs. This will help 
eliminate the possibility of any xenomai/linux interaction. Of course, you can 
just put all your energy into testing with an upgraded kernel and xenomai, 
too.

Steven






Re: [Xenomai] Unrecoverable FP Unavailable Exception 801

2018-04-04 Thread Steven Seeger
On Wednesday, April 4, 2018 8:57:31 AM EDT Sagi Maimon wrote:
> HI,
> I did it only from Linux side.
> I can try doing it from xenomai side too.
> Please explain how would it help to investigate the problem?
>

If the problem does not occur in a regular linux thread, but only occurs in a 
xenomai-scheduled thread, then it drastically narrows down where the problem 
might be. Plus, you get a further test if you upgrade your kernel and ipipe
patch.

Steven






Re: [Xenomai] Unrecoverable FP Unavailable Exception 801

2018-04-03 Thread Steven Seeger
On Tuesday, April 3, 2018 2:31:38 PM EDT Lennart Sorensen wrote:
> On Tue, Apr 03, 2018 at 09:57:52AM -0400, Steven Seeger wrote:
> > I'm a little curious about the fact that the address this occurred at is
> > 0xc0003858, which suggests it's kernel code. As Philippe asked, what
> > version of the ipipe are you using? Also, what is your exact CPU? The
> > e300c2 does not have an FPU, so this would be quite a configuration
> > problem.
> 
> Did I miss e300c2 being mentioned?  If so, that would be a problem.

He did not mention the e300c2 but I just wanted to clarify. From a private 
email, it sounds like Sagi is working on a legacy product from before his time 
at the company that has periodically had this problem and required their 
customers to reboot it.

This brings back many "fond" memories of Gilles helping me chase down a 
similar bug on the Geode about ten years ago.

Sagi is using some extremely old stuff, so if he has the capability to put a 
fix in and recompile he might as well update the kernel if possible.

Steven






Re: [Xenomai] Unrecoverable FP Unavailable Exception 801

2018-04-03 Thread Steven Seeger
On Monday, April 2, 2018 10:54:54 AM EDT Sagi Maimon  
wrote:

> Hi all,
> I am working with: Powerpc e300 SOC.
> I am using: xenomai-2.6.4 version
> 
> I am experiencing this exception on my Linux :
> Unrecoverable FP Unavailable Exception 801 at c0003858
> Oops: Unrecoverable FP Unavailable Exception, sig: 6 [#1]
> 
> This happens during "alignment exception" (600)

I'm a little curious about the fact that the address this occurred at is 
0xc0003858, which suggests it's kernel code. As Philippe asked, what version 
of the ipipe are you using? Also, what is your exact CPU? The e300c2 does not 
have an FPU, so this would be quite a configuration problem.

Are you using any kernel threads that use the FPU?

Steven






Re: [Xenomai] Xenomai community meeting 02.02.18

2018-01-10 Thread Steven Seeger
On Tuesday, January 9, 2018 6:22:02 AM EST Henning Schild wrote:
> Hi,
> 
> we have already talked about the "elephant in the room" and in that
> thread we also talked about a face-to-face meeting of current and
> future maintainers, contributers and users.
> 
> We have now agreed on a time and place for this meeting, and i want to
> announce it and invite the whole community to it.
> 
> Where: somewhere in Brussels (details will follow)
>and remotely, TelCo link will be published
> When: 02.02.18 13:00-18:00 (CET)
> What: we do not have a clear agenda yet, but there is probably enough
>   material to fill the timeslot
> Attendees: Philippe Gerum, Jan Kiszka and myself will be there, you as
>well?
> 
> That meeting is colocated with https://fosdem.org/2018/ if you are
> planning to attend in person, keep that in mind and maybe stay the
> weekend. Jan and me will be around until Sunday afternoon, Phillippe
> will leave Brussels on Saturday.
> 
> Sorry for the short notice, i still hope more people will attend in
> person or at least in the TelCo. We will rent a coworking space for the
> meeting, so the location depends on the number of participants.
> If you want to participate please answer this mail before 01/24/18.
> 
> Looking forward to the meeting!
> Henning

I won't have any way to attend personally, but I may be able to participate 
remotely. No promises though, as it's a normal workday for me.

Steven






Re: [Xenomai] New year, new roles

2017-12-18 Thread Steven Seeger
Sorry I should say that "a team at Johnson Space Center" not the whole center 
itself. :)

Steven





Re: [Xenomai] New year, new roles

2017-12-18 Thread Steven Seeger
I don't really want to congratulate Jan, because I think the progression of 
the project is very sad for all of us. (And I am sure he doesn't want to be 
congratulated for having more work to do -- haha.) I can't imagine things 
would have ever needed to change if Gilles was still with us.

Like Jan, I haven't been involved in the project much for years. I've been 
working at NASA since 2009 and except for some private-sector consulting here 
and there, hard-realtime Linux has been the furthest thing from my life... 
until recently.

Johnson Space Center is going to use Xenomai as part of their new manned-
spaceflight software reference platform going forward. It's a very exciting 
time and I seem to be the only person at the agency with any hard-realtime 
Linux experience. :)

I've never worked too closely with Jan but I've come across him via email 
several times throughout the years and I hope I can be of help to you both. 
He's definitely a smart guy and I respect him. 

Given Jan's involvement and knowledge I think he's a great choice to lead the 
effort.

I'm hoping the three of us can work together and make the project better! I 
suspect in the next year I will actually be paid to be more involved, so that 
makes it easier on me. :)

In other news, a defense/aerospace company has paid me to do a Linux BSP for 
them. That's been delivered. They will be providing Xenomai with it for their 
customers. So that could also be important for us. This company sells great 
boards for relatively low cost.

If there's anything I can do for you guys let me know.

Steven Seeger
Software Engineer
Embedded Flight Systems, Inc.
304-550-8800 (cell)

On Monday, December 18, 2017 8:48:08 AM EST Jan Kiszka wrote:
> On 2017-12-17 20:16, Philippe Gerum wrote:
> > Sixteen years after Gilles and I founded the Xenomai project, time has
> > come for me to hand over the leadership to a contributor recognized for
> > his skills, bringing fresh ideas, and a creative roadmap.
> > 
> > Jan Kiszka has agreed to take over the Xenomai leadership gradually from
> > me. He will be ramping up his involvement in project steering and
> > development head gatekeeping during a transition period starting today.
> > This period will end by September 2018 at the latest.
> > 
> > I will keep on maintaining the I-pipe for the ARM architecture, helping
> > in reviewing changes to the Xenomai code base too. Generally speaking,
> > I'll do my best to transfer knowledge about the Xenomai implementation
> > to people or organizations willing to contribute to the project.
> > 
> > Steven Seeger has volunteered to maintain the I-pipe support for the
> > ppc32 architecture, and I'll be working with him to ensure a smooth
> > transition here too.
> > 
> > Jan's team at Siemens is ramping up their current involvement in the
> > I-pipe x86 port.
> > 
> > PS: The project is still looking for someone who would be willing to
> > maintain the I-pipe arm64 port in the long run.
> > 
> > Thanks,
> 
> First of all, a big thank you, Philippe, for investing so much of your
> time and energy into the project over the past 16 years! Not many folks
> probably realized how much that was (and still is) and how much they
> profit(ed) from this.
> 
> When you asked me to take over, I felt honored but also highly
> challenged to lead the project in this tricky phase. I'll try my best.
> 
> My goal is clearly not to replace you (which would be impossible
> anyway), but to make us, the community, a bigger and more active family.
> We need more contributions and more commitment to keep the project
> rolling. So I'll specifically focus on reshaping our scope, clarifying
> what we want and can deliver in high quality also in the future - and
> who can support the associated efforts.
> 
> As you said, we want to make the transition as smooth as possible. So I
> also do not want to jump in directly but rather start with small steps.
> I wasn't that active in the community anymore as some years ago, and I'm
> lacking recent overview over corners that we are not in touch with in
> our internal projects, like archs != x86 or Mercury. That needs to change.
> 
> As one of the first community-building steps, we are considering to have
> a meet-up around FOSDEM. Henning will follow-up on this soon. I'm also
> planning for some talk or BoF at ELC in Portland next year (provided I
> get a slot), to spread plans and ideas and to have further face-to-face
> discussions.
> 
> Jan







[Xenomai] ppc32 users

2017-12-06 Thread Steven Seeger
Is anyone actively using xenomai on ppc32? I'm going to be doing the 4.14 
migration of the IPIPE to ppc32 and was hoping for some volunteers to help 
test.

Steven



Re: [Xenomai] [RFC] RTnet, Analogy and the elephant in the room

2017-11-23 Thread Steven Seeger
> Hi Steven,
> 
> I have remote access to many ppc boards and could help here. This said
> 85xx, 40x and 44x already cover much of the ppc32 scope running Xenomai
> - maybe adding mpc52xx would be good, I have one here. I don't see much
> traction these days for Xenomai over ppc64, so I'm not even sure whether
> this port is still relevant.

I have never used ppc64 so I would take some time coming up to speed with it 
(and would need access to a board.) I am sure you can tell from all our 
private emails over the last couple years that I wouldn't have a problem with 
ppc32. 

I am unsure how remote access to boards works especially at this level of 
work. Is there some way to power on and off if needed or do they depend on a 
watchdog? We can discuss this offline.

Steven




Re: [Xenomai] [RFC] RTnet, Analogy and the elephant in the room

2017-11-22 Thread Steven Seeger
Philippe, as we have previously discussed I am willing to take over the PPC i-
pipe maintenance on my personal time if it will help. My only issue is right 
now the only PPC board I have in my possession is an 8548-based board. I 
might be able to borrow some 405 and 440-based boards from work if needed. Of 
course, we can always depend on the grateful Xenomai users for test and 
feedback. :)

I am not sure my group is going to pursue RTnet; otherwise I would have been 
happy to help there. I think negotiations are still open on that one, though.

If we ever get around to releasing my microblaze i-pipe patch then I am happy 
to maintain that as well. :) Feel free to email me privately if you want to 
discuss.

I really miss Gilles and maybe I can help carry on his memory in some small 
way. Hopefully he is in heaven where there are no geode CPUs to chase FPU bugs 
on :)

Steven




Re: [Xenomai] Gilles Chanteperdrix, 1975-2016

2016-08-16 Thread Steven Seeger
Words cannot adequately describe the loss of Gilles for our hard realtime 
Linux community. I have communicated with him for years, and always had the 
utmost respect for him. He took the time to troubleshoot a strange FPU issue on 
the Geode with me back in early 2007. We spent weeks going back and forth and 
he figured it out. (I claim no credit for figuring it out!) He was a brilliant 
man and someone I considered a friend and colleague. I know that he would want 
us to continue to make Xenomai better and further proliferate it in industry. 
Let's do so with Gilles in our hearts.

Steven 



Re: [Xenomai] userspace absolute timer value

2015-12-23 Thread Steven Seeger
On Wednesday, December 23, 2015 19:43:41 Gilles Chanteperdrix wrote:
> If I understand correctly, your problem is that struct timespec
> tv_sec member has 32 bits. Well, I am afraid there is not much we
> can do about that (I heard mainline has a plan to switch to a new
> timespec with a 64 bits tv_sec, but I do not know how much of that
> plan has been implemented).

Yes, this is exactly my problem.

> 
> Can you not call clock_settime to set a wallclock offset which will
> at least allow CLOCK_REALTIME to behave as expected ?

The issue is with the testsuite/latency app. It uses 
clock_gettime(CLOCK_MONOTONIC), adds a millisecond to that value, and then 
uses that as the absolute start time of the latency thread. All calculations 
are based on this primed value. 

There really is no reason for my board to come up with such a ridiculous 
timebase value. I have no idea why it does that. I set it to 0 very early in 
the kernel boot cycle and it fixed the issue. (This board is loaded via jtag so 
there may be some weirdness there.) This fix will last 136 years, right? :) My 
point was just that if the timebase is not a reasonable value I think this bug 
will manifest. 

IMHO there is no benefit to allowing us to say we want some task to start in 
the year 500,000,000,000, so there isn't really a need for such large numbers 
in this one use-case.

Your idea of a fix is essentially correct, and should work across all systems. 
However, I was trying to run the standard latency app which should also work 
across all systems! :)

Steven




Re: [Xenomai] userspace absolute timer value

2015-12-23 Thread Steven Seeger
All,

The issue that I had with userspace absolute time to start a timer (what 
latency test does) was due to a quirk on my board where the powerpc timebase 
was coming up as 0xdXXX which was causing the 32-bit userland to 
lose precision when getting the monotonic clock value. The latency test gets 
the time, adds a millisecond, and uses this time to start the process. However 
on my machine the time was way off due to the loss of precision. (there were 
more than 2^32 seconds, but time_t is only 32-bit.) On my board, adding some 
code to set the timebase to 0 in head_44x.S cleared up all the issues. 
Everything is working for me now. This appears to be a problem with 
how cobalt deals with 64-bit ns counters and 32-bit userspace clocks, however 
I could be missing something.

Steven




[Xenomai] userspace absolute timer value

2015-12-14 Thread Steven Seeger
Since my last post I seem to have solved the issues with my ppc44x board hard 
locking up. I've relayed this info to Philippe and hopefully he will confirm 
that I'm correct and that I should make a patch. However in the process, I've 
stopped seeing the latency -t1 and latency -t2 work correctly.

One thing I do notice now with latency -t0 is that the timerfd_handler in 
/proc/xenomai/timer/coreclk shows a tremendous number of seconds (1bil+) and 
you can keep printing the output and watching it count down a second at a 
time. This means there may be some kind of discrepancy between the 
CLOCK_MONOTONIC and the timer that's used to program shots.

I did look at the ticks for the coreclock and it appears to be 400 ticks per 
microsecond which is what the cobalt core is reporting via 
xnclock_ns_to_ticks() (I pass it 1000 ns and get 400 as a result)

Can anyone point me in the direction of where to look for this issue?

Thanks,
Steven




Re: [Xenomai] powerpc 440 userspace latency test does nothing

2015-12-05 Thread Steven Seeger
More info on this.

timerfd_gettime() on latency's timerfd returns 0 seconds and 1 nanosecond, 
indicating the timer has not expired. No matter how much time I sleep before 
this the result is the same. 

The output of cat /proc/xenomai/timer/coreclk:

SCLINUX: / # cat /proc/xenomai/timer/coreclk 
CPU  SCHED/SHOT  TIMEOUT   INTERVAL  NAME
0    2320/1334   1ms523us  -         [host-timer]
0    2/2         -         -         latency
0    1/0         -         100us     timerfd_handler

The "latency" with 2/2 is a couple of sleep(1) I added for debugging purposes 
during the beginning of the program.

I've modified latency to look like this:

	err = clock_gettime(CLOCK_MONOTONIC, &expected);
	if (err)
		error(1, errno, "clock_gettime()");

	printf("expected is: %Ld %ld\n", (long long int)expected.tv_sec,
	       expected.tv_nsec);
	sleep(1);
	err = clock_gettime(CLOCK_MONOTONIC, &expected);
	printf("expected is: %Ld %ld\n", (long long int)expected.tv_sec,
	       expected.tv_nsec);

	fault_threshold = CONFIG_XENO_DEFAULT_PERIOD;
	nsamples = (long long)ONE_BILLION / period_ns;
	/* start time: one millisecond from now. */
	expected.tv_nsec += 1000000;
	if (expected.tv_nsec > ONE_BILLION) {
		expected.tv_nsec -= ONE_BILLION;
		expected.tv_sec++;
	}
	timer_conf.it_value = expected;
	timer_conf.it_interval.tv_sec = period_ns / ONE_BILLION;
	timer_conf.it_interval.tv_nsec = period_ns % ONE_BILLION;

	printf("expected is: %Ld %ld\n", (long long int)expected.tv_sec,
	       expected.tv_nsec);
	err = timerfd_settime(tfd, TFD_TIMER_ABSTIME, &timer_conf, NULL);
	if (err)
		error(1, errno, "timerfd_settime()");

	sleep(1);
	err = timerfd_gettime(tfd, &timer_conf);
	if (err)
		error(1, errno, "steven temp");
	printf("timer got time %Ld %ld\n",
	       (long long int)timer_conf.it_value.tv_sec,
	       timer_conf.it_value.tv_nsec);

Which yields the output:

expected is: 1582352598 771936946
expected is: 1582352599 772083521
expected is: 1582352599 773083521
timer got time 0 1

I've also confirmed that timerfd_read() reaches xnsynch_sleep_on

Seeing as how coreclk is also used for the host-timer and that works correctly 
I am totally stumped. Anybody have any ideas? :)

Steven





[Xenomai] powerpc 440 userspace latency test does nothing

2015-12-05 Thread Steven Seeger
Hey guys.

I am running a virtex5 ml507 board from Xilinx configured to use the internal 
powerpc 440 processor in the FPGA. I compiled a 3.18.20 kernel and patched it 
and build xenomai 3.0.1.

I am able to run latency -t 1 and latency -t 2

When I try to run just latency, however, I get no results. It seems like 
userspace threads are not ever waking up. Here's the result of cat 
/proc/xenomai/sched/stat:

SCLINUX: / # cat /proc/xenomai/sched/stat
CPU  PIDMSWCSWXSCPFSTAT   %CPU  NAME
  0  0  0  32402140  0 00018000  100.0  [ROOT]
  0  8981  1  5  0 000680c00.0  latency
  0  9001  2  5  0 000680420.0  display-898
  0  9012  3  7  0 0004c0420.0  sampling-898
  0  0  0  32450520  0 0.0  [IRQ512: [timer]]

Before I spend too much time tearing into this I wanted to ask if anyone's seen 
this issue before.

It should be noted that clocktest runs fine. switchtest runs. xeno-test just 
sits there though. 

Thanks,
Steven




Re: [Xenomai] powerpc 440 userspace latency test does nothing

2015-12-05 Thread Steven Seeger
Just to add more info, the issue is that the read() call from the timerfd 
never returns. I've never seen this occur before. 

Steven

On Saturday, December 05, 2015 03:02:38 Steven Seeger wrote:
> Hey guys.
> 
> I am running a virtex5 ml507 board from Xilinx configured to use the
> internal powerpc 440 processor in the FPGA. I compiled a 3.18.20 kernel and
> patched it and build xenomai 3.0.1.
> 
> I am able to run latency -t 1 and latency -t 2
> 
> When I try to run just latency, however, I get no results. It seems like
> userspace threads are not ever waking up. Here's the result of cat
> /proc/xenomai/sched/stat:
> 
> SCLINUX: / # cat /proc/xenomai/sched/stat
> CPU  PIDMSWCSWXSCPFSTAT   %CPU  NAME
>   0  0  0  32402140  0 00018000  100.0  [ROOT]
>   0  8981  1  5  0 000680c00.0  latency
>   0  9001  2  5  0 000680420.0  display-898
>   0  9012  3  7  0 0004c0420.0  sampling-898
>   0  0  0  32450520  0 0.0  [IRQ512: [timer]]
> 
> Before I spend too much time tearing into this I wanted to ask if anyone's
> seen this issue before.
> 
> It should be noted that clocktest runs fine. switchtest runs. xeno-test just
> sits there though.
> 
> Thanks,
> Steven

Steven Seeger
Software Engineer Codes 443/444/582
Embedded Flight Systems, Inc.
304-550-8800 (cell)
301-286-5641 (office)




Re: [Xenomai] powerpc 440 userspace latency test does nothing

2015-12-05 Thread Steven Seeger
Me again. The issue appears to be that there's a discrepancy between the value 
returned by clock_gettime(CLOCK_MONOTONIC... and the cobalt kernel's internal 
notion of time. Changing latency to start with a 100 nanosecond relative time 
(which adds some constant latency to the expected result since both the 
expected and actual are no longer based on the same point in time) allows it 
to run successfully.

It seems if I telnet into the board and generate too much text on the screen 
the board locks up. I don't know if the board has any thermal cutoff (it gets 
very hot) or if something is wrong with the kernel. I manually patched in some 
vendor code from a 3.6 kernel into the 3.18.20 kernel to use the xenomai patch 
so I could be missing something, too.

I'm not sure if this makes a difference, but here is clock_gettime's returned 
tv_sec and tv_nsec (with a space in the middle) and the line after that is an 
immediate read of ticks: from /proc/xenomai/clock/coreclk:

1582347100 704906148
 ticks: 15136959121354131671

You can see there's an extra digit in the coreclk output compared to what 
comes out of clock_gettime.

I went ahead and used the ipipe-core-3.14.39-powerpc-8 patch and built the 
vendor's 3.14.2 kernel against it. I didn't have to do as much work as I did 
with the 3.18.20 patch. I had the same results (both with latency and with 
freezing.)

At this point I've got the answers I need from this exercise, but it would be 
good to understand the problem and possibly fix it. If anyone can point me in 
the right direction maybe I can take a look at it.

Steven



[Xenomai] xenomai in vmware smp crash

2012-11-29 Thread Steven Seeger
Gentlemen,

 

Hello! It's been many years since I've been on the list. I still dabble in
Xenomai when jobs come up. I wish I had more need to use it so I can
contribute. I've missed my friendly back and forth emails with Gilles on
geopolitical issues. 

 

I was asked by a client to patch an Ubuntu 64-bit VM in VMWARE with xenomai
to use as a development platform. It obviously is not going to be realtime
and nobody is concerned about that. I built a kernel with SMP mode but have
just one core for my VM, and everything works fine. However, if I give my VM
more than one processor or core, I get a kernel panic after about 6-8
seconds after bootup when the login GUI appears.

 

I set up the kernel to use a serial console and captured the panic:

 

[   17.062448] [ cut here ]

[   17.065610] kernel BUG at arch/x86/kernel/ipipe.c:592!

[   17.069109] invalid opcode:  [#1] SMP 

[   17.070161] CPU 0 

[   17.070575] Modules linked in: bnep rfcomm bluetooth parport_pc ppdev
snd_ens1371 gameport snd_ac97_codec ac97_bus snd_pcm snd_seq_midi
snd_rawmidi snd_seq_midi_event snd_seq joydev snd_timer snd_seq_device
mac_hid snd vmw_balloon psmouse soundcore snd_page_alloc serio_raw i2c_piix4
vmwgfx ttm drm shpchp lp parport usbhid hid e1000 mptspi mptscsih mptbase
vmw_pvscsi vmxnet3

[   17.079691] 

[   17.080167] Pid: 0, comm: swapper/0 Not tainted 3.2.31xenomai #2 VMware,
Inc. VMware Virtual Platform/440BX Desktop Reference Platform

[   17.083084] RIP: 0010:[8101de4e]  [8101de4e]
__ipipe_handle_irq+0x1be/0x1c0

[   17.085104] RSP: 0018:81a03e20  EFLAGS: 00010286

[   17.086262] RAX: da80 RBX:  RCX:


[   17.088014] RDX: ffdf RSI: 81a03e58 RDI:
81a03e38

[   17.089693] RBP: 81a03e40 R08:  R09:


[   17.091284] R10:  R11:  R12:
da80

[   17.092850] R13: 81a03e38 R14: 88003be0 R15:


[   17.094562] FS:  () GS:88003be0()
knlGS:

[   17.096556] CS:  0010 DS:  ES:  CR0: 8005003b

[   17.098080] CR2: 7f7f2e678eb0 CR3: 38df4000 CR4:
06f0

[   17.099696] DR0:  DR1:  DR2:


[   17.101322] DR3:  DR6: 0ff0 DR7:
0400

[   17.103029] Process swapper/0 (pid: 0, threadinfo 81a0, task
81a0f020)

[   17.104932] Stack:

[   17.105400]  81a03fd8 81ad3920 


[   17.107190]  81a03ee8 815c6d5d 81a03e58
81a03ee8

[   17.109004]  810605ad  


[   17.111037] Call Trace:

[   17.112251]  [815c6d5d] irq_move_cleanup_interrupt+0x5d/0x90

[   17.114259]  [810605ad] ? get_next_timer_interrupt+0x1cd/0x260

[   17.115958]  [8101d47a] ? __ipipe_halt_root+0x2a/0x40

[   17.117321]  [8100a5e3] default_idle+0x53/0x1d0

[   17.118611]  [81001236] cpu_idle+0xe6/0x130

[   17.119752]  [81596afe] rest_init+0x72/0x74

[   17.120933]  [81b32bdb] start_kernel+0x3e9/0x3f6

[   17.122203]  [81b32322] x86_64_start_reservations+0x132/0x136

[   17.123723]  [81b3245b] x86_64_start_kernel+0x135/0x13c

[   17.125223] Code: 0f 1f 44 00 00 48 83 a0 70 07 00 00 fe 4c 89 ee bf 20
00 00 00 e8 13 c8 0a 00 e9 f3 fe ff ff 89 d3 be 01 00 00 00 e9 a6 fe ff ff
0f 0b 55 48 89 e5 53 48 81 ec b8 00 00 00 66 66 66 66 90 9c 5b 

[   17.131844] RIP  [8101de4e] __ipipe_handle_irq+0x1be/0x1c0

[   17.133398]  RSP 81a03e20

[   17.134295] ---[ end trace bc97edd2d31fbe38 ]---

[   17.135365] Kernel panic - not syncing: Attempted to kill the idle task!

[   17.136838] Pid: 0, comm: swapper/0 Tainted: G  D  3.2.31xenomai
#2

[   17.138503] Call Trace:

[   17.139159]  [815b44fb] panic+0x91/0x1a2

[   17.140313]  [8101d542] ? __ipipe_do_IRQ+0x82/0xa0

[   17.141809]  [81053904] do_exit+0x784/0x870

[   17.143129]  [815be39b] ? _raw_spin_unlock_irqrestore+0x1b/0x30

[   17.144692]  [8105106c] ? kmsg_dump+0x5c/0xf0

[   17.145944]  [815bf37f] oops_end+0xaf/0xf0

[   17.147049]  [810057d8] die+0x58/0x90

[   17.148112]  [815becb4] do_trap+0xc4/0x170

[   17.149317]  [81002db5] do_invalid_op+0x95/0xb0

[   17.150516]  [8101de4e] ? __ipipe_handle_irq+0x1be/0x1c0

[   17.151979]  [81056d7c] ? irq_exit+0x7c/0xb0

[   17.153308]  [815c944c] ? do_IRQ+0x6c/0xf0

[   17.154408]  [8101d84f] __ipipe_handle_exception+0x11f/0x2a0

[   17.155941]  [815c88ac] invalid_op+0x1c/0x60

[   17.157201]  [8101de4e] ? __ipipe_handle_irq+0x1be/0x1c0

[   17.158767]  [815c6d5d] irq_move_cleanup_interrupt+0x5d/0x90

[   17.160266]  

Re: [Xenomai] xenomai in vmware smp crash

2012-11-29 Thread Steven Seeger
I should mention that this is 3.2.31 and it's using the ipipe 3.2.21-x86-1
patch.





Re: [Xenomai] xenomai in vmware smp crash

2012-11-29 Thread Steven Seeger
Disregard my email. I did search before posting, but found nothing. I did
another search after posting for ipipe.c:592 and found it.

This patch solved my problem:

http://www.mail-archive.com/xenomai@xenomai.org/msg01010.html

Thanks,
Steven


-Original Message-
From: Steven Seeger [mailto:ssee...@mpl.com] 
Sent: Thursday, November 29, 2012 9:42 AM
To: xenomai@xenomai.org
Subject: Re: [Xenomai] xenomai in vmware smp crash

I should mention that this is 3.2.31 and it's using the ipipe 3.2.21-x86-1
patch.


