[Xenomai-core] Re: Benchmarking Plan
Philippe Gerum wrote:
This is a partial roadmap for the project, composed of the currently

Ah! I just _knew_ you would jump in as expected. The teasing worked :o)

Well done! It's the mark of a great leader to get folks to do what he wants, while making them think it's their idea ;-) (and I imagine that's why you cc'd Takis too :-)

[lots of snippage, throughout]

LiveCD has a few weaknesses though:

- can't test platforms w/o cdrom

I also think that's a serious issue. Aside from the hw availability problem (e.g. non-x86 eval boards), having to burn the CD is one step too many when time is a scarce resource. It often prevents running it as a fast check procedure, even in the absence of any noticeable problem. IOW, you won't burn a CD to run the tests unless you are really stuck with some issue. So a significant part of the interest of having a generic testsuite is lost: you just don't discover potential problems before the serious breakage is already in the wild.

One thing that would help expand LiveCD's usefulness is to be able to:

- mount pirt.iso in loopback on a host (my laptop),
- export it via NFS to the box-under-test,
- use pxelinux to feed LiveCD's kernel(s?) to the box when it boots.

I tried to do this, and IIRC ran into trouble with absolute symlinks from /etc.ro to /etc. The absoluteness fouls things when the ISO is mounted on, for example, /media/cd. I poked a bit at trying to convince NFS to resolve them as if they were used within a chroot jail, but I don't know enough about that.

- manual re-entry of data is tedious,
- no collection of platform data (available for automation)
- spotty info about cpu, memory, mobo, etc., which is largely user-supplied, so it can be wrong.
- no unattended test (still true?)
- unfiltered preposterous data. Sometimes the data sent are just rubbish, because of well-known hw-related dysfunction or misuse of the LiveCD. This perturbs the results uselessly.

Any ideas on how to reject these outliers? (defer till we have statistical analysis in place?)
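On the absolute-symlink problem: one possible workaround is to re-anchor the offending links so they resolve under the tree's mount point instead of escaping to the client's real root. This is a sketch under stated assumptions, not anything from the LiveCD: it needs a writable copy of the ISO tree (a mounted ISO9660 fs is read-only), and it assumes the client mounts the tree at one known, fixed path. The helper name is mine.

```shell
#!/bin/sh
# Sketch (hypothetical helper, not part of the LiveCD): rewrite every
# absolute symlink under $1 to point back inside the tree, so e.g.
# /etc.ro/X becomes $root/etc.ro/X.  Run this on a writable copy.
reanchor_links() {              # $1: absolute path of the tree copy
    root=$1
    find "$root" -type l | while read -r lnk; do
        tgt=$(readlink "$lnk")
        case $tgt in
            /*) ln -sfn "$root$tgt" "$lnk" ;;  # -n: replace the link itself
        esac
    done
}
# e.g.  reanchor_links /media/cd   (on a writable copy of that tree)
```

Relative links (../etc.ro/X) would survive any mount point and be more robust, but computing them in plain sh needs path arithmetic this sketch avoids.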
- difficulties so far to really get sensible, digested information out of the zillions of results, aside from very general figures (e.g. best performer). But this is more an issue of lacking data post-processors than of the LiveCD infrastructure itself.

Yep. And we *need* platform data to start categorizing results by platform, important config choices, etc. We should see narrower ranges of results, and be more able to reject the junk.

Additionally, LiveCD is a really great tool when it comes to helping people figure out whether their box or their brain has a problem with the tested software: by automatically providing a sane software (kernel+rtos) configuration and the proper way to run it quite easily, a number of people could determine whether their current lack of luck comes from their software configuration, or rather from a more serious problem.

Yeah. A pre-built world saves a lot of early thrashing.

- testsuite/cruncher ?

The cruncher measures the impact of using the interrupt shield, but this setting is now configured out by default, since a majority of people don't currently need it. Shield cost/performance figures are still useful to know, though.

OK. Adding 1 call to cruncher is simple. Over time we *may* collect enough data to make some A (shields up!) vs B (shields down!) comparisons. But I don't see the data to distinguish A from B; don't we need the xeno/ipipe equivalent of /proc/config.gz to do this?

wrt the testsuite/README cruncher notes, is this useful info? (manual insmods here...)

soekris:/usr/realtime/2.6.14-ski9-v1/testsuite/cruncher# cruncher
Calibrating cruncher...11773, done -- ideal computation time = 10023 us.
1000 samples, 1000 hz freq (pid=4183, policy=SCHED_FIFO, prio=99)
Nanosleep jitter: min = 60 us, max = 192 us, avg = 77 us
Execution jitter: min = 39 us (0%), max = 72 us (0%), avg = 51 us (0%)
Segmentation fault
soekris:/usr/realtime/2.6.14-ski9-v1/testsuite/cruncher# run
* * *
Type ^C to stop this application.
* *
Calibrating cruncher...11769, done -- ideal computation time = 10018 us.
1000 samples, 1000 hz freq (pid=4260, policy=SCHED_FIFO, prio=99)
Nanosleep jitter: min = 62 us, max = 195 us, avg = 79 us
Execution jitter: min = 46 us (0%), max = 77 us (0%), avg = 57 us (0%)

2. send your results to xenomai.testout-at-gmail.com

Obviously, an official gna.org ML might be more appropriate. Will appear soon.

Should this wait till xeno-test is upgraded to produce good data? I.e., prevent early bogus data from being submitted.

As said before, the problem that currently exists with LiveCD's data is that the results are crippled with irrelevant stuff, either because some people just tried it out over a simulator (ahem...), or had a serious hw-generated latency issue that basically made the whole run useless (mostly x86 issues: e.g. SMI stuff, legacy USB emulation, powermgmt, cpufreq arte
[Xenomai-core] xeno-test etc
folks,

I've been tinkering with xeno-test, adding a bunch of platform-info collection to support comparison of results from the various platforms submitted by different xenomai users:

- cat /proc/config.gz (if -f /proc/config.gz)
- cat /proc/cpuinfo
- cat /proc/meminfo
- cat /proc/adeos/* (foreach /proc/adeos/*)
- cat /proc/ipipe/* (foreach /proc/ipipe/*)
- xeno-config --v
- xeno-info
- (uname -a is available in xeno-config or xeno-info, don't need it separately)

However, I've gotten a bit bogged down in the workload-mgmt parts; they don't work quite the way I'd like, and bash is tedious for doing job control in scripts. What I want: support for 2 separate test scenarios, described by the latency command-line options:

if ( -T X>0 ):
  Workload job termination is detected, and the job is restarted. This keeps workload conditions uniform for the duration of the test. Not needed for the default workload, since dd if=/dev/zero never finishes; needed for if=/dev/hda1, since partitions are finite. (Real devices produce interrupts, so they make a better/harder test.)

if ( -w1 and -T 0 ):
  Workload termination should end the currently running latency test. The runtime of the latency test can then be realistically compared to the same workload running normally. This sort-of turns the test inside-out: the workload becomes the 'goal' and the latency tests are the load.

There are 2 conflicting forces (in the GOF sense) driving my thinking wrt this script:

- we want to support busybox, /bin/ash
- we want the above features (which I haven't gotten working in bash/ash yet)
- ash doesn't support several bash features, including at least 1 used in xeno-test (array vars)
- we want more features ??

Given the tedium of fixing the bash-script bugs, I ended up prepping 2 new experiments:

- ripped most bash code out, leaving only the job-control stuff. Tinkered with it, but it still has problems.
- wrote an 'equivalent' (to the above) perl version which does job-control (seems ok). The perl version can also run arbitrary shell loops: not just 'dd if=/dev/zero of=/dev/null', but also 'while true; do echo hey $$; sleep 5; done' or 'cd ../../lmbench; make rerun'.

The ash version: AFAICT, the sticking point is waiting for workload tasks. The shell's wait is a blocking call, so I can't use it to catch individual workload exits, but I also can't wait for all 3 workloads to end before restarting any of them (load uniformity). Trapping SIGCHLD almost works; I can't recover the child pid in the handler, but perhaps I don't need it. When I test using a dd workload, I'm getting spurious signals, and the sig-handler dumbly restarts it; but w/o the pid, it's hard to know whether the signalling process is really dying, or something else (which is partly what happens). The bad behavior I'm seeing now is that the sig-handler fires every 5 sec, in the while 1 { sleep 5 } loop. This suggests that I'm missing something important wrt the signals.

SO:

0. is the inside-out test scenario compelling?
1. can anyone see what's wrong with the ash version?
2. do I need an intermediate 'restart & wait' process to restart each (possibly finite) workload, so the main process can wait on all its children together (block till they all return)?
3. can someone see a simpler way?
4. if the bash script can't be fixed (seems unlikely), do we want a perl version too?
5. umm, tia

jimc

PS. With all the hard work going on, I feel a bit lazy sending 2 semi-broken script-snippets, but.. well, I *am* lazy. I'm also sending a semi-working version of xeno-test, as promised weeks ago. Pls don't apply, but give it a look-see. One 'controversial' addition is POD (plain old documentation). I think it's readable as it is, and it has the virtue of not being in a separate file, so it's easier to maintain. For a little flame-bait, I added a -Z option, which gives extended help (-H is taken by latency).

PPS.
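Re question 2: one shape that sidesteps the SIGCHLD problems entirely, sketched here with hypothetical names (this is not xeno-test code), is to give each finite workload its own tiny restart-wrapper subshell. The parent only ever tracks the wrapper pids, which never exit on their own, so it needs no CHLD trap and no per-child pid bookkeeping:

```shell
#!/bin/sh
# Sketch of the 'intermediate restart & wait process' idea: each finite
# workload runs inside a wrapper subshell that restarts it forever.
# The parent kills/waits on the wrappers, never the workloads, so no
# SIGCHLD handler is needed.  Plain ash-compatible (no arrays).
pids=""
start() {                       # $*: one workload command line
    ( while :; do "$@"; done ) >/dev/null 2>&1 &
    pids="$pids $!"             # remember the wrapper's pid
}

start dd if=/dev/zero of=/dev/null count=1000   # finite: gets restarted
start sleep 1                                   # ditto

sleep 3                         # stand-in for the latency-test run
kill $pids                      # end of test: tear the wrappers down
wait 2>/dev/null || true        # reap them; ignore the killed status
echo "workloads stopped"
```

The orphaned in-flight workload may outlive its wrapper by one iteration, which seems tolerable at test shutdown; the uniform-load property holds for the whole measured run.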
Long options would be nice, but are unsupported by getopts. To use them, we'd need to do so in both xeno-test and the *latency progs, since xeno-test passes latency options thru when it invokes *latency. Anyone seen a version that does long options, and would work on ash & bash?

ok, enough prattling.

Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in	(revision 91)
+++ scripts/xeno-test.in	(working copy)
@@ -7,8 +7,8 @@
 -w	spawn N workloads (dd if=/dev/zero of=/dev/null) default=1
 -d	used as alternate src in workload (dd if=$device ..)
 	The device must be mounted, and (unfortunately) cannot
-	be an NFS mount a real device (ex /dev/hda) will
-	generate interrupts
+	be an NFS mount. A real device (ex /dev/hda) will
+	generate interrupts, /dev/zero,null will not.
 -W
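On the long-options question: one portable trick that works in both ash and bash is to rotate the arg list once, folding each --long form into the short option getopts already knows, before the normal getopts loop runs. A sketch with hypothetical option names (not xeno-test's real set):

```shell
#!/bin/sh
# Sketch: translate --long options to short ones, then let getopts do
# the real parsing.  Rotating with shift + `set --` preserves relative
# argument order, and works in ash and bash alike.
parse() {
    for arg in "$@"; do         # the word list is expanded once, up front
        shift
        case $arg in
            --workloads) set -- "$@" -w ;;
            --period)    set -- "$@" -T ;;
            --*) echo "unknown option: $arg" >&2; return 1 ;;
            *)   set -- "$@" "$arg" ;;   # rotate everything else unchanged
        esac
    done
    w=1; T=0; OPTIND=1          # reset OPTIND so parse() is re-runnable
    while getopts "w:T:" opt; do
        case $opt in
            w) w=$OPTARG ;;
            T) T=$OPTARG ;;
        esac
    done
}

parse --workloads 3 --period 60
echo "w=$w T=$T"
```

In a real script the rotation would run on the script's own "$@" before the existing getopts loop; nothing else has to change.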
[Xenomai-core] heres a go at an adeos-ipipe-2.6.15-i386-1.1-01.patch
hi Philippe, everyone, happy 06 !

Out of curiosity, I applied adeos-ipipe-2.6.14-i386-1.1-01.patch on top of 15. The rejects were small, and simple-looking enough that even a lazy sod like myself might manually fix them, so I did. What's more, it built clean and booted! I haven't done anything more demanding than ls, df, etc, but hey, low-hanging fruit tastes just as good / even better ;-)

So here's hoping that you've not started this particular thankless task, and I've saved your cycles for something more dependent on your particular talents.

enjoy,
jimc

diff.try-15-ipipe-101.20060104.170829.bz2
Description: application/bzip

./arch/i386/kernel/io_apic.c.rej
./include/linux/preempt.h.rej
./init/main.c.rej
./kernel/irq/handle.c.rej
./kernel/Makefile.rej

***
*** 1313,1322
 	/*
 	 * Add it to the IO-APIC irq-routing table:
 	 */
- 	spin_lock_irqsave(&ioapic_lock, flags);
 	io_apic_write(0, 0x11+2*pin, *(((int *)&entry)+1));
 	io_apic_write(0, 0x10+2*pin, *(((int *)&entry)+0));
- 	spin_unlock_irqrestore(&ioapic_lock, flags);
 	enable_8259A_irq(0);
 }
--- 1315,1324
 	/*
 	 * Add it to the IO-APIC irq-routing table:
 	 */
+ 	spin_lock_irqsave_hw(&ioapic_lock, flags);
 	io_apic_write(0, 0x11+2*pin, *(((int *)&entry)+1));
 	io_apic_write(0, 0x10+2*pin, *(((int *)&entry)+0));
+ 	spin_unlock_irqrestore_hw(&ioapic_lock, flags);
 	enable_8259A_irq(0);
 }

***
*** 13,53
 extern void fastcall add_preempt_count(int val);
 extern void fastcall sub_preempt_count(int val);
 #else
- # define add_preempt_count(val)	do { preempt_count() += (val); } while (0)
- # define sub_preempt_count(val)	do { preempt_count() -= (val); } while (0)
 #endif
- #define inc_preempt_count() add_preempt_count(1)
- #define dec_preempt_count() sub_preempt_count(1)
- #define preempt_count() (current_thread_info()->preempt_count)
 #ifdef CONFIG_PREEMPT
 asmlinkage void preempt_schedule(void);
- #define preempt_disable() \
- do { \
- 	inc_preempt_count(); \
- 	barrier(); \
 } while (0)
- #define preempt_enable_no_resched() \
- do { \
- 	barrier(); \
- 	dec_preempt_count(); \
 } while (0)
- #define preempt_check_resched() \
- do { \
- 	if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
- 		preempt_schedule(); \
 } while (0)
- #define preempt_enable() \
- do { \
- 	preempt_enable_no_resched(); \
- 	preempt_check_resched(); \
 } while (0)
 #else
--- 13,70
 extern void fastcall add_preempt_count(int val);
 extern void fastcall sub_preempt_count(int val);
 #else
+ #define add_preempt_count(val)	do { preempt_count() += (val); } while (0)
+ #define sub_preempt_count(val)	do { preempt_count() -= (val); } while (0)
 #endif
+ #define inc_preempt_count() add_preempt_count(1)
+ #define dec_preempt_count() sub_preempt_count(1)
+ #define preempt_count() (current_thread_info()->preempt_count)
 #ifdef CONFIG_PREEMPT
 asmlinkage void preempt_schedule(void);
+ #ifdef CONFIG_IPIPE
+
+ #include
+
+ extern struct ipipe_domain *ipipe_percpu_domain[], *ipipe_root_domain;
+
+ #define ipipe_preempt_guard() (ipipe_percpu_domain[ipipe_processor_id()] == ipipe_root_domain)
+ #else
+ #define ipipe_preempt_guard() 1
+ #endif
+
+ #define preempt_disable() \
+ do { \
+ 	if (ipipe_preempt_guard()) { \
+ 		inc_preempt_count(); \
+ 		barrier(); \
+ 	} \
 } while (0)
+ #define preempt_enable_no_resched() \
+ do { \
+ 	if (ipipe_preempt_guard()) { \
+ 		barrier(); \
+ 		dec_preempt_count(); \
+ 	} \
 } while (0)
+ #define preempt_check_resched() \
+ do { \
+ 	if (ipipe_preempt_guard()) { \
+ 		if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
+ 			preempt_schedule(); \
+ 	} \
 } while (0)
+ #define preempt_enable() \
+ do { \
+ 	preempt_enab
Re: [Xenomai-core] heres a go at an adeos-ipipe-2.6.15-i386-1.1-01.patch
Kent Borg wrote:
Jim Cromie posted a patch attempt for 2.6.15 (yeah!), and the patch applied, but it doesn't compile for me:

[...]
  LD      init/built-in.o
  LD      .tmp_vmlinux1
arch/i386/kernel/built-in.o: In function `__ipipe_sync_stage':
: undefined reference to `ret_from_intr'
arch/i386/kernel/built-in.o: In function `__ipipe_sync_stage':
: undefined reference to `ret_from_intr'
make: *** [.tmp_vmlinux1] Error 1
~/linux-2.6.15$

For a .config I started with the stock Ubuntu 2.6.12-10-686 config file and then took the defaults for all the oldconfig questions. Suggestions?

You get to keep both pieces? ;-)

FWIW, the kernel was still running on my soekris 4801 till just now (I rebooted). Most of that time it was without its NFS root fs; my laptop was unconnected. It was doing *no* work of any kind tho. Not that this helps...

I'm trying a kernel build on my sony laptop pentium M. Different config than yours, but fuller than the soekris. It's running now; I'm typing on it. Wifi card works too! I've attached my working config; it might get you going. Pls report back what made your config not work, once you find it.

:: ipipe/Linux ::
Priority=100, Id=0x
irq0-15: accepted
irq32: grabbed, virtual
:: ipipe/version ::
1.1-01

FWIW, I diffed the 14 patch against mine, and was puzzled at the large textual diffs. I guessed that it was a file-ordering difference in the tar, and then forgot to mention this at send-time. This seems kinda odd, since I'm running linux. Philippe, are you running BSD? Are you creating patches from an fs other than ext3? That could explain the ordering. If not, I'm stumped. Maybe it's an svn thing; they have a berkeley-db-as-fs, don't they?

hth, jimc

Also, FWIW, I've been reading LKML, and it appears that Ingo Molnar's mutex patches have turned the corner with Linux.
They're not in, and I've got no crystal ball, but I suspect they will get into 17 or 16. A good writeup for the regular folks (like me) on this list is here: http://lwn.net/Articles/164380/

config.gz
Description: GNU Zip compressed data

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] heres a go at an adeos-ipipe-2.6.15-i386-1.1-01.patch
Philippe Gerum wrote:
You may want to try this one:
http://download.gna.org/adeos/patches/v2.6/i386/adeos-ipipe-2.6.15-i386-1.1-03.patch

Although I'm not surprised, I feel like telling someone:

[EMAIL PROTECTED] ~]$ uname -a
Linux harpo.jimc.earth 2.6.15-ipipe-103-sony #1 Sat Jan 7 13:54:09 MST 2006 i686 i686 i386 GNU/Linux
[EMAIL PROTECTED] ~]$

is NFS root for ..

soekris:~# uname -a
Linux soekris 2.6.15-ipipe-103-sk #3 Sat Jan 7 13:42:06 MST 2006 i586 GNU/Linux
soekris:~# df
Filesystem                           1K-blocks     Used  Available Use% Mounted on
192.168.42.1:/nfshost/soekris         20158372 14249292    4885080  75% /
tmpfs                                    63268        0      63268   0% /dev/shm
/dev/hda1                             484602268767190813   59% /mnt/flash
192.168.42.1:/boot                    20158400 14249312    4885088  75% /boot
192.168.42.1:/lib/modules             20158400 14249312    4885088  75% /lib/modules
192.168.42.1:/media/cdrecorder        20158400 14249312    4885088  75% /mnt/cd
192.168.42.1:/home                    20158400 14249312    4885088  75% /home
192.168.42.1:/mnt/dilbert             15638816 11716256    3128128  79% /mnt/dilbert
192.168.42.1:/usr/xenomai             20158400 14249312    4885088  75% /usr/xenomai
192.168.42.1:/home/jimc/dilbert/pirt  15638816 11716256    3128128  79% /mnt/pirt

woohoo!

I just diffed my-1.01 and real-1.03, and it looks like I missed a bunch of these:

> - spin_unlock_irqrestore(&ioapic_lock, flags);
> + spin_unlock_irqrestore_hw(&ioapic_lock, flags);

Did I get lucky? Or is it cuz I'm not SMP? Or cuz my sony has no APIC (as distinct from ACPI)? Do any PCs have an APIC, or is that something for servers / hi-end or embedded?

BIOS-provided physical RAM map:
 BIOS-e820:  - 0009fc00 (usable)
 BIOS-e820: 0009fc00 - 000a (reserved)
 BIOS-e820: 000e - 0010 (reserved)
 BIOS-e820: 0010 - 1ff4 (usable)
 BIOS-e820: 1ff4 - 1ff5 (ACPI data)
 BIOS-e820: 1ff5 - 2000 (ACPI NVS)
511MB LOWMEM available.
On node 0 totalpages: 130880
  DMA zone: 4096 pages, LIFO batch:0
  DMA32 zone: 0 pages, LIFO batch:0
  Normal zone: 126784 pages, LIFO batch:31
  HighMem zone: 0 pages, LIFO batch:0
DMI present.
ACPI: RSDP (v000 SONY ) @ 0x000f53f0
ACPI: RSDT (v001 SONY F1 0x20040323 MSFT 0x0097) @ 0x1ff4
ACPI: FADT (v002 SONY F1 0x20040323 MSFT 0x0097) @ 0x1ff40200
ACPI: OEMB (v001 SONY F1 0x20040323 MSFT 0x0097) @ 0x1ff50040
ACPI: DSDT (v001 SONY F1 0x20040323 MSFT 0x010d) @ 0x
ACPI: PM-Timer IO Port: 0x408
Allocating PCI resources starting at 3000 (gap: 2000:e000)
Built 1 zonelists
Kernel command line: ro root=LABEL=/
Initializing CPU#0
PID hash table entries: 2048 (order: 11, 32768 bytes)
Detected 1694.791 MHz processor.
Using pmtmr for high-res timesource
I-pipe 1.1-03: pipeline enabled.

BTW, what happened to 1.01 and 1.02?

tia
jimc
[Xenomai-core] xenomai posix build errs
build error after selecting POSIX interface, on svn-head - ie 515

[EMAIL PROTECTED] linux-2.6.15.1-ipipe-103-sonyI]$ make
  CHK     include/linux/version.h
  SPLIT   include/linux/autoconf.h -> include/config/*
  CHK     include/linux/compile.h
  CHK     usr/initramfs_list
  CC [M]  kernel/xenomai/skins/posix/sched.o
In file included from kernel/xenomai/skins/posix/../posix/internal.h:24,
                 from kernel/xenomai/skins/posix/../posix/thread.h:23,
                 from kernel/xenomai/skins/posix/sched.c:19:
include/xenomai/posix/posix.h:43:19: error: errno.h: No such file or directory
include/xenomai/posix/posix.h:44:21: error: pthread.h: No such file or directory
include/xenomai/posix/posix.h:45:19: error: sched.h: No such file or directory
include/xenomai/posix/posix.h:46:20: error: signal.h: No such file or directory
include/xenomai/posix/posix.h:47:23: error: semaphore.h: No such file or directory
include/xenomai/posix/posix.h:48:20: error: mqueue.h: No such file or directory
include/xenomai/posix/posix.h:49:18: error: time.h: No such file or directory
include/xenomai/posix/posix.h:50:19: error: fcntl.h: No such file or directory
include/xenomai/posix/posix.h:51:20: error: unistd.h: No such file or directory
include/xenomai/posix/posix.h:52:22: error: sys/mman.h: No such file or directory
include/xenomai/posix/posix.h:53:23: error: sys/ioctl.h: No such file or directory
include/xenomai/posix/posix.h:54:24: error: sys/socket.h: No such file or directory
In file included from kernel/xenomai/skins/posix/sched.c:19:
kernel/xenomai/skins/posix/../posix/thread.h:61: error: syntax error before ‘pthread_attr_t’
kernel/xenomai/skins/posix/../posix/thread.h:61: warning: no semicolon at end of struct or union
kernel/xenomai/skins/posix/../posix/thread.h:72: error: syntax error before ‘:’ token
kernel/xenomai/skins/posix/../posix/thread.h:73: error: syntax error before ‘:’ token
kernel/xenomai/skins/posix/../posix/thread.h:74: error: syntax error before ‘:’ token
kernel/xenomai/skins/posix/../posix/thread.h:86: error: ‘PTHREAD_KEYS_MAX’ undeclared here (not in a function)
kernel/xenomai/skins/posix/../posix/thread.h:94: error: syntax error before ‘}’ token
kernel/xenomai/skins/posix/../posix/thread.h:139: error: syntax error before ‘thread’
kernel/xenomai/skins/posix/../posix/thread.h:139: warning: function declaration isn’t a prototype
kernel/xenomai/skins/posix/../posix/thread.h: In function ‘thread_cancellation_point’:
kernel/xenomai/skins/posix/../posix/thread.h:143: error: ‘pthread_t’ undeclared (first use in this function)
kernel/xenomai/skins/posix/../posix/thread.h:143: error: (Each undeclared identifier is reported only once
kernel/xenomai/skins/posix/../posix/thread.h:143: error: for each function it appears in.)
kernel/xenomai/skins/posix/../posix/thread.h:143: error: syntax error before ‘cur’
kernel/xenomai/skins/posix/../posix/thread.h:143: error: ‘_taddr’ undeclared (first use in this function)
kernel/xenomai/skins/posix/../posix/thread.h:143: error: invalid use of undefined type ‘struct pse51_thread’
kernel/xenomai/skins/posix/../posix/thread.h: At top level:
kernel/xenomai/skins/posix/../posix/thread.h:143: error: syntax error before ‘)’ token
kernel/xenomai/skins/posix/sched.c: In function ‘sched_get_priority_min’:
kernel/xenomai/skins/posix/sched.c:28: error: ‘SCHED_OTHER’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c: In function ‘sched_get_priority_max’:
kernel/xenomai/skins/posix/sched.c:46: error: ‘SCHED_OTHER’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c: At top level:
kernel/xenomai/skins/posix/sched.c:72: error: syntax error before ‘tid’
kernel/xenomai/skins/posix/sched.c:74: warning: function declaration isn’t a prototype
kernel/xenomai/skins/posix/sched.c: In function ‘pthread_getschedparam’:
kernel/xenomai/skins/posix/sched.c:79: error: ‘tid’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c:85: error: ‘pol’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c:86: error: ‘par’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c: At top level:
kernel/xenomai/skins/posix/sched.c:93: error: syntax error before ‘tid’
kernel/xenomai/skins/posix/sched.c:95: warning: function declaration isn’t a prototype
kernel/xenomai/skins/posix/sched.c: In function ‘pthread_setschedparam’:
kernel/xenomai/skins/posix/sched.c:101: error: ‘tid’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c:107: error: ‘pol’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c:120: error: ‘SCHED_OTHER’ undeclared (first use in this function)
kernel/xenomai/skins/posix/sched.c:131: error: ‘par’ undeclared (first use in this function)
make[4]: *** [kernel/xenomai/skins/posix/sched.o] Error 1
make[3]: *** [kernel/xenomai/skins/posix] Error 2
make[2]: *** [kernel/xenomai/skins] Error 2
make[1]: *** [kernel/xenomai] Error 2
make: *** [kernel] Error 2
[Xenomai-core] some results on my laptop
some random successes ..

I've been running an ipipe kernel as the default since shortly after 1/7. Since then, I've had a couple of freezes on boot, and sometimes bash's auto-complete takes longer to complete, but other than that, things have been solid. But that kernel wasn't configured using scripts/prepare-kernel.sh, so it was missing the xeno_* modules.

[EMAIL PROTECTED] latency]$ sudo ./run -- -T 120 -h
* * *
Type ^C to stop this application.
* *
== Sampling period: 100 us
== Test mode: periodic user-mode task
warming up...
RTT|  00:00:05  (periodic user-mode task, 100 us period)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD| -4749| -4585| -3921| 0| -4749| -3921
RTD| -4749| -4583| -3015| 0| -4749| -3015
RTD| -4749| -4578| -2685| 0| -4749| -2685
RTD| -4750| -4581| -3015| 0| -4750| -2685
RTD| -4812| -4578| -3049| 0| -4812| -2685
RTD| -4757| -4584| -3785| 0| -4812| -2685
RTD| -4798| -4584| -2636| 0| -4812| -2636
RTD| -4757| -4582| -3029| 0| -4812| -2636
RTD| -4748| -4582| -3906| 0| -4812| -2636
RTD| -4751| -4582| -2666| 0| -4812| -2636
RTD| -4752| -4582| -2806| 0| -4812| -2636
RTD| -4748| -4581| -3157| 0| -4812| -2636
RTD| -4750| -4583| -2882| 0| -4812| -2636
RTD| -4779| -4583| -2816| 0| -4812| -2636
RTD| -4804| -4585| -2979| 0| -4812| -2636
RTD| -4747| -4583| -2684| 0| -4812| -2636
RTD| -4761| -4583| -2600| 0| -4812| -2600
RTD| -4763| -4583| -2782| 0| -4812| -2600
RTD| -4807| -4584| -2582| 0| -4812| -2582
RTD| -4787| -4582| -2721| 0| -4812| -2582
RTD| -4766| -4584| -2713| 0| -4812| -2582
RTT|  00:01:04  (periodic user-mode task, 100 us period)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD| -4750| -4584| -2936| 0| -4812| -2582
RTD| -4749| -4585| -2736| 0| -4812| -2582
RTD| -4814| -4586| -2935| 0| -4814| -2582
RTD| -4781| -4584| -2884| 0| -4814| -2582
RTD| -4749| -4579| -2743| 0| -4814| -2582
RTD| -4749| -4580| -2708| 0| -4814| -2582
RTD| -4750| -4585| -2870| 0| -4814| -2582
RTD| -4748| -4584| -2761| 0| -4814| -2582
RTD| -4766| -4585| -3152| 0| -4814| -2582
RTD| -4755| -4584| -3001| 0| -4814| -2582
RTD| -4749| -4584| -2810| 0| -4814| -2582
RTD| -4748| -4584| -2935| 0| -4814| -2582
RTD| -4754| -4584| -2734| 0| -4814| -2582
RTD| -4789| -4586| -3109| 0| -4814| -2582
RTD| -4765| -4583| -2835| 0| -4814| -2582
RTD| -4751| -4584| -2767| 0| -4814| -2582
RTD| -4750| -4582| -2831| 0| -4814| -2582
RTD| -4750| -4585| -2829| 0| -4814| -2582
RTD| -4782| -4584| -2751| 0| -4814| -2582
RTD| -4750| -4584| -2851| 0| -4814| -2582
---|--param|range-|--samples
HSD|min| 4 - 5 | 41
---|--param|range-|--samples
HSD|avg| 2 - 3 | 60
HSD|avg| 3 - 4 | 429
HSD|avg| 4 - 5 | 416620
---|--param|range-|--samples
HSD|max| 2 - 3 | 30
HSD|max| 3 - 4 | 11
HSH|--param|--samples-|--average--|---stddev--
HSS|min|41| 4.000| 0.000
HSS|avg|417109| 3.999| 0.040
HSS|max|41| 2.268| 0.449
---|||||-
RTS| -4814| -4583| -2582| 0|00:02:00/00:02:00

This was run on a pentium-M laptop, with the cpu-clock running at 600 MHz (capable of 1.7 GHz). I presume this might explain the negative latencies. I'm aware this is un-supported ..

The only thing that looks wrong is the test duration: I asked for 120 sec, and it gave me 40 samples. The test did take 120 to run.

heres
Re: [Xenomai-core] some results on my laptop
Philippe Gerum wrote:

Ive been running an ipipe kernel as the default since shortly after 1/7. Since then, Ive had a couple of freezes on boot, and sometimes bash's auto-complete takes longer to complete,

Eh? Maybe the CONFIG_PCI_MSI syndrome again?

Um, does this tell you anything?

$ zcat /proc/config.gz | grep PCI_MSI
# CONFIG_PCI_MSI is not set

I noticed the slow completion when doing some heavy disk stuff (lndir on a kernel tree, and probably diff -r on 2 kernel trees), so the laptop was pretty busy.

This was run on a pentium-M laptop, with cpu-clock running at 600 MHz, (capable of 1.7 GHz) I presume this might explain the negative latencies. Im aware this is un-supported ..

The negative values are just there because, even at 600MHz, the timing anticipation applied by the nucleus to compensate for the intrinsic latency of the box is too high; i.e. the nucleus performs a bit too well latency-wise, so the anticipated timer ticks end up being a bit early on schedule. IOW, all is fine. Given the figures above, you could probably reduce the anticipation factor by setting the CONFIG_XENO_HW_SCHED_LATENCY (Machine menu) parameter to, say, 2500 nanoseconds (the default null value tells the nucleus to use the pre-calibrated value, which might be higher than this for your setup).

Ok. With latencies == 0, calibration happens at runtime, so it would reflect the current workload (and, with cpufreq on, also the current clock frequency), correct?

Btw, I'm not sure if you enabled the local APIC in your kernel config; if you did not, you should: there is no reason to keep using the braindamage 8254 PIT when a LAPIC is available with your CPU.

Ok, now running this:

zcat /proc/config.gz | grep APIC
CONFIG_X86_GOOD_APIC=y
CONFIG_X86_UP_APIC=y
CONFIG_X86_UP_IOAPIC=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y

[EMAIL PROTECTED] latency]$ sudo ./run -- -T60 -s -t1
Password:
* * *
Type ^C to stop this application.
* *
== Sampling period: 100 us
== Test mode: in-kernel periodic task
warming up...
RTT|  00:00:05  (in-kernel periodic task, 100 us period)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD| -2165| 127216| 1399358|7582| -2165| 1399358
RTD| -2188| -205817| 1403255| 16431| -2188| 1403255
RTD| -2178| -207605| 1399828| 25271| -2188| 1403255
RTD| -2186| -208602| 1402253| 34096| -2188| 1403255
RTD| -2163| -206835| 1399144| 42943| -2188| 1403255
RTD| -2175| -206848| 1401346| 51785| -2188| 1403255
RTD| -2184| -210270| 1396823| 60621| -2188| 1403255
RTD| -2167| -208781| 1399323| 69453| -2188| 1403255
RTD| -2169| -211121| 1397365| 78267| -2188| 1403255
RTD| -2173| -210119| 1398349| 87084| -2188| 1403255
RTD| -2168| -208425| 1400764| 95922| -2188| 1403255
RTD| -2173| 211271| 1397470| 104678| -2188| 1403255
RTD| -2170| -208130| 1399794| 113508| -2188| 1403255
RTD| -2161| -208400| 1397968| 122334| -2188| 1403255
RTD| -2162| -211225| 1397581| 131139| -2188| 1403255
RTD| -2175| -210500| 1397274| 139951| -2188| 1403255
RTD| -2179| -207530| 1396890| 148781| -2188| 1403255
RTD| -2178| -207890| 1399275| 157611| -2188| 1403255
RTD| -2172| -207750| 1397386| 166432| -2188| 1403255
RTD| -2175| -206057| 1399763| 175260| -2188| 1403255
HSH|--param|--samples-|--average--|---stddev--
HSS|min|20| 2.000| 0.000
HSS|avg|205060| 90.399| 25.538
HSS|max|20| 99.000| 0.000
---|||||-
RTS| -2188| -170670| 1403255| 175260|00:01:00/00:01:00

Obviously, the numbers don't look so good. The test-duration comments still apply.

You should disable ACPI support if enabled, and especially everything related to CPUfreq scaling and power suspend.

Given that I'm not really developing any RT code, and I like the laptop quiet and cool, I'm inclined to keep the CPUfreq scaling on, at least for my everyday kernel. How much does CPUfreq invalidate results I might send (periodically)?

thanks
Re: [Xenomai-core] xenomai posix build errs
Gilles Chanteperdrix wrote:
Jim Cromie wrote:
>
> build error after selecting POSIX interface,
> on svn-head - ie 515
>
> [EMAIL PROTECTED] linux-2.6.15.1-ipipe-103-sonyI]$ make
>   CHK     include/linux/version.h
>   SPLIT   include/linux/autoconf.h -> include/config/*
>   CHK     include/linux/compile.h
>   CHK     usr/initramfs_list
>   CC [M]  kernel/xenomai/skins/posix/sched.o
> In file included from kernel/xenomai/skins/posix/../posix/internal.h:24,
>                  from kernel/xenomai/skins/posix/../posix/thread.h:23,
>                  from kernel/xenomai/skins/posix/sched.c:19:
> include/xenomai/posix/posix.h:43:19: error: errno.h: No such file or
> directory

Could you try re-running prepare-kernel.sh?

Yes, that worked. Thanks.

Odd thing is, it caused make to run 'make oldconfig', *and* it rejected XENO_* config items as unknown. Dunno why, but poking/rerunning it a few more times fixed things.

Sorry for the delay in answering.
[Xenomai-core] [patch] tweak scripts/prepare-kernel.sh to work with O=../linux-output
hi folks,

with this patch, you can run prepare-kernel.sh on a kernel output tree, at least once that tree contains the Makefile that the script looks for.

Index: scripts/prepare-kernel.sh
===
--- scripts/prepare-kernel.sh	(revision 550)
+++ scripts/prepare-kernel.sh	(working copy)
@@ -74,13 +74,14 @@
 done
 
 linux_tree=`cd $linux_tree && pwd`
+linux_out=$linux_tree
 
 if test \! -r $linux_tree/Makefile; then
    echo "$me: $linux_tree is not a valid Linux kernel tree"
    exit 2
 fi
 
-# Infere the default architecture if unspecified.
+# Infer the default architecture if unspecified.
 
 if test x$linux_arch = x; then
    build_arch=`$xenomai_root/config/config.guess`
@@ -144,6 +145,12 @@
     linux_arch=blackfin
 fi
 
+foo=`grep '^KERNELSRC:= ' $linux_tree/Makefile | cut -d= -f2`
+if [ ! -z $foo ] ; then
+    linux_tree=$foo
+fi
+unset foo
+
 eval linux_`grep '^EXTRAVERSION =' $linux_tree/Makefile | sed -e 's, ,,g'`
 eval linux_`grep '^PATCHLEVEL =' $linux_tree/Makefile | sed -e 's, ,,g'`
 eval linux_`grep '^SUBLEVEL =' $linux_tree/Makefile | sed -e 's, ,,g'`
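The idea behind the KERNELSRC hunk, pulled out as a standalone sketch (my helper name, and note the exact wording of the generated Makefile line can vary between kernel versions, so the grep pattern here just mirrors the patch's): a tree built with `make O=` carries a small generated Makefile that records the real source tree, and following that record lands the script back where the version variables live.

```shell
#!/bin/sh
# Sketch: given a tree the user pointed us at, follow a
# "KERNELSRC:= <srctree>" line (if present) back to the real source
# tree; a plain source tree has no such line and is returned as-is.
resolve_srctree() {             # $1: source tree OR `make O=` output tree
    tree=$1
    src=$(grep '^KERNELSRC:= ' "$tree/Makefile" | cut -d= -f2 | tr -d ' ')
    [ -n "$src" ] && tree=$src  # output tree: hop to the real one
    echo "$tree"                # (tr strips cut's leading space; paths
}                               # with embedded spaces aside)
```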
[Xenomai-core] Re: [Xenomai-help] Geode performance (Was: problems solved)
Jim Cromie wrote: attached is an update to xeno-test which runs latency with -t0 -t1 -t2 I think Ive got xeno-test and xeno-test.in sync'd, that doesnt seem to happen as part of the configure make process for me. this ones slightly improved, it greps XENO out of /proc/config.gz if its there. Fri Feb 24 20:39:14 MST 2006 running: zgrep XENO /proc/config.gz CONFIG_XENOMAI=y CONFIG_XENO_OPT_NUCLEUS=y CONFIG_XENO_OPT_PERVASIVE=y CONFIG_XENO_OPT_PIPE=y CONFIG_XENO_OPT_PIPE_NRDEV=32 CONFIG_XENO_OPT_SYS_HEAPSZ=128 # CONFIG_XENO_OPT_ISHIELD is not set CONFIG_XENO_OPT_STATS=y # CONFIG_XENO_OPT_DEBUG is not set # CONFIG_XENO_OPT_WATCHDOG is not set CONFIG_XENO_OPT_TIMING_PERIODIC=y CONFIG_XENO_OPT_TIMING_PERIOD=0 CONFIG_XENO_OPT_TIMING_TIMERLAT=0 CONFIG_XENO_OPT_TIMING_SCHEDLAT=0 # CONFIG_XENO_OPT_SCALABLE_SCHED is not set CONFIG_XENO_HW_FPU=y # CONFIG_XENO_HW_SMI_DETECT_DISABLE is not set CONFIG_XENO_HW_SMI_DETECT=y # CONFIG_XENO_HW_SMI_WORKAROUND is not set CONFIG_XENO_SKIN_NATIVE=y CONFIG_XENO_OPT_NATIVE_REGISTRY=y CONFIG_XENO_OPT_NATIVE_REGISTRY_NRSLOTS=512 CONFIG_XENO_OPT_NATIVE_PIPE=y CONFIG_XENO_OPT_NATIVE_PIPE_BUFSZ=4096 CONFIG_XENO_OPT_NATIVE_SEM=y CONFIG_XENO_OPT_NATIVE_EVENT=y CONFIG_XENO_OPT_NATIVE_MUTEX=y CONFIG_XENO_OPT_NATIVE_COND=y CONFIG_XENO_OPT_NATIVE_QUEUE=y CONFIG_XENO_OPT_NATIVE_HEAP=y CONFIG_XENO_OPT_NATIVE_ALARM=y CONFIG_XENO_OPT_NATIVE_MPS=y CONFIG_XENO_OPT_NATIVE_INTR=y CONFIG_XENO_SKIN_POSIX=m # CONFIG_XENO_SKIN_PSOS is not set # CONFIG_XENO_SKIN_UITRON is not set # CONFIG_XENO_SKIN_VRTX is not set # CONFIG_XENO_SKIN_VXWORKS is not set # CONFIG_XENO_SKIN_RTAI is not set CONFIG_XENO_SKIN_RTDM=m # CONFIG_XENO_SKIN_UVM is not set CONFIG_XENO_DRIVERS_16550A=m CONFIG_XENO_DRIVERS_TIMERBENCH=m Fri Feb 24 20:39:14 MST 2006 running: cat /proc/ipipe/Linux Priority=100, Id=0x irq0-15: accepted irq32-33: grabbed, virtual irq34: passed, virtual Fri Feb 24 20:39:14 MST 2006 Index: scripts/xeno-test.in === --- scripts/xeno-test.in(revision 591) +++ 
scripts/xeno-test.in(working copy) @@ -3,7 +3,7 @@ myusage() { cat >&1 < spawn N workloads (dd if=/dev/zero of=/dev/null) default=1 -d used as alternate src in workload (dd if=$device ..) The device must be mounted, and (unfortunately) cannot @@ -79,6 +79,8 @@ # static info, show once loudly cat /proc/cpuinfo | egrep -v 'bug|wp' loudly cat /proc/meminfo +[ -f /proc/config.gz ] && loudly zgrep XENO /proc/config.gz + [ -d /proc/adeos ] && for f in /proc/adeos/*; do loudly cat $f; done [ -d /proc/ipipe ] && for f in /proc/ipipe/*; do loudly cat $f; done } @@ -101,14 +103,11 @@ boxstatus ( cd ../testsuite/latency - #loudly ./run -- -T 10 -s -l 5 - loudly ./run -- -h $opts - [ -n "$prepost" ] && loudly $prepost - - cd ../klatency - #loudly ./run -- -T 10 -s -l 5 - loudly ./run -- -h $opts; + loudly ./run -- $opts -t0 + loudly ./run -- $opts -t1 + loudly ./run -- $opts -t2 + ) boxstatus } @@ -129,7 +128,7 @@ workload=1 # default = 1 job # *pass get all legit options, except -N, -L -pass= # pass thru to latency, klatency +pass= # pass thru to latency loadpass= # pass thru to subshell, not to actual tests # if both empty means no logging @@ -209,7 +208,7 @@ previous results. 3. added -p 'command', which runs command before, between, and after - the latency and klatency tests. + the latency tests. TODO: ___ Xenomai-core mailing list Xenomai-core@gna.org https://mail.gna.org/listinfo/xenomai-core
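For context, the log format shown earlier in this post ("<date> running: <cmd>") suggests a helper along these lines. This is a reconstruction guessed from the output, not the actual xeno-test code, which may differ:

```shell
# Plausible sketch of xeno-test's "loudly" helper: print a timestamp
# and the command line before running it, so each captured section of
# the log is self-describing.
loudly() {
    date
    echo "running: $*"
    "$@"
}

# example use, mirroring the patch's new line:
# [ -f /proc/config.gz ] && loudly zgrep XENO /proc/config.gz
loudly echo hello
```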
[Xenomai-core] 16550 compile err: ‘RTDM_IRQ_ENABLE’ undeclared (first use in this function)
LD      drivers/xenomai/16550A/built-in.o
CC [M]  drivers/xenomai/16550A/16550A.o
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c: In function ‘rt_16550_interrupt’:
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: ‘RTDM_IRQ_ENABLE’ undeclared (first use in this function)
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: (Each undeclared identifier is reported only once
/mnt/dilbert/jimc/dilbert/lxbuild/linux-2.6.15.1-ipipe-121/drivers/xenomai/16550A/16550A.c:269: error: for each function it appears in.)
make[4]: *** [drivers/xenomai/16550A/16550A.o] Error 1
make[3]: *** [drivers/xenomai/16550A] Error 2
make[2]: *** [drivers/xenomai] Error 2
make[1]: *** [drivers] Error 2
make: *** [_all] Error 2
I de-configured the 16550 driver and it then built fine, so I suspect some recent change missed this item. That said, I haven't tried _NOENABLE, since I'm guessing blind.
Re: [Xenomai-core] RTDM and Timer functions
Rodrigo Rosenfeld Rosas wrote: Actually, I noted a minor typo error in the documentation: " of the this service" should be " of this service" This is the kind of thing that you're encouraged to create a patch for. The same goes for other doc nits: you're offering 'something-like' suggestions, when you could almost as easily prep an actual patch with your best wording. Also, please start trimming much more aggressively; none of us needs to see 3 or 4 xenomai footers, and more yahoo ones besides. thanks
Re: [Xenomai-core] adding xenomai MLs to Gmane
Jeff Webb wrote: Jim wrote: Philippe, any objection to my requesting that Gmane.org add the xenomai MLs to their site ? This list has already been added. I found the archive at gmane the other day. In fact, your message has already been archived there! So it has. I just had to refresh the nntp list (though that's not the search interface). thanks
[Xenomai-core] some results on my laptop
some random sucesses .. Ive been running an ipipe kernel as the default since shortly after 1/7. Since then, Ive had a couple of freezes on boot, and sometimes bash's auto-complete takes longer to complete, but other than that, things have been solid. But that kernel wasnt configured using scripts/prepare-kernel.sh, so was missing the xeno_* modules. [EMAIL PROTECTED] latency]$ sudo ./run -- -T 120 -h * * * Type ^C to stop this application. * * == Sampling period: 100 us == Test mode: periodic user-mode task warming up... RTT| 00:00:05 (periodic user-mode task, 100 us period) RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst RTD| -4749| -4585| -3921| 0| -4749| -3921 RTD| -4749| -4583| -3015| 0| -4749| -3015 RTD| -4749| -4578| -2685| 0| -4749| -2685 RTD| -4750| -4581| -3015| 0| -4750| -2685 RTD| -4812| -4578| -3049| 0| -4812| -2685 RTD| -4757| -4584| -3785| 0| -4812| -2685 RTD| -4798| -4584| -2636| 0| -4812| -2636 RTD| -4757| -4582| -3029| 0| -4812| -2636 RTD| -4748| -4582| -3906| 0| -4812| -2636 RTD| -4751| -4582| -2666| 0| -4812| -2636 RTD| -4752| -4582| -2806| 0| -4812| -2636 RTD| -4748| -4581| -3157| 0| -4812| -2636 RTD| -4750| -4583| -2882| 0| -4812| -2636 RTD| -4779| -4583| -2816| 0| -4812| -2636 RTD| -4804| -4585| -2979| 0| -4812| -2636 RTD| -4747| -4583| -2684| 0| -4812| -2636 RTD| -4761| -4583| -2600| 0| -4812| -2600 RTD| -4763| -4583| -2782| 0| -4812| -2600 RTD| -4807| -4584| -2582| 0| -4812| -2582 RTD| -4787| -4582| -2721| 0| -4812| -2582 RTD| -4766| -4584| -2713| 0| -4812| -2582 RTT| 00:01:04 (periodic user-mode task, 100 us period) RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst RTD| -4750| -4584| -2936| 0| -4812| -2582 RTD| -4749| -4585| -2736| 0| -4812| -2582 RTD| -4814| -4586| -2935| 0| -4814| -2582 RTD| -4781| -4584| -2884| 0| -4814| -2582 RTD| -4749| -4579| -2743| 0| -4814| -2582 RTD| -4749| -4580| -2708| 0| -4814| -2582 RTD| -4750| -4585| -2870| 0| -4814| -2582 RTD| -4748| -4584| -2761| 0| -4814| -2582 RTD| -4766| -4585| 
-3152| 0| -4814| -2582 RTD| -4755| -4584| -3001| 0| -4814| -2582 RTD| -4749| -4584| -2810| 0| -4814| -2582 RTD| -4748| -4584| -2935| 0| -4814| -2582 RTD| -4754| -4584| -2734| 0| -4814| -2582 RTD| -4789| -4586| -3109| 0| -4814| -2582 RTD| -4765| -4583| -2835| 0| -4814| -2582 RTD| -4751| -4584| -2767| 0| -4814| -2582 RTD| -4750| -4582| -2831| 0| -4814| -2582 RTD| -4750| -4585| -2829| 0| -4814| -2582 RTD| -4782| -4584| -2751| 0| -4814| -2582 RTD| -4750| -4584| -2851| 0| -4814| -2582 ---|--param|range-|--samples HSD|min| 4 - 5 | 41 ---|--param|range-|--samples HSD|avg| 2 - 3 | 60 HSD|avg| 3 - 4 | 429 HSD|avg| 4 - 5 | 416620 ---|--param|range-|--samples HSD|max| 2 - 3 | 30 HSD|max| 3 - 4 | 11 HSH|--param|--samples-|--average--|---stddev-- HSS|min|41| 4.000| 0.000 HSS|avg|417109| 3.999| 0.040 HSS|max|41| 2.268| 0.449 ---|||||- RTS| -4814| -4583| -2582| 0|00:02:00/00:02:00 This was run on a pentium-M laptop, with cpu-clock running at 600 MHz, (capable of 1.7 GHz) I presume this might explain the negative latancies. Im aware this is un-supported .. The only thing that looks wrong is the test-duration. I asked for 120 sec, it gave me 40 samples. The test did take 120 to run. heres
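RTD rows like those above are easy to digest with a short post-processor; this sketch feeds three sample rows copied from the run through awk, using the column layout from the "RTH|-lat min|-lat avg|-lat max|..." header line:

```shell
# Summarize latency RTD rows: field 2 is lat min, 3 is lat avg,
# 4 is lat max (fields split on '|').
cat > /tmp/lat-demo.log <<'EOF'
RTD| -4749| -4585| -3921| 0| -4749| -3921
RTD| -4749| -4583| -3015| 0| -4749| -3015
RTD| -4749| -4578| -2685| 0| -4749| -2685
EOF

awk -F'|' '/^RTD/ {
    n++
    if (n == 1 || $2 < min) min = $2   # overall best (lowest) min
    if (n == 1 || $4 > max) max = $4   # overall worst (highest) max
    sum += $3
} END { printf "min=%d avg=%d max=%d n=%d\n", min, sum/n, max, n }' /tmp/lat-demo.log
# -> min=-4749 avg=-4582 max=-2685 n=3
```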
[Xenomai-core] xenomai posix build errs
build error after selecting POSIX interface, on svn-head - ie 515 [EMAIL PROTECTED] linux-2.6.15.1-ipipe-103-sonyI]$ make CHK include/linux/version.h SPLIT include/linux/autoconf.h -> include/config/* CHK include/linux/compile.h CHK usr/initramfs_list CC [M] kernel/xenomai/skins/posix/sched.o In file included from kernel/xenomai/skins/posix/../posix/internal.h:24, from kernel/xenomai/skins/posix/../posix/thread.h:23, from kernel/xenomai/skins/posix/sched.c:19: include/xenomai/posix/posix.h:43:19: error: errno.h: No such file or directory include/xenomai/posix/posix.h:44:21: error: pthread.h: No such file or directory include/xenomai/posix/posix.h:45:19: error: sched.h: No such file or directory include/xenomai/posix/posix.h:46:20: error: signal.h: No such file or directory include/xenomai/posix/posix.h:47:23: error: semaphore.h: No such file or directory include/xenomai/posix/posix.h:48:20: error: mqueue.h: No such file or directory include/xenomai/posix/posix.h:49:18: error: time.h: No such file or directory include/xenomai/posix/posix.h:50:19: error: fcntl.h: No such file or directory include/xenomai/posix/posix.h:51:20: error: unistd.h: No such file or directory include/xenomai/posix/posix.h:52:22: error: sys/mman.h: No such file or directory include/xenomai/posix/posix.h:53:23: error: sys/ioctl.h: No such file or directory include/xenomai/posix/posix.h:54:24: error: sys/socket.h: No such file or directory In file included from kernel/xenomai/skins/posix/sched.c:19: kernel/xenomai/skins/posix/../posix/thread.h:61: error: syntax error before ‘pthread_attr_t’ kernel/xenomai/skins/posix/../posix/thread.h:61: warning: no semicolon at end of struct or union kernel/xenomai/skins/posix/../posix/thread.h:72: error: syntax error before ‘:’ token kernel/xenomai/skins/posix/../posix/thread.h:73: error: syntax error before ‘:’ token kernel/xenomai/skins/posix/../posix/thread.h:74: error: syntax error before ‘:’ token kernel/xenomai/skins/posix/../posix/thread.h:86: error: 
‘PTHREAD_KEYS_MAX’ undeclared here (not in a function) kernel/xenomai/skins/posix/../posix/thread.h:94: error: syntax error before ‘}’ token kernel/xenomai/skins/posix/../posix/thread.h:139: error: syntax error before ‘thread’ kernel/xenomai/skins/posix/../posix/thread.h:139: warning: function declaration isn’t a prototype kernel/xenomai/skins/posix/../posix/thread.h: In function ‘thread_cancellation_point’: kernel/xenomai/skins/posix/../posix/thread.h:143: error: ‘pthread_t’ undeclared (first use in this function) kernel/xenomai/skins/posix/../posix/thread.h:143: error: (Each undeclared identifier is reported only once kernel/xenomai/skins/posix/../posix/thread.h:143: error: for each function it appears in.) kernel/xenomai/skins/posix/../posix/thread.h:143: error: syntax error before ‘cur’ kernel/xenomai/skins/posix/../posix/thread.h:143: error: ‘_taddr’ undeclared (first use in this function) kernel/xenomai/skins/posix/../posix/thread.h:143: error: invalid use of undefined type ‘struct pse51_thread’ kernel/xenomai/skins/posix/../posix/thread.h: At top level: kernel/xenomai/skins/posix/../posix/thread.h:143: error: syntax error before ‘)’ token kernel/xenomai/skins/posix/sched.c: In function ‘sched_get_priority_min’: kernel/xenomai/skins/posix/sched.c:28: error: ‘SCHED_OTHER’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c: In function ‘sched_get_priority_max’: kernel/xenomai/skins/posix/sched.c:46: error: ‘SCHED_OTHER’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c: At top level: kernel/xenomai/skins/posix/sched.c:72: error: syntax error before ‘tid’ kernel/xenomai/skins/posix/sched.c:74: warning: function declaration isn’t a prototype kernel/xenomai/skins/posix/sched.c: In function ‘pthread_getschedparam’: kernel/xenomai/skins/posix/sched.c:79: error: ‘tid’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c:85: error: ‘pol’ undeclared (first use in this function) 
kernel/xenomai/skins/posix/sched.c:86: error: ‘par’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c: At top level: kernel/xenomai/skins/posix/sched.c:93: error: syntax error before ‘tid’ kernel/xenomai/skins/posix/sched.c:95: warning: function declaration isn’t a prototype kernel/xenomai/skins/posix/sched.c: In function ‘pthread_setschedparam’: kernel/xenomai/skins/posix/sched.c:101: error: ‘tid’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c:107: error: ‘pol’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c:120: error: ‘SCHED_OTHER’ undeclared (first use in this function) kernel/xenomai/skins/posix/sched.c:131: error: ‘par’ undeclared (first use in this function) make[4]: *** [kernel/xenomai/skins/posix/sched.o] Error 1 make[3]: *** [kernel/xenomai/skins/posix] Error 2 make[2]: *** [kernel/xenomai/skins] Error 2 make[1]: *** [kernel/xenomai] Error 2 make: *** [kernel] Error 2
Re: [Xenomai-core] some results on my laptop
Philippe Gerum wrote: I've been running an ipipe kernel as the default since shortly after 1/7. Since then, I've had a couple of freezes on boot, and sometimes bash's auto-complete takes longer to complete, Eh? Maybe the CONFIG_PCI_MSI syndrome again? Um, does this tell you anything ? $ zcat /proc/config.gz | grep PCI_MSI # CONFIG_PCI_MSI is not set I noticed the slow completion when doing some heavy disk stuff, lndir on a kernel tree, and probably diff -r on 2 kernel trees, so the laptop was pretty busy. This was run on a pentium-M laptop, with the cpu clock running at 600 MHz (capable of 1.7 GHz). I presume this might explain the negative latencies. I'm aware this is unsupported .. The negative values are just there because even at 600 MHz, the timing anticipation applied by the nucleus to compensate for the intrinsic latency of the box is too high; i.e. the nucleus performs a bit too well latency-wise, so the anticipated timer ticks end up being a bit early on schedule. IOW, all is fine. Given the figures above, you could probably reduce the anticipation factor by setting the CONFIG_XENO_HW_SCHED_LATENCY (Machine menu) parameter to, say, 2500 nanoseconds (the default null value tells the nucleus to use the pre-calibrated value, which might be higher than this for your setup). Ok. With latencies == 0, calibration happens at runtime, so it would reflect the current workload (and, with cpufreq on, also the current clock frequency), correct ? Btw, I'm not sure if you enabled the local APIC in your kernel config; if you did not, you should: there is no reason to keep using the braindamaged 8254 PIT when a LAPIC is available with your CPU. Ok, now running this. zcat /proc/config.gz | grep APIC CONFIG_X86_GOOD_APIC=y CONFIG_X86_UP_APIC=y CONFIG_X86_UP_IOAPIC=y CONFIG_X86_LOCAL_APIC=y CONFIG_X86_IO_APIC=y [EMAIL PROTECTED] latency]$ sudo ./run -- -T60 -s -t1 Password: * * * Type ^C to stop this application.
* * == Sampling period: 100 us == Test mode: in-kernel periodic task warming up... RTT| 00:00:05 (in-kernel periodic task, 100 us period) RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst RTD| -2165| 127216| 1399358|7582| -2165| 1399358 RTD| -2188| -205817| 1403255| 16431| -2188| 1403255 RTD| -2178| -207605| 1399828| 25271| -2188| 1403255 RTD| -2186| -208602| 1402253| 34096| -2188| 1403255 RTD| -2163| -206835| 1399144| 42943| -2188| 1403255 RTD| -2175| -206848| 1401346| 51785| -2188| 1403255 RTD| -2184| -210270| 1396823| 60621| -2188| 1403255 RTD| -2167| -208781| 1399323| 69453| -2188| 1403255 RTD| -2169| -211121| 1397365| 78267| -2188| 1403255 RTD| -2173| -210119| 1398349| 87084| -2188| 1403255 RTD| -2168| -208425| 1400764| 95922| -2188| 1403255 RTD| -2173| 211271| 1397470| 104678| -2188| 1403255 RTD| -2170| -208130| 1399794| 113508| -2188| 1403255 RTD| -2161| -208400| 1397968| 122334| -2188| 1403255 RTD| -2162| -211225| 1397581| 131139| -2188| 1403255 RTD| -2175| -210500| 1397274| 139951| -2188| 1403255 RTD| -2179| -207530| 1396890| 148781| -2188| 1403255 RTD| -2178| -207890| 1399275| 157611| -2188| 1403255 RTD| -2172| -207750| 1397386| 166432| -2188| 1403255 RTD| -2175| -206057| 1399763| 175260| -2188| 1403255 HSH|--param|--samples-|--average--|---stddev-- HSS|min|20| 2.000| 0.000 HSS|avg|205060| 90.399| 25.538 HSS|max|20| 99.000| 0.000 ---|||||- RTS| -2188| -170670| 1403255| 175260|00:01:00/00:01:00 Obviously, the numbers dont look so good. The test duration comments still apply. You should disable the ACPI support if enabled, and especially everything related to the CPUfreq scaling and power suspend. Given that Im not really developing any RT code, and I like the laptop quiet and cool, Im inclined to keep the CPUfreq scaling on, at least for my everyday kernel. How much does CPUfreq invalidate results I might send (periodically) thanks
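Since cpufreq scaling can skew latency figures as discussed above, a quick pre-run sanity check might look like this. The sysfs path is the standard cpufreq governor node; whether it exists depends on the kernel config and hardware:

```shell
# Warn when cpufreq scaling is active, since frequency changes mid-run
# distort latency numbers; report which governor is in control.
gov=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov" ]; then
    echo "cpufreq governor: `cat $gov` -- results may vary with clock speed"
else
    echo "no cpufreq scaling detected"
fi
```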
Re: [Xenomai-core] xenomai posix build errs
Gilles Chanteperdrix wrote: Jim Cromie wrote: > > build error after selecting POSIX interface, > on svn-head - ie 515 > > [EMAIL PROTECTED] linux-2.6.15.1-ipipe-103-sonyI]$ make > CHK include/linux/version.h > SPLIT include/linux/autoconf.h -> include/config/* > CHK include/linux/compile.h > CHK usr/initramfs_list > CC [M] kernel/xenomai/skins/posix/sched.o > In file included from kernel/xenomai/skins/posix/../posix/internal.h:24, > from kernel/xenomai/skins/posix/../posix/thread.h:23, > from kernel/xenomai/skins/posix/sched.c:19: > include/xenomai/posix/posix.h:43:19: error: errno.h: No such file or > directory Could you try re-running prepare-kernel.sh ? yes, that worked. thanks, Odd thing is, it caused a make to run 'make oldconfig', *and* it rejected XENO_* config items as unknown. Dunno why, but poking/rerunning it a few more times fixed things. sorry for the delay in anwering.
[Xenomai-core] [patch] tweak scripts/prepare-kernel.sh to work with O=../linux-output
hi folks, with this patch, you can run prepare-kernel.sh on a kernel output tree, at least once that tree contains the Makefile that the script looks for. Index: scripts/prepare-kernel.sh === --- scripts/prepare-kernel.sh (revision 550) +++ scripts/prepare-kernel.sh (working copy) @@ -74,13 +74,14 @@ done linux_tree=`cd $linux_tree && pwd` +linux_out=$linux_tree if test \! -r $linux_tree/Makefile; then echo "$me: $linux_tree is not a valid Linux kernel tree" exit 2 fi -# Infere the default architecture if unspecified. +# Infer the default architecture if unspecified. if test x$linux_arch = x; then build_arch=`$xenomai_root/config/config.guess` @@ -144,6 +145,12 @@ linux_arch=blackfin fi +foo=`grep '^KERNELSRC:= ' $linux_tree/Makefile | cut -d= -f2` +if [ ! -z $foo ] ; then +linux_tree=$foo +fi +unset foo + eval linux_`grep '^EXTRAVERSION =' $linux_tree/Makefile | sed -e 's, ,,g'` eval linux_`grep '^PATCHLEVEL =' $linux_tree/Makefile | sed -e 's, ,,g'` eval linux_`grep '^SUBLEVEL =' $linux_tree/Makefile | sed -e 's, ,,g'`
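The redirection the patch performs can be sketched in isolation. Here a throwaway directory stands in for an O= output tree, whose generated Makefile points back at the real source tree; the paths are invented, and the exact KERNELSRC line format depends on the kbuild version:

```shell
# Simulate an O= output tree: its Makefile carries a KERNELSRC line
# naming the real source tree (both paths here are hypothetical).
mkdir -p /tmp/linux-out-demo /tmp/linux-src-demo
printf 'KERNELSRC := /tmp/linux-src-demo\n' > /tmp/linux-out-demo/Makefile

linux_tree=/tmp/linux-out-demo
# Follow the pointer, as the patch does; quoting "$src" avoids the
# word-splitting pitfall of the patch's unquoted [ ! -z $foo ] test.
src=`grep '^KERNELSRC' $linux_tree/Makefile | sed -e 's,^[^=]*=[ ]*,,'`
if [ -n "$src" ]; then
    linux_tree=$src
fi
echo "$linux_tree"
# -> /tmp/linux-src-demo
```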
[Xenomai-core] heres a go at an adeos-ipipe-2.6.15-i386-1.1-01.patch
hi Phillipe, everyone, happy 06 ! Out of curiosity, I applied adeos-ipipe-2.6.14-i386-1.1-01.patch on top of 15. the rejects were small, and simple enough looking, that even a lazy sod like myself might manually fix them, so I did. whats more, it built clean and booted ! I havent done anything more demanding than ls, df, etc, but hey, low hanging fruit tastes just as good / even better ;-) So heres hoping that you've not started this particular thankless task, and Ive saved your cycles for something more dependent on your particular talents. enjoy. jimc diff.try-15-ipipe-101.20060104.170829.bz2 Description: application/bzip ./arch/i386/kernel/io_apic.c.rej ./include/linux/preempt.h.rej ./init/main.c.rej ./kernel/irq/handle.c.rej ./kernel/Makefile.rej *** *** 1313,1322 /* * Add it to the IO-APIC irq-routing table: */ - spin_lock_irqsave(&ioapic_lock, flags); io_apic_write(0, 0x11+2*pin, *(((int *)&entry)+1)); io_apic_write(0, 0x10+2*pin, *(((int *)&entry)+0)); - spin_unlock_irqrestore(&ioapic_lock, flags); enable_8259A_irq(0); } --- 1315,1324 /* * Add it to the IO-APIC irq-routing table: */ + spin_lock_irqsave_hw(&ioapic_lock, flags); io_apic_write(0, 0x11+2*pin, *(((int *)&entry)+1)); io_apic_write(0, 0x10+2*pin, *(((int *)&entry)+0)); + spin_unlock_irqrestore_hw(&ioapic_lock, flags); enable_8259A_irq(0); } *** *** 13,53 extern void fastcall add_preempt_count(int val); extern void fastcall sub_preempt_count(int val); #else - # define add_preempt_count(val) do { preempt_count() += (val); } while (0) - # define sub_preempt_count(val) do { preempt_count() -= (val); } while (0) #endif - #define inc_preempt_count() add_preempt_count(1) - #define dec_preempt_count() sub_preempt_count(1) - #define preempt_count() (current_thread_info()->preempt_count) #ifdef CONFIG_PREEMPT asmlinkage void preempt_schedule(void); - #define preempt_disable() \ - do { \ - inc_preempt_count(); \ - barrier(); \ } while (0) - #define preempt_enable_no_resched() \ - do { \ - barrier(); \ - 
dec_preempt_count(); \ } while (0) - #define preempt_check_resched() \ - do { \ - if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \ - preempt_schedule(); \ } while (0) - #define preempt_enable() \ - do { \ - preempt_enable_no_resched(); \ - preempt_check_resched(); \ } while (0) #else --- 13,70 extern void fastcall add_preempt_count(int val); extern void fastcall sub_preempt_count(int val); #else + #define add_preempt_count(val)do { preempt_count() += (val); } while (0) + #define sub_preempt_count(val)do { preempt_count() -= (val); } while (0) #endif + #define inc_preempt_count() add_preempt_count(1) + #define dec_preempt_count() sub_preempt_count(1) + #define preempt_count() (current_thread_info()->preempt_count) #ifdef CONFIG_PREEMPT asmlinkage void preempt_schedule(void); + #ifdef CONFIG_IPIPE + + #include + + extern struct ipipe_domain *ipipe_percpu_domain[], *ipipe_root_domain; + + #define ipipe_preempt_guard() (ipipe_percpu_domain[ipipe_processor_id()] == ipipe_root_domain) + #else + #define ipipe_preempt_guard() 1 + #endif + + #define preempt_disable() \ + do { \ + if (ipipe_preempt_guard()) {\ + inc_preempt_count();\ + barrier(); \ + } \ } while (0) + #define preempt_enable_no_resched() \ + do { \ + if (ipipe_preempt_guard()) {\ + barrier(); \ + dec_preempt_count();\ + } \ } while (0) + #define preempt_check_resched() \ + do { \ + if (ipipe_preempt_guard()) {\ + if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \ + preempt_schedule(); \ + } \ } while (0) + #define preempt_enable() \ + do { \ + preempt_enab
Re: [Xenomai-core] heres a go at an adeos-ipipe-2.6.15-i386-1.1-01.patch
Kent Borg wrote: Jim Cromie posted a patch attempt for 2.6.15 (yeah!), and the patch applied, but it doesn't compile for me: [...]
LD      init/built-in.o
LD      .tmp_vmlinux1
arch/i386/kernel/built-in.o: In function `__ipipe_sync_stage': : undefined reference to `ret_from_intr'
arch/i386/kernel/built-in.o: In function `__ipipe_sync_stage': : undefined reference to `ret_from_intr'
make: *** [.tmp_vmlinux1] Error 1
~/linux-2.6.15$
For a .config I started with the stock Ubuntu 2.6.12-10-686 config file and then took the defaults for all the oldconfig questions. Suggestions? You get to keep both pieces ? ;-) FWIW, the kernel was still running on my soekris 4801 till just now. (I rebooted.) Most of that time it was without its NFS root fs; my laptop was unconnected. It was doing *no* work of any kind though. Not that this helps... I'm trying a kernel build on my sony laptop pentium M, a different config than yours, but fuller than the soekris. It's running now, I'm typing on it. wifi card works too ! I've attached my working config - it might get you going. Please report back what made your config not work, once you find it. :: ipipe/Linux :: Priority=100, Id=0x irq0-15: accepted irq32: grabbed, virtual :: ipipe/version :: 1.1-01 FWIW, I diffed the 14 patch against mine, and was puzzled at the large textual diffs. Guessed that it was a file-ordering difference in the tar, and then forgot to mention this at send. This seems kinda odd, since I'm running linux. Philippe, are you running BSD ? Are you creating patches from an fs other than ext3 ? That could explain the ordering. If not, I'm stumped. Maybe it's an svn thing; they have a Berkeley-DB-as-fs, don't they ? hth, jimc Also, FWIW, I've been reading LKML, and it appears that Ingo Molnar's Mutex patches have turned the corner with Linux.
They're not in, and I've got no crystal ball, but I suspect they will get into 17 or 16. A good writeup for the regular folks (like me) on this list is here: http://lwn.net/Articles/164380/ config.gz Description: GNU Zip compressed data
Re: [Xenomai-core] heres a go at an adeos-ipipe-2.6.15-i386-1.1-01.patch
Philippe Gerum wrote: You may want to try this one: http://download.gna.org/adeos/patches/v2.6/i386/adeos-ipipe-2.6.15-i386-1.1-03.patch although Im not surprised, I feel like telling someone, [EMAIL PROTECTED] ~]$ uname -a Linux harpo.jimc.earth 2.6.15-ipipe-103-sony #1 Sat Jan 7 13:54:09 MST 2006 i686 i686 i386 GNU/Linux [EMAIL PROTECTED] ~]$ is NFS root for .. soekris:~# uname -a Linux soekris 2.6.15-ipipe-103-sk #3 Sat Jan 7 13:42:06 MST 2006 i586 GNU/Linux soekris:~# soekris:~# df Filesystem 1K-blocks Used Available Use% Mounted on 192.168.42.1:/nfshost/soekris 20158372 14249292 4885080 75% / tmpfs63268 0 63268 0% /dev/shm /dev/hda1 484602268767190813 59% /mnt/flash 192.168.42.1:/boot20158400 14249312 4885088 75% /boot 192.168.42.1:/lib/modules 20158400 14249312 4885088 75% /lib/modules 192.168.42.1:/media/cdrecorder 20158400 14249312 4885088 75% /mnt/cd 192.168.42.1:/home20158400 14249312 4885088 75% /home 192.168.42.1:/mnt/dilbert 15638816 11716256 3128128 79% /mnt/dilbert 192.168.42.1:/usr/xenomai 20158400 14249312 4885088 75% /usr/xenomai 192.168.42.1:/home/jimc/dilbert/pirt 15638816 11716256 3128128 79% /mnt/pirt woohoo! I just diffed my-1.01 and real-1.03, it looks like I missed a bunch of these: > - spin_unlock_irqrestore(&ioapic_lock, flags); > + spin_unlock_irqrestore_hw(&ioapic_lock, flags); did I get lucky ? or is it cuz Im not SMP ? or cuz my sony has no APIC (as distinct from ACPI) ? do any PCs have an APIC, or is that something for servers / hi-end or embedded ? BIOS-provided physical RAM map: BIOS-e820: - 0009fc00 (usable) BIOS-e820: 0009fc00 - 000a (reserved) BIOS-e820: 000e - 0010 (reserved) BIOS-e820: 0010 - 1ff4 (usable) BIOS-e820: 1ff4 - 1ff5 (ACPI data) BIOS-e820: 1ff5 - 2000 (ACPI NVS) 511MB LOWMEM available. On node 0 totalpages: 130880 DMA zone: 4096 pages, LIFO batch:0 DMA32 zone: 0 pages, LIFO batch:0 Normal zone: 126784 pages, LIFO batch:31 HighMem zone: 0 pages, LIFO batch:0 DMI present. 
ACPI: RSDP (v000 SONY ) @ 0x000f53f0 ACPI: RSDT (v001 SONY F1 0x20040323 MSFT 0x0097) @ 0x1ff4 ACPI: FADT (v002 SONY F1 0x20040323 MSFT 0x0097) @ 0x1ff40200 ACPI: OEMB (v001 SONY F1 0x20040323 MSFT 0x0097) @ 0x1ff50040 ACPI: DSDT (v001 SONY F1 0x20040323 MSFT 0x010d) @ 0x ACPI: PM-Timer IO Port: 0x408 Allocating PCI resources starting at 3000 (gap: 2000:e000) Built 1 zonelists Kernel command line: ro root=LABEL=/ Initializing CPU#0 PID hash table entries: 2048 (order: 11, 32768 bytes) Detected 1694.791 MHz processor. Using pmtmr for high-res timesource I-pipe 1.1-03: pipeline enabled. BTW, what happened to 1.01 and 1.02 ? tia jimc
[Xenomai-core] Re: Benchmarking Plan
Philippe Gerum wrote: This is a partial roadmap for the project, composed of the currently Ah! I just _knew_ you would jump in as expected. The teasing worked :o) well done ! It's the mark of a great leader to get folks to do what he wants, while making them think it's their idea ;-) (and I imagine that's why you cc'd Takis too :-) [lots of snippage, throughout]

LiveCD has a few weaknesses though:

- can't test platforms w/o cdrom. I also think that's a serious issue. Aside from the hw availability problem (e.g. non-x86 eval boards), having to burn the CD is one step too many when time is a scarce resource. It often prevents running it as a fast check procedure even in the absence of any noticeable problem. IOW, you won't burn a CD to run the tests unless you are really stuck with some issue. So a significant part of the interest of having a generic testsuite is lost: you just don't discover potential problems before the serious breakage is already in the wild. One thing that would help expand LiveCD's usefulness is to be able to:
  - mount pirt.iso in loopback on a host (my laptop),
  - export it via NFS to the box-under-test,
  - use pxelinux to feed LiveCD's kernel(s?) to the box when it boots.
  I tried to do this, and IIRC ran into trouble with absolute symlinks from /etc.ro to /etc. The absoluteness fouls things when the ISO is mounted on, for example, /media/cd. I poked a bit at trying to convince NFS to resolve them as if they were used within a chroot jail, but I don't know enough about that.
- manual re-entry of data is tedious,
- no collection of platform data (available for automation) - spotty info about cpu, memory, mobo, etc., which is largely user-supplied, so it can be wrong.
- no unattended test (still true?)
- unfiltered preposterous data. Sometimes the data sent are just rubbish, because of well-known hw-related dysfunction or misuse of the LiveCD. This perturbs the results uselessly. Any ideas on how to reject these outliers ? (defer till we have statistical analysis in place ?)
- difficulties so far to really get sensible digested information out of the zillions of results, aside from very general figures (e.g. best performer). But this is more an issue of lack of data post-processors than of the LiveCD infrastructure itself. Yep. And we *need* platform data to start to categorize results by platform, important config choices, etc. We should see narrower ranges of results, and be more able to reject the junk.

Additionally, LiveCD is a really great tool when it comes to helping people figure out whether their respective box or brain has a problem with the tested software; i.e. by automatically providing a sane software (kernel+rtos) configuration and the proper way to run it quite easily, a number of people could determine if their current lack of luck comes from their software configuration, or rather from a more serious problem. Yeah, a pre-built world saves a lot of early thrashing.

- testsuite/cruncher ? The cruncher measures the impact of using the interrupt shield, but this setting is now configured out by default since a majority of people don't currently need it. Shield cost/performance is still useful to know though. OK, adding 1 call to cruncher is simple. Over time we *may* collect enough data to make some A (shields up!) vs B (shields down!) comparisons. But I don't see the data to distinguish A and B; don't we need the xeno/ipipe equivalent of /proc/config.gz to do this ? Wrt the testsuite/README cruncher notes, is this useful info ? (manual insmods here...)

soekris:/usr/realtime/2.6.14-ski9-v1/testsuite/cruncher# cruncher
Calibrating cruncher...11773, done -- ideal computation time = 10023 us.
1000 samples, 1000 hz freq (pid=4183, policy=SCHED_FIFO, prio=99)
Nanosleep jitter: min = 60 us, max = 192 us, avg = 77 us
Execution jitter: min = 39 us (0%), max = 72 us (0%), avg = 51 us (0%)
Segmentation fault
soekris:/usr/realtime/2.6.14-ski9-v1/testsuite/cruncher# run
* * * Type ^C to stop this application.
* *
Calibrating cruncher...11769, done -- ideal computation time = 10018 us.
1000 samples, 1000 hz freq (pid=4260, policy=SCHED_FIFO, prio=99)
Nanosleep jitter: min = 62 us, max = 195 us, avg = 79 us
Execution jitter: min = 46 us (0%), max = 77 us (0%), avg = 57 us (0%)

2. send your results to xenomai.testout-at-gmail.com

Obviously, an official gna.org ML might be more appropriate. Will appear soon.

Should this wait til xeno-test is upgraded to produce good data? i.e. prevent early bogus data from being submitted.

As said before, the problem that currently exists with LiveCD's data is that the results are crippled with irrelevant stuff, either because some people just tried it out over a simulator (ahem...), or had a serious hw-generated latency issue that basically made the whole run useless (mostly x86 issues: e.g. SMI stuff, legacy USB emulation, powermgmt, cpufreq arte
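As an aside, the loopback-mount + NFS + pxelinux arrangement sketched earlier in this thread might look roughly like the following config fragments; the mount point, TFTP layout, and kernel/initrd names are all hypothetical (only the subnet matches the boot logs seen later in this archive):

```
# /etc/exports on the host, after something like:
#   mount -o loop pirt.iso /mnt/pirt
/mnt/pirt  192.168.42.0/24(ro,no_root_squash)

# /srv/tftp/pxelinux.cfg/default, serving the LiveCD kernel over PXE:
DEFAULT livecd
LABEL livecd
    KERNEL vmlinuz-livecd
    APPEND initrd=initrd-livecd.img root=/dev/nfs nfsroot=192.168.42.1:/mnt/pirt ip=dhcp
```

This does nothing about the absolute-symlink problem described above; it only shows where the pieces would sit.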
[Xenomai-core] xeno-test etc
folks,

I've been tinkering with xeno-test, adding a bunch of platform-info to support comparison of results from various platforms submitted by different xenomai users:

- cat /proc/config.gz if -f /proc/config.gz
- cat /proc/cpuinfo
- cat /proc/meminfo
- cat /proc/adeos/* foreach /proc/adeos/*
- cat /proc/ipipe/* foreach /proc/ipipe/*
- xeno-config --v
- xeno-info
- (uname -a is available in xeno-config or xeno-info, so it isn't needed separately)

However, I've gotten a bit bogged down in the workload-mgmt parts; they don't work quite the way I'd like, and bash is tedious to do job control in scripts.

What I want: support for 2 separate test-scenarios, described by the latency cmdline options:

if ( -T X>0 ):
  workload job termination is detected and the job is restarted.
  - keeps workload conditions uniform for the duration of the test
  - not needed for the default workload - dd if=/dev/zero never finishes.
  - needed for if=/dev/hda1, since partitions are finite. (real devices produce interrupts, so they make a better/harder test)

if ( -w1 and -T 0 ):
  workload termination should end the currently running latency-test.
  - the runtime of the latency test can be realistically compared to the same workload running normally.
  - this sort-of turns the test inside-out; the workload becomes the 'goal' and the latency tests are the load.

There are 2 conflicting forces (in the GOF sense) driving my thinking wrt this script:
- we want to support busybox, /bin/ash
- we want the above features (which I haven't gotten working in bash/ash yet)
- ash doesn't support several bash features, including at least 1 used in xeno-test (array vars)
- we want more features ??

Given the tedium of fixing the bash-script bugs, I ended up prepping 2 new experiments:

- ripped most bash code out, leaving only the job-control stuff. tinkered with it, but it still has problems.
- wrote an 'equivalent' (to the above) perl version which does job-control (seems ok). The perl version can run arbitrary bash loops also: not just 'dd if=/dev/zero of=/dev/null' but also 'while true; do echo hey $$; sleep 5; done' or 'cd ../../lmbench; make rerun; done'

The ash version: AFAICT, the sticking point is waiting for the work-load tasks; the shell's wait is a blocking call, so I can't use it to catch individual workload exits, but I can't wait for all 3 workloads to end before restarting any of them (load uniformity). Trapping SIGCHLD almost works; I can't recover the child pid in the handler, but perhaps I don't need it. When I test using a dd workload, I'm getting spurious signals, and the sig-handler dumbly restarts it; but without the pid, it's hard to know whether the signalling process is really dying, or something else (which is partly what happens). The bad behavior I'm seeing now is that the sig-handler fires every 5 sec, in the while 1 { sleep 5 } loop. This suggests that I'm missing something important wrt the signals.

SO:
0. is the inside-out test scenario compelling?
1. can anyone see what's wrong with the ash version?
2. do I need an intermediate 'restart & wait' process to restart each (possibly finite) workload, so the main process can wait on all its children together (block til they all return)?
3. can someone see a simpler way?
4. if the bash script can't be fixed (seems unlikely), do we want a perl version too?
5. umm tia

jimc

PS. with all the hard work going on, I feel a bit lazy sending 2 semi-broken script-snippets, but.. well, I *am* lazy. I'm also sending a semi-working version of xeno-test, as promised weeks ago. Pls don't apply, but give it a look-see. One 'controversial' addition is POD (plain old documentation). I think it's readable as it is, and it has the virtue of not being in a separate file, so it's easier to maintain. For a little flame-bait, I added a -Z option, which gives extended help (-H is taken by latency).

PPS.
long options would be nice, but are unsupported by getopts. To use them, we'd need to do so in both xeno-test and the *latency progs, since xeno-test passes latency options thru when it invokes *latency. Anyone seen a version that does long options, and would work on ash & bash?

ok, enough prattling.

Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in (revision 91)
+++ scripts/xeno-test.in (working copy)
@@ -7,8 +7,8 @@
 -w spawn N workloads (dd if=/dev/zero of=/dev/null) default=1
 -d used as alternate src in workload (dd if=$device ..)
    The device must be mounted, and (unfortunately) cannot
-   be an NFS mount a real device (ex /dev/hda) will
-   generate interrupts
+   be an NFS mount. A real device (ex /dev/hda) will
+   generate interrupts, /dev/zero,null will not.
 -W
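Regarding question 2 above: a per-workload supervisor loop does let a plain POSIX shell (ash or bash, no arrays, no SIGCHLD tricks) restart finite workloads while the main shell blocks in a single wait. A minimal sketch, with a trivially finite stand-in command and a restart counter added purely so the behavior is observable and bounded:

```shell
#!/bin/sh
# Sketch of the 'restart & wait' idea: each (possibly finite) workload
# gets its own supervisor subshell that restarts it when it exits, so
# the main shell can simply `wait` for all supervisors together.
# The counter file and the 3-restart bound exist only for this demo.
count=$(mktemp)

supervise() {
    while :; do
        sh -c 'exit 0'                 # stand-in for a finite workload, e.g. dd from a partition
        echo restart >> "$count"       # record each restart (demo only)
        [ "$(grep -c . "$count")" -ge 3 ] && return   # bound the demo at 3 cycles
    done
}

supervise &                            # one supervisor per workload; repeat for -w N
pid1=$!

wait "$pid1"                           # main shell blocks on all supervisors at once
restarts=$(grep -c . "$count")
echo "restarts=$restarts"
rm -f "$count"
```

Unlike the SIGCHLD approach, no handler needs the child pid: each supervisor only ever has one child, so "which workload died" is implicit.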
[Xenomai-core] [Fwd: Without joy or bitterness]
In testing the new list, I'm probably jumping the gun, but at least the msg is on-topic and useful. Happy Birthday Philippe, lemme buy you a virtual beer.

-------- Original Message --------
Subject: Without joy or bitterness
Date: Sat, 08 Oct 2005 00:27:37 +0200
From: Philippe Gerum <[EMAIL PROTECTED]>
Organization: Xenomai
To: [EMAIL PROTECTED]
CC: [EMAIL PROTECTED], Paolo Mantegazza <[EMAIL PROTECTED]>

As some of you may have noticed reading this list during the last months, it's been a long time since Paolo and myself have agreed on any topic regarding RTAI. The crux of the problem is basically that DIAPM's (and as such Paolo's) goals for RTAI - as a project and as a technology - are no more compatible with the goals of the on-going "fusion" development effort.

As a conclusion of this situation, the port once planned of the 3.x APIs over the fusion core has been canceled, albeit this was a cornerstone of the fusion effort, which would have paved the way to RTAI 4.x [1]. This step would have required the DIAPM people to help us - the fusion contributors - gradually extend the existing "compatibility" skin, so that all RTAI services to applications would have been eventually rebased on the fusion framework, whilst keeping full compatibility with the original 3.x APIs.

This decision is a direct consequence of the inability Paolo and myself had to agree on key issues regarding this convergence process, and specifically the two following major requirements from the DIAPM, which are unfortunately unacceptable to fusion's major contributors and particularly myself as the maintainer of both the Adeos and fusion projects:

#1 - the integration of significant portions of the existing 3.x kernel code into the fusion core, in order to guarantee by construction the same CPU footprints currently exhibited by the 3.x series, instead of performing a clean-room implementation of the 3.x APIs as regular fusion skins.
#2 - the integration of sideways into the official Adeos patch, aimed at bypassing the pipelining code which actually implements the Adeos scheme. This requirement is a direct consequence of #1, since fusion is fundamentally based on the Adeos pipeline, but the optimal configuration of recent releases of 3.x now depends on those sideways.

Firstly, the key design decision of the fusion architecture is to rely on an abstract RTOS core, aka the "nucleus", inherited from the Xenomai project [2]. The standard interface exported by the nucleus is available for building any kind of real-time API personality, aka "skin", which can be used in turn by the applications. The advantage of such an architecture is basically that each and every available real-time personality contributes to exercise, debug and help optimize a single generic core handling the basic real-time duties, which is in turn beneficial to all other real-time personalities depending on it. Unlike Paolo, I still think that such a design would have allowed us to properly and efficiently emulate the existing 3.x APIs despite the additional abstraction layer, the same way a number of available fusion skins already emulate various traditional RTOSes. On the contrary, a mix of both code bases seems to me the wrong approach, since the 3.x and fusion designs conflict in many ways at core level.

Secondly, Adeos has been explicitly contributed to the real-time Linux community as a patent-free mechanism for prioritizing interrupt delivery in the Linux kernel, based on a non-patented pipelining scheme [3], a feature which is critical to any real-time extension designed for operating in such a context. Unlike Paolo, I see those sideways as being potentially harmful to their users, since they bluntly bypass what makes Adeos a patent-free implementation.
Because this situation impedes any further development of RTAI 4.x the way we initially devised it with Paolo, it also leads to having two competing - and not only diverging - code bases within the RTAI project, which is something that would only bring more confusion, without any upside for its users.

I clearly understand that RTAI is DIAPM's project for developing DSP-style real-time support suitable for their own needs, and as such, my views as the maintainer of a development branch cannot indefinitely go against Paolo's vision for the whole project. For this reason, and as Paolo is already aware of, I see no future anymore for the fusion effort within the RTAI project, therefore I have decided to step down from the latter. This move de facto causes the classic RTAI branch (i.e. v3.x) to remain the single implementation of reference for the RTAI project.

This said, I take this opportunity to thank Paolo, regardless of our recent disagreements, for his confidence in my ability to help RTAI, first with the Adeos contribution, then as the initial 3.x maintainer. Hopefully, this work will have been useful to the RTAI community. Conversely, it is just fair to acknowledge that fusion owes the RTAI communi
Re: [Xenomai-core] Re: [syscall.c] rt_bind_queue/heap()
Philippe Gerum wrote: Dmitry Adamushko wrote:

As you noticed below, the point is that this feature should be active for kernel-based code only; for user-space, we're toast: a typical chicken-and-egg problem, since we need the registry to cross the space boundaries but the registry requires a name to index the object first. So yes, we need to check for anonymous calls in every service taking a symbolic name in native/syscalls.c, and return -EINVAL when applicable.

I thought that "libnative" would be a better place, since this way we would avoid the user mode -> kernel mode switch.

...Or, we might auto-generate some dummy name in native/syscalls.c which we would pass to the registry when this situation arises, so that anonymous creation and use from user-space would still be possible.

Yep, in this case a name would be a string == the object's address, thus it's unique. Ok, I'd probably vote for the 2nd approach.

Definitely better, since this keeps the semantics consistent across execution spaces.

Maybe we should go as far as formalizing the "stringification" of a xenomai object as a URL:

xeno_queue:0x45abc034
xeno_mutex:0xDEADBEEF

or

xeno:queue:0x0F00BAAB
xeno:mutex:
xeno:shared:TGID=100:0xdeadbeef

It still feels a teeny bit hacky, but the url prefix at least makes its use explicit. In the last example, the url includes TGID=100, the idea being that it would only be valid for user-space processes that were in thread-group 100. I dunno whether any such objects should get entries in /proc/ipipe/Xenomai*. On one hand, it would seem a decent rendezvous point, but not all objects should be globally visible, and it's not clear to me which they are. Anyway, reading a /proc/ipipe/* file is a clumsy way to get addresses of xeno-objects to bind to.

thx
jimc
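The stringification scheme proposed above is easy to sketch; this is purely illustrative of the naming convention under discussion (no such helper exists in the tree), using the example address from the thread:

```shell
#!/bin/sh
# Sketch: build a registry name for an anonymous object from its class,
# the owning thread-group id, and the object's address, per the
# "xeno:<class>:TGID=<n>:<addr>" variant floated above (demo only).
addr=0x45abc034
tgid=100
name=$(printf 'xeno:queue:TGID=%s:%s' "$tgid" "$addr")
echo "$name"
```

Since the address is unique per live object and the TGID scopes it to one process, the generated name cannot collide with a user-chosen one as long as user names are forbidden from starting with the "xeno:" prefix.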
[Xenomai-core] [patch] xeno-config --verbose
attached patch gives xeno-config a --verbose option, ie:

soekris:/usr/realtime/2.6.13-ski6-v1/bin# xeno-config --v
xeno-config --verbose
  --version="2.0"
  --cc="gcc"
  --cross-compile=""
  --arch="i386"
  --subarch=""
  --prefix="/usr/realtime/2.6.13-ski6-v1"
  --config="/usr/realtime/2.6.13-ski6-v1/share/xenomai/config-xenomai-2.0"
  --kernel-cflags="-I. -I/usr/realtime/2.6.13-ski6-v1/include -D__XENO__ -ffast-math -mhard-float"  aka --mod*-cflags
  --xeno-cflags="-I. -I/usr/realtime/2.6.13-ski6-v1/include -O2 -I/lib/modules/2.6.13-ski6-v1/build/include -D_GNU_SOURCE -D_REENTRANT -D__XENO__ -march=pentium-mmx -Wall -pipe -fstrict-aliasing -Wno-strict-aliasing"  aka --fusion-cflags
  --xeno-ldflags="-L/usr/realtime/2.6.13-ski6-v1/lib -lpthread"  aka --fusion-ldflags
  --posix-cflags="-I. -I/usr/realtime/2.6.13-ski6-v1/include -I/usr/realtime/2.6.13-ski6-v1/include/posix -O2 -I/lib/modules/2.6.13-ski6-v1/build/include -D_GNU_SOURCE -D_REENTRANT -D__XENO__ -march=pentium-mmx -Wall -pipe -fstrict-aliasing -Wno-strict-aliasing"
  --posix-ldflags="-L/usr/realtime/2.6.13-ski6-v1/lib -lpthread_rt -lpthread -lrt"
  --uvm-cflags="=-I. -I/usr/realtime/2.6.13-ski6-v1/include -O2 -I/lib/modules/2.6.13-ski6-v1/build/include -D_GNU_SOURCE -D_REENTRANT -D__XENO__ -march=pentium-mmx -Wall -pipe -fstrict-aliasing -Wno-strict-aliasing -D__XENO_UVM__ "
  --uvm-ldflags="=-u__xeno_skin_init -L/usr/realtime/2.6.13-ski6-v1/lib -luvm -lnucleus -lpthread"
  --mod*-dir="=/usr/realtime/2.6.13-ski6-v1/modules"
  --sym*-dir="/usr/realtime/2.6.13-ski6-v1/symbols"
  --libdir="/usr/realtime/2.6.13-ski6-v1/lib"
  --linux-dir="/lib/modules/2.6.13-ski6-v1/build"
  --linux-ver*="2.6.13"

When called w/o args, it outputs the above, then prints the (current) usage message too.
hth
jimc

Index: scripts/xeno-config.in
===
--- scripts/xeno-config.in (revision 22)
+++ scripts/xeno-config.in (working copy)
@@ -44,10 +44,10 @@
 --subarch
 --prefix
 --config
---module-cflags
---module-cxxflags
---xeno-cflags
---xeno-ldflags
+--module-cflags,--kernel-cflags
+--module-cxxflags,--kernel-cxxflags
+--xeno-cflags,--fusion-cflags
+--xeno-ldflags,--fusion-ldflags
 --posix-cflags
 --posix-ldflags
 --uvm-cflags
@@ -61,12 +61,44 @@
 exit $1
 }

+verbose ()
+{
+echo xeno-config --verbose
+
+echo " " --version="\"${XENO_VERSION}\""
+echo " " --cc="\"$XENO_CC\""
+echo " " --cross-compile="\"$CROSS_COMPILE\""
+echo " " --arch="\"$XENO_TARGET_ARCH\""
+echo " " --subarch="\"$XENO_TARGET_SUBARCH\""
+echo " " --prefix="\"$XENO_PREFIX\""
+echo " " --config="\"$XENO_CONFIG\""
+echo " " --kernel-cflags="\"$XENO_KERNEL_CFLAGS\""
+
+echo " " --xeno-cflags="\"$XENO_BASE_CFLAGS\""
+echo " " --xeno-ldflags="\"$XENO_BASE_LDFLAGS\""
+echo " " --posix-cflags="\"$XENO_POSIX_CFLAGS\""
+echo " " --posix-ldflags="\"$XENO_POSIX_LDFLAGS\""
+echo " " --uvm-cflags="\"=$XENO_UVM_CFLAGS \""
+echo " " --uvm-ldflags="\"=$XENO_UVM_LDFLAGS\""
+
+echo " " --mod*-dir="\"=$XENO_MODULE_DIR\""
+echo " " --sym*-dir="\"$XENO_SYMBOL_DIR\""
+echo " " --libdir="\"$XENO_LIBRARY_DIR\""
+echo " " --linux-dir="\"$XENO_LINUX_DIR\""
+echo " " --linux-ver*="\"$XENO_LINUX_VERSION\""
+}
+
 if test $# -eq 0; then
+verbose $*
 usage 1 1>&2
 fi

 while test $# -gt 0; do
 case "$1" in
+--v|--verbose)
+verbose $*
+exit 0
+;;
 --version)
 echo ${XENO_VERSION}
 ;;
[Xenomai-core] Benchmarking Plan [Was: Partial roadmap]
Philippe Gerum wrote: This is a partial roadmap for the project, composed of the currently

o Web site.

Wiki ++, eventually

o Automated benchmarking.
- We are still considering the best way to do that; actually, my take is that we would just need to bootstrap the thing and flesh it out over time, writing one or two significant benchmark tests to start with, choosing a tool to plot the collected data and push the results to some web page for public consumption on a regular basis, but so far, we did not manage to spark this. It's still in the short-term plan, though, because we currently have neither metrics nor data to check for basics, and we deeply need both of them now. ETA: Q4 2005.

A Xenomai Automatic Benchmarking plan

Goal is to test xenomai performance so we know when something breaks, and to test it thoroughly enough that we can see / identify systematic, generic, or platform-specific bottlenecks.

Benchmarking

wrt the bootstrap approach; scripts/xeno-test already runs 2 of 3 testsuite/* tests, and collects the results along with useful platform data. If new testsuite/* stuff gets added, it's trivial to call them from xeno-test.

Automatic

Automating the process is trickier than usual, due to the need for cross-compiles (in some situations), NFS root mounts for remote boxes, remote or scripted reboots, etc. I've cobbled up a rube-goldberg arrangement, which is out-of-scope for this message; will discuss all that separately.

Characterization

RPM mentioned plotting; I take that to mean heavy use of graphs to characterize and ultimately to predict xenomai performance over a range of criteria, for any given platform. LiveCD had the right idea wrt this - collecting platform info and performance data on any vanilla PC with a CD-ROM drive, and making this data available on a website, allowing users to compare their results with others done on similar platforms.
LiveCD has a few weaknesses though:
- can't test platforms w/o a cdrom
- manual re-entry of data is tedious,
- no collection of platform data (available for automation)
- spotty info about cpu, memory, mobo, etc
- no unattended test (still true?)

These things could be readily fixed, but xeno-test already does everything but the data upload. The real value of LiveCD was the collection of data across hundreds of different platforms, and its promise was that studying the data would reveal the secrets of better performance on any platform.

A Plan (sort of)

1. xeno-test currently (patch pending) executes the following commands, and captures output in a reasonably parseable format; a set of chunks:

- uname -a
- cat /proc/config.gz if -f /proc/config.gz
- cat /proc/cpuinfo
- cat /proc/meminfo
- cat /proc/adeos/* foreach /proc/adeos/*
- cat /proc/ipipe/* foreach /proc/ipipe/*
- xeno-config --v
- xeno-info

The info captured is a fairly complete picture of the platform; it should support careful selection of data-sets for use in analysing, characterizing, and improving xenomai performance. Several chunks are collected optionally, ex config.gz. Although each chunk has some cost (config.gz kernels are larger, kernels with /proc/ipipe/Linux_stats are slower), I'd encourage you to build your kernels with this stuff enabled, as it enriches the data. Besides, with baseline data collected, you can then accurately demonstrate each config-tweak's performance effect, and put it in a nice graph.

also need these:
- xenomai svn revision-level, perhaps as part of xeno-info,config ?
- what else ? Anything added now is info-opportunity later
- testsuite/cruncher ?

2. send your results to xenomai.testout-at-gmail.com

Please run xeno-test, attach the resulting file(s), and send it to the above address. This collects data now; we can decide where to host it when the website is up. Obviously, an official gna.org ML might be more appropriate.
# run something like this
xeno-test -T300 -sh -w2 -L -N ~/xenotest-outputs/foo

xeno-test will write all test output to a file: ~/xenotest-outputs/foo-$timestamp. The timestamp gives uniqueness, and you can choose which files 'look right' after inspecting several trial-runs.

FWIW - I could poach LiveCD code to upload to the LiveCD site. That might be handy if it doesn't break the process that populates the data onto the web-page (which must parse for the data).

3. mail handler

I've previously written a mail-bot to poll a pop-mbox and collect attachments. I just need to dredge it out or rewrite it. Once I do, I'll just run it on that inbox to collect your results. Eventually, the data will be uploaded somewhere for everyone to peruse. If we go with a xenotest-results-at-gna.org, I can just subscribe my new acct to the new list :-)

4. xeno-test output parser

I've written a parser to chop the formatted output into chunks, and then parse some of those chunks into hashes. Soon I'll define some matching db-tables for the (well-mannered) data. 'well mannered' means lots of limitations atm;
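The chunking step of item 4 can be sketched in shell: split the log on its "running: <command>" marker lines, whose format matches the sample logs later in this archive. The marker handling and sample data here are illustrative only; the actual parser is perl and presumably more forgiving:

```shell
#!/bin/sh
# Sketch: split a xeno-test log into chunks keyed by the command named
# on each "running: <cmd>" marker line (demo data, not a real log).
log=$(mktemp)
cat > "$log" <<'EOF'
Sun Mar 26 17:40:01 PST 2006 running: cat /proc/cpuinfo
processor : 0
cpu MHz : 266.696
Sun Mar 26 17:40:01 PST 2006 running: cat /proc/meminfo
MemTotal: 126264 kB
EOF

chunks=$(grep -c 'running: ' "$log")       # one chunk per marker line
names=$(sed -n 's/.*running: //p' "$log")  # chunk keys = the commands run
echo "chunks=$chunks"
echo "$names"
rm -f "$log"
```

The body of each chunk is then just the lines between consecutive markers, ready to be parsed into per-command hashes.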
Re: [Xenomai-core] Cosmetic changes to script xeno-config + man page
Romain Lenglet wrote: By the way, I noticed that the output of function usage() in that script was wrong. Here is a correct version, to replace the one

If you see an error, please provide a proper patch. I cannot see the error you've 'corrected' below.. In fact, you've added one; -v doesn't work, --v does.

in scripts/xeno-config.in:
usage ()
{
cat <

thx
jimc
[Xenomai-core] errors to console when running xeno-test (latency -t 1)
hello xenophiles,

I'm getting errors to the console when running latency -t 1. They also appear in dmesg output:

RTD| 16.615| 32.201| 44.845| 0| 14.054| 45.621
[ 917.477135] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 15.832| 32.166| 44.756| 0| 14.054| 45.621
[ 918.476991] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 14.542| 32.206| 44.098| 0| 14.054| 45.621
[ 919.476834] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 14.669| 32.172| 44.248| 0| 14.054| 45.621
[ 920.476706] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 15.326| 32.175| 43.045| 0| 14.054| 45.621
[ 921.476535] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 15.048| 32.216| 44.695| 0| 14.054| 45.621
[ 922.476384] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 15.344| 32.220| 52.680| 0| 14.054| 52.680
[ 923.476234] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 16.360| 32.206| 43.913| 0| 14.054| 52.680
[ 924.476085] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 14.384| 32.211| 44.275| 0| 14.054| 52.680
[ 925.475933] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 14.883| 32.181| 43.601| 0| 14.054| 52.680
[ 926.475783] invalid use of FPU in Xenomai context at 0xb7e488bc
RTD| 14.684| 32.199| 44.587| 0| 14.054| 52.680

I've attached a logfile, written by xeno-test. Hopefully it has enough detail for you to diagnose my problem ;-) (Actually, there are probably trivial additions to xeno-test that would enhance the info it provides for debugging purposes; for example, it currently greps XENO out of /proc/config.gz. If you identify other CONFIG_* items that are worth collecting, I'll add them (PREEMPT, MUTEX, ...). Or it can just cat /proc/config.gz to the log, and get everything.)

Also, my brand new 2.6.16-ipipe-121 kernel took quite a long time to boot. I've seen this intermittently thru the 2.6.1[45]-* series, mostly in -mm*, -rc* too (I think) but have never isolated any cause. So it's probably my setup somehow...
At some risk of running on too long, I've noticed an oddity in dmesg output: do these large timestamps before zeroing matter?

soekris:/usr/xenomai/bin# dmesg |more
[17179569.184000] Linux version 2.6.16-ipipe-121-sk ([EMAIL PROTECTED]) (gcc version 4.0.2 20051125 (Red Hat 4.0.2-8)) #4 Sun Mar 26 20:07:17 EST 2006
[17179569.184000] BIOS-provided physical RAM map:
[17179569.184000] BIOS-e820: - 0009fc00 (usable)
[17179569.184000] BIOS-e820: 0009fc00 - 000a (reserved)
[17179569.184000] BIOS-e820: 000f - 0010 (reserved)
[17179569.184000] BIOS-e820: 0010 - 0800 (usable)
[17179569.184000] BIOS-e820: fff0 - 0001 (reserved)
[17179569.184000] 128MB LOWMEM available.
[17179569.184000] On node 0 totalpages: 32768
[17179569.184000] DMA zone: 4096 pages, LIFO batch:0
[17179569.184000] DMA32 zone: 0 pages, LIFO batch:0
[17179569.184000] Normal zone: 28672 pages, LIFO batch:7
[17179569.184000] HighMem zone: 0 pages, LIFO batch:0
[17179569.184000] DMI not present or invalid.
[17179569.184000] Allocating PCI resources starting at 1000 (gap: 0800:f7f0)
[17179569.184000] Built 1 zonelists
[17179569.184000] Kernel command line: console=ttyS0,115200n81 root=/dev/nfs nfsroot=192.168.42.1:/nfshost/truck nfsaddrs=192.168.42.100:192.168.42.1:192.168.42.1:255.255.255.0:soekris:eth0 panic=5 initrd=initrd-2.6.16-ipipe-121-sk.img BOOT_IMAGE=vmlinuz-2.6.16-ipipe-121-sk
[17179569.184000] Initializing CPU#0
[17179569.184000] PID hash table entries: 1024 (order: 10, 16384 bytes)
[0.00] Detected 266.696 MHz processor.
[ 20.833323] Using tsc for high-res timesource
[ 20.833436] I-pipe 1.2-01: pipeline enabled.
[ 20.833781] Console: colour dummy device 80x25
[ 20.956810] Dentry cache hash table entries: 32768 (order: 5, 131072 bytes)
[ 20.966149] Inode-cache hash table entries: 16384 (order: 4, 65536 bytes)
[ 21.009759] Memory: 126112k/131072k available (1602k kernel code, 4540k reserved, 741k data, 120k init, 0k highmem)

Script started on Sun Mar 26 17:40:00 2006
running ./xeno-test -T 60 -h -s -l 0

Sun Mar 26 17:40:01 PST 2006 running: cat /proc/cpuinfo
processor : 0
vendor_id : Geode by NSC
cpu family : 5
model : 9
model name : Unknown
stepping : 1
cpu MHz : 266.696
fpu : yes
fpu_exception : yes
cpuid level : 2
flags : fpu tsc msr cx8 cmov mmx cxmmx
bogomips : 536.58

Sun Mar 26 17:40:01 PST 2006 running: cat /proc/meminfo
MemTotal: 126264 kB
MemFree:
Re: [Xenomai-core] errors to console when running xeno-test (latency -t 1)
Gilles Chanteperdrix wrote: Jim Cromie wrote:
> hello xenophiles,
>
> Im getting errors to the console when running latency -t 1.
> they also appear in dmesg output

This should be fixed in revision 807.

807 didn't fix it here. It's also still present in 815.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] errors to console when running xeno-test (latency -t 1)
Gilles Chanteperdrix wrote: Jim Cromie wrote:
> Gilles Chanteperdrix wrote:
> > Jim Cromie wrote:
> > > hello xenophiles,
> > >
> > > Im getting errors to the console when running latency -t 1.
> > > they also appear in dmesg output
> >
> > This should be fixed in revision 807.
>
> 807 didnt fix it here. Its also still present in 815

Reverting 807 produces a bug on my test box identical to the one you are reporting. Merging 807 again makes the bug disappear. Could you double-check? If the bug persists, could you send me privately your .config file?

Yes, it works now. My error was not recompiling the kernel too. Sorry for the noise.
[Xenomai-core] bad/slow TSC hw detected by 2.6.17-rc1-mm1
FYI, I've just built 17-rc1-mm1, and noted that the new time-keeping system http://lwn.net/Articles/176837/ can now detect the buggy TSC on my GEODE-sc1100 cpu, and also that it's apparently fixable by adding 'idle=poll' to the kernel boot-line.

Apr 7 11:42:01 truck kernel: [ 19.160016] Kernel command line: console=ttyS0,115200n81 root=/dev/nfs nfsroot=192.168.42.1:/nfshost/truck nfsaddrs=192.168.42.100:192.168.42.1:192.168.42.1:255.255.255.0:soekris:eth0 panic=5 initrd=initrd-2.6.17-rc1-mm1-sk.img BOOT_IMAGE=vmlinuz-2.6.17-rc1-mm1-sk
Apr 7 11:42:01 truck kernel: [ 24.314851] Time: tsc clocksource has been installed.
Apr 7 11:42:01 truck kernel: [ 29.977802] TSC appears to be running slowly. Marking it as unstable
Apr 7 11:42:01 truck kernel: [ 20.46] Time: pit clocksource has been installed.
Apr 7 12:35:56 truck kernel: [ 21.562573] Kernel command line: console=ttyS0,115200n81 root=/dev/nfs nfsroot=192.168.42.1:/nfshost/truck nfsaddrs=192.168.42.100:192.168.42.1:192.168.42.1:255.255.255.0:soekris:eth0 panic=5 initrd=initrd-2.6.17-rc1-mm1-sk.img idle=poll BOOT_IMAGE=vmlinuz-2.6.17-rc1-mm1-sk
Apr 7 12:35:56 truck kernel: [ 21.563049] using polling idle threads.
Apr 7 12:35:56 truck kernel: [ 28.393469] Time: tsc clocksource has been installed.

hope this is useful,
jimc
Re: [Xenomai-core] bad/slow TSC hw detected by 2.6.17-rc1-mm1
Jan Kiszka wrote: Jim Cromie wrote:

FYI, I've just built 17-rc1-mm1, and noted that the new time-keeping system http://lwn.net/Articles/176837/ can now detect the buggy TSC on my GEODE-sc1100 cpu, and also that it's apparently fixable by adding 'idle=poll' to the kernel boot-line.

Keeps your CPU safe and warm, I guess. ;)

For me personally, it's the new-found certainty that the bug is repeatedly observable and correctable :-) I've historically had an issue with my ntp-server on this box, which slips badly when running latency tests under certain conditions (i.e., the dd workload is using /dev/hda, a real interrupt source, rather than /dev/zero).

FWIW, my 2.6.16-ipipe-122 kernel takes a *long* time to boot, even while the printk numbers look good. The delays/pauses are in several subsystems, most notably after these dmesg lines:

[ 30.768098] Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
[ 30.775141] ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
(1 min pause)
[ 31.209246] hda: SanDisk SDCFB-512, CFA DISK drive
[ 30.561093] Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
[ 30.574876] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 30.586719] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
(more delays, in various spots, and I'm now much more aware of them..)

FWIW, before now, I've never needed idle=poll, so it seems that adeos/xenomai is more sensitive to these long delays than vanilla linux (same for more recent versions of the kernel & adeos). OTOH, I've only recently added timestamps to printks, and I'm also a bit more sensitive now than I once was..

Does it make any sense that adeos / xenomai is more sensitive to a bad TSC than it used to be?

thanks
jimc
[Xenomai-core] [patch] xeno-test: replace 'head -3' with 'head -n 3'
here's an untested (still, low-risk) patch for xeno-test which corrects an obsolete usage of head, noted by Tobias Marschall on xeno-help.

Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in (revision 924)
+++ scripts/xeno-test.in (working copy)
@@ -90,7 +90,7 @@
 loudly cat /proc/interrupts
 loudly cat /proc/loadavg
 [ -n "$prepost" ] && loudly $prepost
-loudly top -bn1c | head -$(( 12 + $workload ))
+loudly top -bn1c | head -n $(( 12 + $workload ))
 }
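A minimal illustration of the corrected form: POSIX only specifies the `-n COUNT` syntax, and the count can still be computed from the workload size as the patched line does. The sample data stands in for the real `top -bn1c` output:

```shell
#!/bin/sh
# head -N is the obsolete form; head -n N is the portable one,
# and it accepts an arithmetic expansion, as in the xeno-test patch.
workload=2
out=$(printf 'l1\nl2\nl3\nl4\nl5\n' | head -n $(( 1 + workload )))
echo "$out"
```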
[Xenomai-core] [patch] TROUBLESHOOTING additions, README tweaks
here's another try. It contains some speculative explanations that perhaps warrant rewording. The regular reader will also detect some prose poached from this ML ;-)

Index: TROUBLESHOOTING
===
--- TROUBLESHOOTING (revision 935)
+++ TROUBLESHOOTING (working copy)
@@ -6,6 +6,46 @@
 GENERIC
 ===

+Q: Which CONFIG_* items are latency killers, and should be avoided ?
+
+A: Here's an enumeration. Several of these are discussed in greater
+detail later. Feel free to verify that these cause latencies to
+explode (xeno-test runs testsuite/latency 3 different ways), but keep
+in mind that before you rely on the numbers, you must create workloads
+that will exercise all the hardware used for your RT application.
+
+CONFIG_CPU_FREQ: This allows the CPU frequency to be modulated with
+workload, but many CPUs change the TSC counting frequency also, which
+makes it useless for accurate timing when the CPU clock can change.
+
+ACPI_PROCESSOR: ACPI is a complex BIOS functionality, and BIOS code is
+never written with RT-latency in mind. If enabled, this BIOS code can
+be invoked at a pseudo-SMI priority, which breaks the rule that
+adeos-ipipe must be in charge of such things. DISABLE_SMI doesn't help
+here (more later).
+
+PM & APM: Linux power management features also use the BIOS, so the
+ACPI comments apply here too.
+
+__
+
+Q: How do I adequately stress-test ?
+
+A: xeno-test has a very basic workload generator, whose main virtue is
+that it's nearly universally available:
+
+dd if=/dev/zero of=/dev/null
+
+You can change the input device (-d /dev/hda1) to get real device
+activity and interrupts, and/or -w 4 to run more workload tasks. For
+more thorough testing, use -W .
+
+If you are looking for real heavy load: cache benchmarks tend to
+stress your system most, http://www.cwi.nl/~manegold/Calibrator for
+example. Combine them with heavy i/o load (flood ping etc.).
+
+__
+
 Q: My user-space application has unexpected latencies which seem to
 appear when transitioning from primary (Xenomai) to secondary (native
 Linux) real-time modes. Why?

Index: README.INSTALL
===
--- README.INSTALL	(revision 935)
+++ README.INSTALL	(working copy)
@@ -74,12 +74,14 @@
 Once the target kernel has been prepared, all Xenomai configuration
 options are available from the "Real-time subsystem" toplevel menu.

-Once configured, the kernel should be built as usual.
+There are several configure options that cause large latencies; they
+should be avoided. The TROUBLESHOOTING file identifies them and
+explains the issues with their use. Once configured, the kernel
+should be built as usual.

-It might be a good idea to put all the output into a different build
-directory as to build from from linux source several targets. For each
-target add O=../build- to each make invocation. See section 2.2
-for an example.
+If you want several different configs/builds at hand, you can reuse
+the same source by adding O=../build- to each make
+invocation. See section 2.2 for an example.

 In order to cross-compile the Linux kernel, pass an ARCH and
 CROSS_COMPILE variable on make command line. See sections 2.2, 2.3 and
@@ -105,7 +107,9 @@
 albeit the kernel has been compiled with CONFIG_X86_TSC disabled
 would certainly lead to runtime problems if uncaught, since Xenomai and
 the application would not agree on the high precision clock to use for
-their timings.
+their timings. Furthermore, most of these issues cannot be probed for
+during compilation, because the target generally has different
+features than the host, even when they're the same arch (ex 386 vs 686).

 In order to solve those potential issues, each Xenomai architecture
 port defines a set of critical features which is tested for
@@ -126,8 +130,8 @@
 kernel built with CONFIG_X86_TSC set, since the x86-tsc option's
 binding is strong.
-1.3.2 Generic options
---
+1.3.2 Generic configure options
+---

 NAME		DESCRIPTION			[BINDING,]DEFAULT(*)

@@ -137,15 +141,17 @@
 --enable-debug		Enable debug symbols (-g)	disabled
 --enable-smp		Enable SMP support		weak,disabled

-1.3.3 Arch-specific options
+1.3.3 Arch-specific configure options
+-

 NAME		DESCRIPTION			[BINDING,]DEFAULT(*)

 --enable-x86-sep	Enable x86 SEP instructions	strong,disabled
-			for issuing syscalls
+			for issuing syscalls.
+			You will also need NPTL

 --enable-x86-tsc	Enable x8
Re: [Xenomai-core] [RFC] collecting xenomai statistics
Niklaus Giger wrote: Hi Following a suggestion from Philippe Gerum I propose to collect and prepare like this: a) Make it easy to collect information add -s/-c option to xeno-test, help text would look like -s send output of xeno-test to [EMAIL PROTECTED] -c if -s, send also kernel config file to [EMAIL PROTECTED] attached patch adds new -m -M flags for xeno-test, (-s flag is taken, for statistics) former for a fixed addy (to be patched later), latter taking any email as arg. I didnt add -c , since xeno-test already does something similar; if you build with CONFIG_IKCONFIG_PROC=y, xeno-test greps XENO out of /proc/config.gz (probably needs a few more grep terms, and perhaps a -verbose mode which cats the whole thing.) The -M option works, since I just received an email Id sent earlier, but I also sent one to xenomai-core, and it hasnt shown up yet. I suspect that the mail looks like spam, and has been rejected, since my hostname is not a real FQDN. So Im not so sure that email is the best way here, but it is conceptually simple. ( back in Nov, I set up a gmail acct, and tried to fetch mail from it with a script. gmail wants TLS security, and didnt let me in, so I punted/shelved this. ) Niklaus' message brought me back on this topic. So Im considering poaching code from LiveCD that does url-encoding, or just using curl to post to some file-upload url. How would you do this if you only had busybox ?? Anyway, both email and url-upload are suboptimal wrt spam, latter is also a server support issue. (May be patch xeno-config to emit also the revision of the svn checkout?) perhaps as a 4th number on the version, that way xeno-config can stay as is. 
[EMAIL PROTECTED] bin]# ./xeno-config --version
2.1.50

I'd like instead: 2.1.50.941

This seems better than poking around a filesystem, looking for the xenomai svn
(which may be on the build-host, not the run-host).

b) Setup an e-mail account [EMAIL PROTECTED]

Getting messages thru the anti-spam filters is the issue here.

c) Add an archiver which generates daily a gzipped tar file of all messages
ever sent to [EMAIL PROTECTED] (e.g. of its mbox). Make it available somewhere
on the internet.

or a daily/weekly digest/tarball

d) Write a converter of the raw messages into a more suitable representation,
eg. a MySQL-DB, a spreadsheet format. Extract the raw message and kernel
config and store them publicly accessible on the internet. The DB/spreadsheet
will contain pointers (URLs) to the raw message/kernel config.

Certainly not a bad idea. It will simplify collection/selection of data-sets
for various things, esp more complicated selections (with ands, ors, etc).

e) Write viewers which present interesting statistics. E.g. X/HTML pages to
present an ordered (by architecture, board, version, etc) view of the
available results.

I've done minimal dabbling with gnuplot and R; both have possibilities. With
gnuplot, I tried graphing the RTD data, but failed because there's no time
column (couldn't figure out how to create/infer a synthetic 'index' column).
It has some capability to select data out of files using awk etc subcommands,
but for complicated data files like xeno-test outputs (multiple sections,
different formats), I think its selection/reformatting capabilities might be
over-matched. R can apparently manage complex data-sets, and select data out
of them. It sounds tremendously capable (after the learning curve). That said,
I've not grokked its use yet.
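On the "poaching url-encoding from LiveCD" idea mentioned above: a POST-based
uploader would need a percent-encoder that works under busybox. This is an
illustrative sketch only (not the LiveCD code; `urlencode` is a hypothetical
helper name), relying only on `printf`, `od`, and `tr`:

```shell
#!/bin/sh
# Minimal percent-encoder in portable shell. Reads each byte of $1 as
# hex via od, emits unreserved chars (0-9 A-Z a-z - . _ ~) verbatim,
# and %XX for everything else.
urlencode() {
    printf '%s' "$1" | od -An -tx1 -v | tr ' ' '\n' | while read hex; do
        [ -z "$hex" ] && continue
        case "$hex" in
            3[0-9]|4[1-9a-f]|5[0-9a]|6[1-9a-f]|7[0-9a]|2d|2e|5f|7e)
                # unreserved byte: print the character itself
                printf "\\$(printf '%03o' 0x$hex)" ;;
            *)  # everything else: percent-escape, uppercase hex
                printf '%%%s' "$hex" | tr 'a-f' 'A-F' ;;
        esac
    done
}
urlencode 'hello world&x=1'    # -> hello%20world%26x%3D1
```

The output could then be fed to `wget --post-data` (present in busybox) or
curl, avoiding the mail-delivery/spam problems discussed above.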
Re: [Xenomai-core] [PATCH] Check for NPTL and factor user-space skins initialization.
Gilles Chanteperdrix wrote:
> For review...
>
> +#ifndef __KERNEL__
> +#include
> +#include
> +#include
> +#include
> +
> +static inline void xeno_x86_features_check(void)
> +{
> +#ifdef CONFIG_XENO_X86_SEP
> +	size_t n = confstr(_CS_GNU_LIBPTHREAD_VERSION, NULL, 0);
> +	if (n > 0)
> +	{

Since this is user code, it's possible to read /proc/cpuinfo, and find the sep
flag. Is this worth doing also ? If so, I can work a patch up later, unless
you feel the urge.
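The /proc/cpuinfo idea above can be sketched in shell (the patch itself is C;
`has_cpu_flag` is a hypothetical helper, with the file parameterized so it can
be fed test input):

```shell
#!/bin/sh
# Check whether a cpuinfo-style "flags" line advertises a given feature.
# has_cpu_flag FLAG [FILE] -- FILE defaults to the live /proc/cpuinfo.
has_cpu_flag() {
    flag="$1"; file="${2:-/proc/cpuinfo}"
    grep '^flags' "$file" | grep -qw "$flag"
}
# e.g.: has_cpu_flag sep && echo "SEP available"
```

The same word-match grep would also have caught the missing-sep Fedora case
reported further down this thread.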
[Xenomai-core] [patch] xeno-test email addition
Jim Cromie wrote: Niklaus Giger wrote: Hi Following a suggestion from Philippe Gerum I propose to collect and prepare like this: a) Make it easy to collect information add -s/-c option to xeno-test, help text would look like -s send output of xeno-test to [EMAIL PROTECTED] -c if -s, send also kernel config file to [EMAIL PROTECTED] attached patch adds new -m -M flags for xeno-test, (-s flag is taken, for statistics) former for a fixed addy (to be patched later), latter taking any email as arg. I didnt add -c , since xeno-test already does something similar; if you build with CONFIG_IKCONFIG_PROC=y, xeno-test greps XENO out of /proc/config.gz (probably needs a few more grep terms, and perhaps a -verbose mode which cats the whole thing.) The -M option works, since I just received an email Id sent earlier, but I also sent one to xenomai-core, and it hasnt shown up yet. I suspect that the mail looks like spam, and has been rejected, since my hostname is not a real FQDN. So Im not so sure that email is the best way here, but it is conceptually simple. Oof. Now attached. Index: scripts/xeno-test.in === --- scripts/xeno-test.in(revision 943) +++ scripts/xeno-test.in(working copy) @@ -17,6 +17,8 @@ -L writes to logfile (default "test-`uname -r`") (via script) -N same as -L, but prepend "$name-" (without -L, logname="$name-") prepending allows you to give a full path. + -m sends output file to [EMAIL PROTECTED] + -Msends output file to given addy # following options are passed thru to latency, klatency -s print statistics of sampled data (default on) @@ -136,8 +138,11 @@ logprefix= prepost= # command to run pre, and post test (ex ntpq -p) -while getopts 'd:shqT:l:H:B:uLN:w:W:p:' FOO ; do +email='[EMAIL PROTECTED]' +sendit= +while getopts 'd:shqT:l:H:B:uLN:w:W:p:mM:' FOO ; do + case $FOO in s|h|q) pass="$pass -$FOO" ;; @@ -166,6 +171,11 @@ p) prepost=$OPTARG loadpass="$loadpass -p '$OPTARG'" ;; + M) + email=$OPTARG + sendit=1 ;; + m) + sendit=1 ;; ?) 
myusage ;; esac @@ -179,6 +189,10 @@ # restart inside a script invocation, passing all date=`date +%y%m%d.%H%M%S` script -c "./xeno-test $loadpass $pass $*" "$logprefix$logfile-$date" +if [ $sendit == 1 ]; then + echo "mailing $logprefix$logfile-$date to $email" + mail -s 'xeno-test results' $email < "$logprefix$logfile-$date" +fi else if [ "$altwork" != "" ]; then mkload() { exec $altwork; } ___ Xenomai-core mailing list Xenomai-core@gna.org https://mail.gna.org/listinfo/xenomai-core
[Xenomai-core] /proc/cpuinfo missing sep - closure
Just to follow up: I built a xeno-kernel for my laptop, the sep flag was on,
so it was Fedora-specific. I asked on fedora-list:

> This is pretty obscure, and I haven't seen any problems because of it,
> but it is a bit odd.
>
> Can someone(s)
> - confirm its absence on 2.6.16-1.2069_FC4 or other
> - check their FC-5 /proc/cpuinfo, and report back.
> - explain why this is a good thing, or how it might have happened
>   accidentally ?

and got this answer:

SEP is incompatible with segment-based NX emulation provided by exec-shield in
the Fedora kernel. The reason for this is that a SYSRET resets the segment
limits. I presume this distinction is well hidden inside glibc etc..
[Xenomai-core] [patch] readme.install & troubleshooting
Here's another try, adjusting per Rodrigo's and Gilles' feedback, and filling
in from the linux Kconfig and wikipedia on ACPI. FYI, the latter is quite
informative, esp on sleep states. The speculative content continues - I don't
mind being wrong, esp when it's temporary / corrected :-)

I held back a bit on one wild conjecture, which connects to the xeno-stats
effort / discussion. I'll start with a Q, and see where it goes (feel free to
change subject when replying ;-)

I've seen the idle=poll boot-arg fix the Geode SC-1100's buggy TSC by
preventing it from entering C1 state (conjecture-1 ? ;). Might this also 'fix'
the latency problems caused by ACPI_PROCESSOR ? If so, it improves flexibility
by a little, allowing some latency decisions to be made at boot time rather
than compile time. Not quite as flexible as toggling linux-scheduler-idle
behavior via sysctl, but it doesn't introduce new code either. I'll be testing
this notion on my laptop (which has an ACPI BIOS) when time permits, but would
appreciate guesses as to the probability of useful results, tips, etc.

Index: TROUBLESHOOTING
===
--- TROUBLESHOOTING	(revision 946)
+++ TROUBLESHOOTING	(working copy)
@@ -6,6 +6,57 @@
 GENERIC
 ===
+Q: Which CONFIG_* items are latency killers, and should be avoided ?
+
+A: Here's an enumeration. Several of these are discussed in greater
+detail in following sections.
+
+CONFIG_CPU_FREQ: This allows the CPU frequency to be modulated with
+workload, but many CPUs change the TSC counting frequency also, which
+makes it useless for accurate timing when the CPU clock can change.
+Also some CPUs can take several milliseconds to ramp up to full speed.
+
+APM: The APM model assigns power management control to the BIOS, and
+BIOS code is never written with RT-latency in mind. If configured,
+APM routines are invoked with SMI priority, which breaks the rule that
+adeos-ipipe must be in charge of such things. DISABLE_SMI doesn't help
+here (more later).
+
+ACPI_PROCESSOR: For systems with ACPI support in the BIOS, this ACPI
+sub-option installs an 'idle' handler that uses ACPI C2 and C3
+processor states to save power. The CPU must 'warm-up' from these
+sleep states, increasing latency in ways dependent upon both the
+BIOS's ACPI tables and code. You may be able to suppress the sleeping
+with the 'idle=poll' boot-arg; test to find out.
+
+Summarizing, the latencies incurred here are dependent upon CPU, BIOS,
+and motherboard; ie they're hard to predict, so we are conservative.
+Feel free to test on your platform (xeno-test runs testsuite/latency
+in 3 modes), but keep in mind that before you rely on the numbers, you
+must test with workloads that will exercise all the hardware used for
+your RT application.
+
+__
+
+Q: How do I adequately stress-test ?
+
+A: xeno-test has a very basic workload generator, whose main virtue is
+that it's nearly universally available:
+
+dd if=/dev/zero of=/dev/null
+
+You can change the input device (-d /dev/hda1) to get real device
+activity and interrupts, and/or -w 4 to run more workload tasks. For
+more thorough testing, use -W .
+
+If you are looking for real heavy loads, cache benchmarks tend to
+stress your system the most, http://www.cwi.nl/~manegold/Calibrator
+for example. Combine them with heavy i/o load (flood ping etc.) to
+generate device interrupts. Also consider benchmarking software, such
+as bonnie++, cpuburn, lmbench.
+
+__
+
 Q: My user-space application has unexpected latencies which seem to
 appear when transitioning from primary (Xenomai) to secondary (native
 Linux) real-time modes. Why?

Index: README.INSTALL
===
--- README.INSTALL	(revision 946)
+++ README.INSTALL	(working copy)
@@ -74,12 +74,14 @@
 Once the target kernel has been prepared, all Xenomai configuration
 options are available from the "Real-time subsystem" toplevel menu.

-Once configured, the kernel should be built as usual.
+There are several configure options that cause large latencies; they
+should be avoided. The TROUBLESHOOTING file identifies them and
+explains the issues with their use. Once configured, the kernel
+should be built as usual.

-It might be a good idea to put all the output into a different build
-directory as to build from from linux source several targets. For each
-target add O=../build- to each make invocation. See section 2.2
-for an example.
+If you want several different configs/builds at hand, you can reuse
+the same source by adding O=../build- to each make
+invocation. See section 2.2 for an example.

 In order to cross-compile the Linux kernel, pass an ARCH and
 CROSS_COMPILE variable on make command line. S
Re: [Xenomai-core] [patch] xeno-test email addition
Philippe Gerum wrote:
> Romain Lenglet wrote:
>> The -M option works, since I just received an email I'd sent earlier, but
>> I also sent one to xenomai-core, and it hasn't shown up yet. I suspect
>> that the mail looks like spam, and has been rejected, since my hostname is
>> not a real FQDN. So I'm not so sure that email is the best way here, but
>> it is conceptually simple. Oof. Now attached.
>
> Applied, thanks.

Doh - small err. if [ $sendit = 1 ] needs quotes around the var, for when it's
undef'd. Will patch soon.

I'm looking for other things to add, in part to prevent a changelog entry that
says "fixed a dumb tested-new-but-not-old, so bug got out" :-/

- change the prewired -m email-addy ? (to [EMAIL PROTECTED])
- add From: header to get past the subscriber-only check. Should this be a
  prewired addy (forex [EMAIL PROTECTED]) or the user's name (if so, which
  ENVAR should we use ? XENO_USER ?)
- verbose (send whole config atm, perhaps others later)
- run xeno-info, xeno-config
- grep more config-items out of config (for non-verbose mode):
  latency-killers presumably, PREEMPT, others ?
- NPTL availability (kinda overkill, since its absence when needed is already
  detected)

Can anyone think of other possibly useful raw data ? I think I've already got
everything that LiveCD collected, plus some.

thanks
jimc

PS. Is "Engines of Creation" another Joe Satriani album ?
PPS. the dude can *rip* a fretboard. In "real-time"
PPPS. I bet Philippe plays a mean air-guitar ;-)
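The quoting bug admitted above is worth a two-line sketch: with sendit unset,
the unquoted test expands to `[ = 1 ]`, a syntax error, while the quoted form
degrades gracefully:

```shell
#!/bin/sh
# With sendit unset/empty, [ $sendit = 1 ] expands to [ = 1 ] and errors
# out; quoting the variable keeps the test well-formed.
sendit=
if [ "$sendit" = 1 ]; then
    echo "mailing results"
else
    echo "not mailing"
fi
```

(`[ "${sendit:-0}" = 1 ]` would work too, and also survives `set -u`.)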
[Xenomai-core] a couple test failures on svn-current
This morning, after svn up, make, make install, I started getting this error:

  latency: failed to call RTBNCH_RTIOC_INTERM_RESULT, code -25

I fixed it by rebuilding the kernel, which pulled in the updated kernel bits,
and it worked. Is there any simple way to know a priori that a kernel-side
make is needed ? (I mean besides watching svn up for ksrc/* changes ;) No is a
fine answer.

Also, cyclictest is failing, not fixed by the kernel remake.

soekris:/usr/xenomai/testsuite/cyclic# ./run
* Type ^C to stop this application. *
3.38 2.95 1.83 5/41 3590
T: 0 (0) P: 0 I:1000 C: 0 Min: 100 Act: 0 Max:-100
pthread_setschedparam: Invalid argument (modprobe xeno_posix?)
soekris:/usr/xenomai/testsuite/cyclic#

xeno_posix is in, so it's something else.
[Xenomai-core] xeno-test updates [rfc patch]
I've started adding to xeno-test, as outlined previously (plus some):

- now run cyclictest and switch, in addition to latency -t 0,1,2
- changed the prewired -m email-addy to [EMAIL PROTECTED]; will change again
  when something is formalized. In the meantime, feel free to send results;
  I'll periodically bundle them, and/or just forward them automatically.
- added `cat /proc/xenomai/* /proc/xenomai/*/*` to boxstatus(), which is run
  before and after the latency tests. Note that /proc/ipipe/* is catted in
  boxinfo(), which is only run before, since the info doesn't change.
- moved cat /proc/meminfo to boxstatus, since it changes (a little bit)
- now grep more config-items out of config (for non-verbose mode):
  latency-killers, PREEMPT, others ?
- now fetch .config from /lib/modules/`uname -r`/build, unless
  /proc/config.gz is there.
- -v flag gets whole config (from either source)
- boxinfo now runs xeno-info, xeno-config
- added From: <[EMAIL PROTECTED]> to outgoing email. This probably isn't good
  enough, as mailservers generally check the real sender IP against the given
  domain-name, and reject fakes. Forex, LKML rejected my attempts to do this
  there. gmail.com accepted my test msgs, but perhaps cuz they know me from
  my pop access.
- patch also includes a tweak to prepare-kernel (chmod +w in patch_append). I
  added it cuz I like to use lndir to clone source trees (much smaller, and
  patch does the right thing - copies the file, then modifies the local copy;
  the only issue is that it preserves the readonly mode of the original
  (preventing inadvertent touches), which breaks the append). Not sure it's
  universally safe, but it works here.

Don't apply yet, not tested recently.

Qs:
- should I run boxstatus just after latency tests, or b4 and after (as
  currently) ?
- /proc/xenomai/* contents are dynamic (ie run by boxstatus) ?
- any bits of boxinfo and boxstatus that should be shuffled around ?
- mail only works if -N and/or -L are used. probably unnecessary limitation.
(im lazy) - check NPTL availability (kinda overkill, since its absence when needed is already detected) - anything else come to mind ? Index: scripts/prepare-kernel.sh === --- scripts/prepare-kernel.sh (revision 957) +++ scripts/prepare-kernel.sh (working copy) @@ -48,6 +48,7 @@ patch_append() { file="$1" if test "x$output_patch" = "x"; then + chmod +w "$linux_tree/$file" cat >> "$linux_tree/$file" else if test `check_filter $file` = "ok"; then Index: scripts/xeno-test.in === --- scripts/xeno-test.in(revision 957) +++ scripts/xeno-test.in(working copy) @@ -19,6 +19,7 @@ prepending allows you to give a full path. -m sends output file to [EMAIL PROTECTED] -Msends output file to given addy + -v verbose # following options are passed thru to latency, klatency -s print statistics of sampled data (default on) @@ -43,7 +44,7 @@ # run task after announcing it echo; date; echo running: $* -$* & +eval $* & # eval helps w complex cmds, like zegrep -E wait $! } @@ -77,20 +78,37 @@ unset dd_jobs; } -boxinfo() { -# static info, show once -loudly cat /proc/cpuinfo | egrep -v 'bug|wp' -loudly cat /proc/meminfo -[ -f /proc/config.gz ] && loudly zgrep XENO /proc/config.gz +boxinfo() { # static info, show once +loudly ./xeno-config -v +loudly ./xeno-info + +loudly cat /proc/cpuinfo # bogomips changes under CPU_FREQ + +# how much of the config do we want ? 
+local cmd="zgrep -E 'XENO|PREEMPT|CONFIG_ACPI|CONFIG_PM|CPU_FREQ'" +[ "$verbose" = 1 ] && cmd=cat + +if [ -f /proc/config.gz ]; then# get the config + loudly $cmd /proc/config.gz +elif [ -f /lib/modules/`uname -r`/build/.config ]; then + loudly $cmd /lib/modules/`uname -r`/build/.config +fi + [ -d /proc/adeos ] && for f in /proc/adeos/*; do loudly cat $f; done [ -d /proc/ipipe ] && for f in /proc/ipipe/*; do loudly cat $f; done } -boxstatus() { -# get dynamic status (bogomips, cpuMhz change with CPU_FREQ) +boxstatus() { # get dynamic status + loudly cat /proc/interrupts loudly cat /proc/loadavg +loudly cat /proc/meminfo + +if [ -d /proc/xenomai ]; then + for f in /proc/xenomai/*; do [ -f $f ] && loudly cat $f; done + for f in /proc/xenomai/*/*; do [ -f $f ] && loudly cat $f; done +fi [ -n "$prepost" ] && loudly $prepost loudly top -bn1c | head -n $(( 12 + $workload )) } @@ -105,12 +123,17 @@ boxstatus ( cd ../testsuite/latency - loudly ./run -- $opts -t0 loudly ./run -- $opts -t1 loudly ./run -- $opts -t2 +) +( cd ../testsuite/switch + loudly ./run -- '# switch' +) +( cd ../testsuite/cyclic + loudly ./run -- '# cycli
[Xenomai-core] Re: xeno-test updates [patch]
Jim Cromie wrote: Ive started adding to xeno-test, as outlined previously (plus some) , except where noted - now run cyclictest and switch, in addition to latency -t 0,1,2 cycletest now has decent options passed in. I havent given any thought to exposing options thru xeno-test's command line. Instead, Im thinking of adding statistics, ala latency. for that, Im also pondering a new -g 100 option to group the tests for stats-calcs, ie given: -g 100 -l 1000 -v it would compute statistics on 10 sets of 100 cycles, and report 10 lines. Again, this is notional, comments/feedback needed. - changed the prewired -m email-addy to [EMAIL PROTECTED] email options now work w/o actually writing a file. Also changed default location of file writes to /tmp, they no longer get written to $PWD by default added a -U , completely untested, but mostly lifted from LiveCD This looks necessary, since my hobby-box doesnt have a working mail setup my laptop (and presumably yours) doesnt have a FQDN, which pretty well precludes sending mail to anywhere useful. (Id bet we could span the unwashed winbloze masses, but wheres the sport in that ? ;-) - now grep more config-items out of config (for non-verbose mode) latency-killers, PREEMPT, others ? added items per RPMs email. Im considering stripping the warning issued when CPU_FREQ ia xonfig'd warning: CONFIG_CPU_FREQ=y may be problematic. I have it in cuz nothing actually changes (can change) it, so its harmless. (I think) Its easier than making the list complete, and the .config dump covers the reporting. Dont apply yet, not tested recently. Its reasonbly tested; we can shake out some more with some distributed testing (hint - try it !) Heres some tests I ran, files got written.. 
./xeno-test -T 30 -l30 -m ./xeno-test -T 30 -l30 ./xeno-test -T 30 -l30 -L ./xeno-test -T 30 -l30 -N foo ./xeno-test -T 30 -l30 -LN bar ./xeno-test -T 30 -l30 -LN buzz -m ./xeno-test -T 5 -l30 -L -m ./xeno-test -T 5 -l30 -N /tmp/box- -m ./xeno-test -T 5 -l30 -N ~/trucklab/ -w2 -W 'dd if=/dev/hda1 of=/dev/null' Qs - should I run boxstatus just after latency tests or b4 and after (as currently) ? - /proc/xenomai/* contents are dynamic (ie run by boxstatus) ? - any bits of boxinfo and boxstatus that should be shuffled around ? - check NPTL availability (kinda overkill, since its absence when needed is already detected) - anything else come to mind ? these are still open, but not crtical. I hope thats everything for now, it needs a good shakedown, and I need a beer. Index: scripts/prepare-kernel.sh === --- scripts/prepare-kernel.sh (revision 974) +++ scripts/prepare-kernel.sh (working copy) @@ -48,6 +48,7 @@ patch_append() { file="$1" if test "x$output_patch" = "x"; then + chmod +w "$linux_tree/$file" cat >> "$linux_tree/$file" else if test `check_filter $file` = "ok"; then Index: scripts/xeno-test.in === --- scripts/xeno-test.in(revision 974) +++ scripts/xeno-test.in(working copy) @@ -12,15 +12,18 @@ -W
Re: [Xenomai-core] Re: xeno-test updates [patch]
Philippe Gerum wrote:
> Jim Cromie wrote:
>> cycletest now has decent options passed in. I haven't given any thought to
>> exposing options thru xeno-test's command line. Instead, I'm thinking of
>> adding statistics, a la latency. For that, I'm also pondering a new -g 100
>> option to group the tests for stats-calcs, ie given: -g 100 -l 1000 -v, it
>> would compute statistics on 10 sets of 100 cycles, and report 10 lines.
>> Again, this is notional; comments/feedback needed.
>
> This would be mainly useful for running different test scenarios - i.e. one
> per cycle? - I guess. But then, would not we have problems interpreting the
> results, since different testcases might lead to unrelated data sets? IOW,
> how would we use such data sets?

Several observations led me to this idea.

- in normal mode, the prog rewrites the same display line over and over; it
  plays back oddly when you more/cat the file.
- with -v, it prints successive lines, but less info per line (no avg), which
  makes sense, since the avg is at the bottom.
- 1000 lines of output is a boat-load; each is individually uninteresting /
  almost the same as the others.

With latency, each line/second of the output contains the average of *many*
samples - 10,000 samples of 100 uS measures IIRC - and the inner min,max,avg
tell us about the high-frequency jitter etc in the processes. Then the
multiple samples tell us something about the low-freq jitter. IOW, we get a
glimpse into the ergodicity of the noise (I say that, pretending I
_understand_ ergodicity). Whether it applies / makes sense here, I'm not at
all sure.

>> I hope that's everything for now; it needs a good shakedown, and I need a
>> beer.
>
> Eh, I hope you had it by now, otherwise, you must be so damn thirsty... :o>

:-) Ya gotta try this - simple, but highly addictive. A sport we can *all*
play. http://www.wagenschenke.ch/site/homerun.htm
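The proposed -g grouping could be prototyped outside cyclictest entirely, as a
post-processing filter over its -v output. A sketch, assuming one latency
sample per line in microseconds (`group_stats` and the column layout are my
invention, not cyclictest's actual format):

```shell
#!/bin/sh
# group_stats N: read one latency sample per line, print min/avg/max
# for each group of N samples, mimicking the proposed "-g N" behavior.
group_stats() {
    awk -v g="$1" '
        { v = $1 + 0
          if (n == 0 || v < min) min = v
          if (n == 0 || v > max) max = v
          sum += v; n++
          if (n == g) {
              printf "min %d avg %.1f max %d\n", min, sum / n, max
              n = 0; sum = 0
          } }
        END { if (n) printf "min %d avg %.1f max %d\n", min, sum / n, max }'
}
printf '%s\n' 5 7 6 9 11 10 | group_stats 3
# -> min 5 avg 6.0 max 7
#    min 9 avg 10.0 max 11
```

So `-g 100 -l 1000` would collapse 1000 raw lines into 10 summary lines, each
carrying the inner min/avg/max the way latency's per-second lines do.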
Re: [Xenomai-core] Re: xeno-test updates [patch]
Jan Kiszka wrote: Jim Cromie wrote: Philippe Gerum wrote: Jim Cromie wrote: I hope thats everything for now, it needs a good shakedown, and I need a beer. Eh, I hope you had it by now, otherwise, you must be so damn thirsty... :o> :-) Ya gotta try this - simple, but highly addictive. A sport we can *all* play. http://www.wagenschenke.ch/site/homerun.htm Great thread, absolutely brilliant link! Just forwarded to our stag night crew for preparing the next weekend appropriately (we are going to rock Hamburg with the "victim"). :o) Jan I wanna party with you guys ! ___ Xenomai-core mailing list Xenomai-core@gna.org https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] xeno-test manpage patch
Romain Lenglet wrote: Added documentation for the new -U option, and did some cleanups. thanks I should have noted, this option isnt yet tested - I dont at this point have a server to test against. At some point I can -U , just need some tuits. (not that this changes anything - even if the script is broken, the feature is still workable) ___ Xenomai-core mailing list Xenomai-core@gna.org https://mail.gna.org/listinfo/xenomai-core
[Xenomai-core] patch - xeno-test: set useful default for latency runtimes
Hi - bump up the latency runtime default, -T 10, to something useful but not
too long: -T 120. Also inform the help-user of existing defaults, some of
which are wired into latency itself.

Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in	(revision 1005)
+++ scripts/xeno-test.in	(working copy)
@@ -24,12 +24,12 @@
 # following options are passed thru to latency
 	-s	print statistics of sampled data (default on)
-	-h	print histogram of sampled data (default on)
+	-h	print histogram of sampled data (default on, implies -s)
 	-q	quiet, dont print 1 sec sampled data (default on, off if !-T)
-	-T	(default: 10 sec, for demo purposes)
-	-l
-	-H
-	-B
+	-T	(default: 120 sec)
+	-l	(default 21)
+	-H	(default 100)
+	-B	(default 1000)
 EOF
 # NB: many defaults are coded in latency
 exit 1
@@ -120,7 +120,7 @@
 run_w_load() {
 	local opts="$*";
-	[ "$opts" = '' ] && opts='-q -s -T 10'
+	[ "$opts" = '' ] && opts='-q -s -T 120'
 	boxinfo
 	loudly generate_loads $workload
Re: [Xenomai-core] xeno-test manpage patch
Romain Lenglet wrote: Added documentation for the new -U option, and did some cleanups. FYI, a few *tiny* issues. -d eg: /dev/hda1 might be better than /dev/hda/, and drop trailing slash. -m, -M.log-file not required. -N path can be /absolute too, not just ../relative -p 'cmd' no longer run between latencies, just before & after pass-thru: -s -T 10 -qthe 10 is now 120. more subtly - your intro doesnt express the defaulting behavior that this section does describe. any pass-thru provided with turn off those defaults, so if you just use -h, the test will output lines each second, and never finish, so you dont get histogram (IIRC - you may get it from ^C handler). Im not at all sure its worth touching, and I should probably change the defaults to add histogram, and drop quiet. -h implies -s (its just easier that way) -- needed by testsuite/*/run scripts, not by xeno-test. I dont recall every using it. BUGS -N name is timestamped, giving uniqueness. This is a caveat, not a bug -p oops. Im open to suggestions whether this is worth fixing. (workload mgmt) workload tasks arent always (ever?) restarted once they finish, so a real /dev/hda1 workload may end before your test does, causing non-uniform & unexpected load variations. workloads arent always killed if test is interrupted. thanks, jimc ___ Xenomai-core mailing list Xenomai-core@gna.org https://mail.gna.org/listinfo/xenomai-core
Re: [Xenomai-core] xeno-test manpage patch
Romain Lenglet wrote:
>> BUGS
>> -N name is timestamped, giving uniqueness. This is a caveat, not a bug
>
> No, it is a feature. I added it into the documentation of -L and -N. By the
> way, I suggest that you use a timestamp in the RFC 3339 format (a subset of
> ISO 8601): date "+%Y-%m-%d %H:%M:%S%:z" or, assuming date is GNU date:
> date --rfc-3339=seconds

I wanted a compact rep. Is there a std format that doesn't have a space ?
It's nicer for us xterm double-click cut-pasters.

>> -p oops. I'm open to suggestions whether this is worth fixing. (workload
>> mgmt) workload tasks aren't always (ever?) restarted once they finish, so
>> a real /dev/hda1 workload may end before your test does, causing
>> non-uniform & unexpected load variations.

You used the default load in your explanation, which never ends (/dev/zero
supplies infinite 0s). One of these days I'll try to actually fix it.

>> workloads aren't always killed if test is interrupted.
>
> Added into the bugs section.
>
> Now, it is my turn to make some remarks on xeno-test... ;)
> 2- I don't know why there is a sort of option "-n" in the case:
>     n)  # accept note (from the outer process)
>         notes=$OPTARG ;;
> Those three lines should be removed: this is dead code.

Ack. Vestigial cruft. It was meant to pass info into the script, as a way to
show in the first lines how the test was run.
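On the compact-timestamp question above: ISO 8601 allows a 'T' separator
instead of the space, which keeps the stamp one double-clickable token. A
sketch (this exact format string is a suggestion, not what xeno-test uses):

```shell
#!/bin/sh
# ISO 8601 style stamp with 'T' separator and no colons - no spaces, so
# an xterm double-click selects the whole token, and it is safe in
# filenames. %z needs GNU or busybox date.
stamp=$(date +%Y-%m-%dT%H%M%S%z)
echo "test-$stamp"    # e.g. test-2006-05-07T181614-0400
```

Dropping the colons from the time part also sidesteps filesystems and tools
that dislike ':' in filenames.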
[Xenomai-core] more xenotest tweaks
Based in part on discussions and test-results from Niklaus Giger (thanks),
here's another round of tweaks.

* -p is changed to -P, which allows ..
* -p to be passed thru to latency, exposing that capability
* set -m default destination addr to <[EMAIL PROTECTED]>; you _may_ have to
  subscribe in order to successfully send your results. I didn't touch
  <[EMAIL PROTECTED]>, that's pending..
* -D allows the user to change the timestamp format used for files written
  when -L -N are given on the commandline. Forex:
    $ bash -x xeno-test.in -N ./junk -D \--iso-8601=seconds
  yields
    Script started, file is ./junk-2006-05-07T18:16:14-0400
  and
    bash -x xeno-test.in -N ./junk -D +%y%m%d.%S-foo
  yields
    junk-060507.28-foo
  The date-stamp echoed by the loudly() shell-fn is _not_ subject to this
  arg, as the added variability might make parsing the file harder. OTOH,
  existing locale differences (forex between myself and Niklaus) may preclude
  close parsing anyway. Further tweaks here tbd.
* dropped the check for "warning: CONFIG_CPU_FREQ=$CONFIG_CPU_FREQ may be
  problematic", because the check was very incomplete, and the config info is
  available for proper analysis
* XENOTEST_OPTS envar is now read b4 the commandline, so you can set your
  favorites there, then override them on the cmdline.
* added -s -h to default latency options, dropped -q; if one is gonna bother
  to send the output, we want the data itself :-}
* added 2>&1 into the file-less -mailing branch. Niklaus' testruns exposed
  the lack of output.
I'm considering some juggling here, to use 'script' if available, something like:
script -c "./xeno-test $loadpass $pass $*" | sendit
This has the advantage of capturing the invocation in a single line at the top of the file, i.e. the 'starting' line:
Script started on Sam 06 Mai 2006 16:26:44 CEST
creating workload using dd if=/dev/hda9
starting ./xeno-test -d /dev/hda9
This patch hasn't been properly tested (my test box is currently busy), but I did run bash -vx xeno-test.in just to verify that it was OK to segregate the option-handling code into handle_options, then call it twice from two while loops (first for the envvar, second for the cmdline), and some of the other tweaks too.
Index: scripts/xeno-test.in === --- scripts/xeno-test.in(revision 1029) +++ scripts/xeno-test.in(working copy) @@ -12,7 +12,7 @@ -W
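A minimal sketch of the two-pass option handling described above (the option names here are examples, not the real xeno-test set): parse the envvar first, then let the command line override.

```shell
#!/bin/sh
# Defaults, overridden first by $XENOTEST_OPTS, then by the command line.
duration=60
verbose=0

parse() {
    OPTIND=1                  # reset so getopts can run more than once
    while getopts "T:v" opt; do
        case "$opt" in
            T) duration=$OPTARG ;;
            v) verbose=1 ;;
        esac
    done
}

parse $XENOTEST_OPTS          # pass 1: env (unquoted on purpose: word-split)
parse "$@"                    # pass 2: command line wins
echo "duration=$duration verbose=$verbose"
```

With XENOTEST_OPTS='-T 120' and a command line of -T 300 -v, duration ends up 300: the later pass overwrites the earlier one, which is the whole trick.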
[Xenomai-core] kconfig help questions
Going through the xenomai Kconfig again, some observations/uncertainties came up. I'll make a patch, given feedback.

XENO_OPT_TIMING_PERIODIC "Aperiodic mode provides for non-constant delays between timer ticks," - the wording here (non-constant delays) left me momentarily wondering if _APERIODIC was bad (this, despite the use of 'provides'). Maybe "sub-tick delays" makes some sense ..

XENO_OPT_SHIRQ_LEVEL Are there any decent estimates or examples of the latency / jitter increases if this is enabled? Is it highly cpu- or chipset-dependent? I presume /proc/interrupts is the place to look to see if you might need it, if any of the devices of interest are shared. My laptop shows sharing:
3: 5 XT-PIC ehci_hcd:usb1, ohci1394
4: 0 XT-PIC uhci_hcd:usb6
7: 708975 XT-PIC ipw2200, ehci_hcd:usb2, uhci_hcd:usb7
8: 1 XT-PIC rtc
9: 437509 XT-PIC acpi, Intel 82801DB-ICH4, Intel 82801DB-ICH4 Modem, ohci_hcd:usb3, ohci_hcd:usb4, uhci_hcd:usb5, yenta, eth0
while my soekris does not:
# cat /proc/interrupts
CPU0
0: 13148086 XT-PIC timer
2: 0 XT-PIC cascade
4: 33084 XT-PIC serial
8: 4 XT-PIC rtc
10: 337134 XT-PIC eth0
11: 926639 XT-PIC ndiswrapper
14: 11 XT-PIC ide0
NMI: 0

XENO_OPT_SHIRQ_EDGE does this only apply to the ISA bus?

XENO_SKIN_NATIVE will add module name

XENO_OPT_NATIVE_INTR "Note that the preferred way of implementing generic drivers usable across all Xenomai interfaces is defined by the Real-Time Driver Model (RTDM)." It doesn't say they're (in)compatible; should it?

SUMMARY I'd like to see estimates of the latency costs associated with each choice, where it's practical to do so (given the current unknowables, like variations across cpus, chipsets, etc). I recognize that such numbers are the hoped-for end result of the xenomai-data ML, and perhaps nothing real can be said yet (or ever) in Kconfig, except in (overly?) broad statements.
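The /proc/interrupts check above can be automated: a line listing more than one comma-separated handler is a shared IRQ. A sketch against a snapshot (sample lines taken from the laptop listing above):

```shell
# Lines with multiple handlers (comma-separated) indicate IRQ sharing.
shared=$(grep ', ' <<'EOF'
  3:       5   XT-PIC  ehci_hcd:usb1, ohci1394
  4:       0   XT-PIC  uhci_hcd:usb6
  8:       1   XT-PIC  rtc
EOF
)
echo "$shared"
```

On a live box, `grep ', ' /proc/interrupts` does the same thing.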
[Xenomai-core] [patch] prepare-kernel shouldn't alter files through links
attached patch corrects a mistake in rev 985, which chmod'd a read-only file even if it was a symlink from a kernel tree cloned with lndir. This resulted in a bad original tree for use in building vanilla kernels. With the patch, the script renames the symlink, copies it to the expected name, and *then* chmods it and appends to it. Now vanilla kernel builds using the same tree re-make without actually recompiling anything.
Index: scripts/prepare-kernel.sh === --- scripts/prepare-kernel.sh (revision 1029) +++ scripts/prepare-kernel.sh (working copy) @@ -48,6 +48,10 @@ patch_append() { file="$1" if test "x$output_patch" = "x"; then + if test -L "$linux_tree/$file" ; then + mv "$linux_tree/$file" "$linux_tree/$file.orig" + cp "$linux_tree/$file.orig" "$linux_tree/$file" + fi chmod +w "$linux_tree/$file" cat >> "$linux_tree/$file" else
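The fix can be exercised standalone. Here's a sketch (the function name is mine, not the script's) of the rename-copy-append dance the patch adds:

```shell
# If the target is a symlink (e.g. from an lndir-cloned tree), break the
# link first so the append never writes through it to the pristine file.
append_safely() {
    file=$1
    if test -L "$file"; then
        mv "$file" "$file.orig"    # keep the symlink under a new name
        cp "$file.orig" "$file"    # cp follows it: real file, same content
    fi
    chmod +w "$file"
    cat >> "$file"                 # append stdin, as patch_append does
}
```

After `echo extra | append_safely link`, the link's original target is untouched and `link` is a plain writable file with the appended text.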
Re: [Xenomai-core] [rfc] unit testing context switches.
Gilles Chanteperdrix wrote: Now that the big context-switch bugs have been solved, here is a patch that adds a unit test for context switches and FPU switches with various types of threads (kernel, user, user in secondary mode, not using FPU, using FPU, etc...). As with the latency test, there is a small RTDM driver in kernel space, put in the benchmark class, even though this test is for unit testing, not for benchmarking. The FPU switches need a small piece of architecture-dependent code, currently only implemented for x86. The kernel-space driver is called xeno_switchtest.ko; the user-space testing tool is called switchtest, because there is already a context-switch benchmarking tool called "switch".

Does this maybe warrant a rename of both, to preclude the inevitable 'what's the difference between' questions (sent or unsent)?
Re: [Xenomai-core] Porting xeno-{info|load|test} to a busybox system
Niklaus Giger wrote: On Friday, 26 May 2006 at 15:52, Jan Kiszka wrote: Niklaus Giger wrote:... If anybody has a working target with a Xenomai + BusyBox combination and would be willing to test drive my changes, I would appreciate feedback enormously.

I hope this isn't waiting on my 'approval'. I think it's a great idea, and it has been on my (way too stagnant) list for a while. Your work has at least urged me to install busybox on my xeno-box. ;-) My only concern is whether we've sufficiently distinguished the issues:

1 - ash vs bash. It's not entirely clear to me which flavors of sh busybox has: ash / dash / not-bash. I gather you worked with ash, and it seems most valuable sh features work there just fine (shell functions, even 'job control' of a fashion).

2 - busybox 'executables' only. I coded in a lot of 'full linux' gimmes, like zgrep, script, etc. Niklaus has repaired many of these. I think a more thorough cleanup is possible, especially if things like 'script' are jettisoned for simpler shell functions or helper scripts.

This all implies a rewrite, which is on my list... (especially the job-control testing and repair)
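One way to jettison 'script', as suggested above: a plain POSIX-sh logger built only from BusyBox applets (echo, tee). The function name and log format are mine, just a sketch:

```shell
# Run a command, record the exact invocation on the log's first line,
# and capture stdout+stderr - roughly what 'script -c' gave xeno-test.
run_logged() {
    log=$1; shift
    echo "starting: $*" > "$log"
    "$@" 2>&1 | tee -a "$log"
}
```

Usage: `run_logged /tmp/run.log ./xeno-test -T 120`.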
Re: [Xenomai-core] ns vs. tsc as internal timer base
Philippe Gerum wrote: Gilles Chanteperdrix wrote: Philippe Gerum wrote: > Redone the check here on a Centrino 1.6GHz, and still have roughly x20 > improvement (a bit better actually). I'm using Debian/sarge gcc 3.3.5. I think I remember that the Pentium M has a much shorter mull instruction than other processors of the family. That would explain it. Anyway, as John Stultz put it: "math is hard, let's go shopping!"

Heh. Appropriate that his name (Stultz) comes up here, as his generic-time (GTOD) patchset looks headed for 2.6.18, bringing with it a full reworking of linux timers / timeofday. In this new world, time is kept on free-running counters. I've been running this patchset on my soekris for some time; GTOD detects that the TSC counts slowly, calls it insane, and does timing with the PIT. With GTOD, writing a new clocksource driver is easy, enough so that I could do it. My clocksource patch uses the 27 MHz timer on the Geode CPU. Once the TSC is de-rated, mine becomes the best clocksource, and GTOD switches to it.

All of which is to say .. new mainline code is coming; should this current rework notion wait, given that it will all need revisiting again later?
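The 'shorter mull' remark is about exactly the multiply GTOD leans on: clocksources precompute a mult/shift pair so cycles convert to nanoseconds with one multiply and one shift, no division. A back-of-envelope sketch for the 27 MHz Geode timer mentioned above (shift_bits=22 is my arbitrary choice, not taken from the actual patch; needs 64-bit shell arithmetic, which bash has):

```shell
# ns = (cycles * mult) >> shift, with mult = (1e9 << shift) / freq.
freq=27000000
shift_bits=22
mult=$(( (1000000000 * (1 << shift_bits)) / freq ))
cycles=$freq     # one second's worth of timer ticks
ns=$(( (cycles * mult) >> shift_bits ))
echo "mult=$mult ns=$ns"
```

Because mult is rounded down, one second of ticks converts to a few ns under 1000000000; the real kernel code picks mult/shift to keep that error acceptable over the counter's wrap interval.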
Re: [Xenomai-core] Porting xeno-{info|load|test} to a busybox system
Philippe Gerum wrote: Jan Kiszka wrote: Jim Cromie wrote: Niklaus Giger wrote: On Friday, 26 May 2006 at 15:52, Jan Kiszka wrote: Niklaus Giger wrote:... If anybody has a working target with a Xenomai + BusyBox combination and would be willing to test drive my changes, I would appreciate feedback enormously. I hope this isn't waiting on my 'approval'. I think it's a great idea, and it has been on my (way too stagnant) list for a while. Your work has at least urged me to install busybox on my xeno-box. ;-) My only concern is whether we've sufficiently distinguished the issues: 1 - ash vs bash. It's not entirely clear to me which flavors of sh busybox has: ash / dash / not-bash. I gather you worked with ash, and it seems most valuable sh features work there just fine (shell functions, even 'job control' of a fashion). 2 - busybox 'executables' only. I coded in a lot of 'full linux' gimmes, like zgrep, script, etc. Niklaus has repaired many of these. I think a more thorough cleanup is possible, especially if things like 'script' are jettisoned for simpler shell functions or helper scripts. This all implies a rewrite, which is on my list... (especially the job-control testing and repair)

Just stumbled over this again while cleaning up my mailbox. What's the status? Waiting for improvements, or waiting for /someone/ to type svn ci (and improve the topics above later)? It's queued for now, waiting for a combined ack to merge the current patch from JimC and Niklaus.

AFAIC, Niklaus is in the lead atm. I'm trying to get some GPIO stuff ready for -mm (I'll post separately on this..). I ran his changes once; I don't even remember what it did (which suggests that it didn't explode ;-). IMO, take it when Niklaus says it's ready. I have some local changes here, but I'll work them into shape after Nik's changes go in (maybe much later :-( We should probably confer on the longer-term issues too. - a rational option-pass-thru, or a means to avoid doing so.
If we assume OPTS_${TOOLNAME} exists, we could grab it out of the env and pass it into the benchmark prog.
- would require no prog mods, but gives us complete control
- would play nicer than assuming -T has meaning for all progs.
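The OPTS_${TOOLNAME} idea above might look like this (the function and variable names are mine, purely illustrative):

```shell
# Look up per-tool extra options in the environment (OPTS_LATENCY,
# OPTS_SWITCHTEST, ...) and splice them into the invocation.
run_bench() {
    tool=$1; shift
    var=$(echo "OPTS_$tool" | tr 'a-z' 'A-Z')
    eval "extra=\$$var"
    echo "would run: $tool $extra $*"
}
```

With OPTS_LATENCY='-T 120' in the environment, `run_bench latency -h` reports `would run: latency -T 120 -h` - the env supplies defaults without any change to the benchmark programs themselves.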
[Xenomai-core] GPIO and RTDM
hi Jan, everyone, I've worked up a patchset to add a GPIO driver for the chip on my mobo. I adapted an existing one, drivers/char/scx200_gpio, and created drivers/char/pc8736x_gpio. When doing this, I _oversimplified_ my problem by disregarding RTDM, and I'm hoping I can just _retrofit_ as needed.

The chip is on an ISA bus; a user-space C program can read the pins at (this) rate:
Wed Jun 14 13:24:13 MDT 2006
Linux soekris 2.6.17-rc6-gpio-sk #4 Sun Jun 11 20:43:10 MDT 2006 i586 GNU/Linux
opened /dev/gpio-17, for 1 loops, 100 samples
read 100 samples in 7.8434 sec, rate: 127494.9460 samples/sec
opened /dev/led, for 1 loops, 100 samples
read 100 samples in 5.4116 sec, rate: 184788.5056 samples/sec
(obviously speed isn't latency, but there's some correlation ..)

I don't actually have a Real Question, so I'll throw out a placeholder - What are the top 3-5 things to do or look at in order to check the compatibility of my patches with RTDM?

Separately.. In this GPIO work, I concluded that I needed to add a sysfs interface to my driver, in order to better fit with LKML expectations. What I did so far works, and seems to hang together coherently, but insofar as it is the first time (to my knowledge) that a uniform treatment has been tried, I might have painted myself / all of us into a corner. Hopefully not, but you folks have a keener perception of these things. I'll send shortly. tia jimc
[Xenomai-core] Re: GPIO and RTDM
hi Jan, everyone, Separately.. In this GPIO work, I concluded that I needed to add a sysfs interface to my driver, in order to better fit with LKML expectations.

Sysfs GPIO Representation of Hardware (v0.2) (v0.1 went to the lm-sensors ML, v0.2 to the kernelnewbies ML)

We need a standard rep for GPIO in sysfs, so here's a strawman. Strike a match, let's have a campfire! Essentially, this seeks to describe the directory of 'device-attribute-files' that are populated by a driver, for example:
soekris:/sys/bus/platform/devices/pc8736x_gpio.0# ls
bit_0.0_debounced bit_1.2_totem bit_2.5_pullup_enabled bit_0.0_locked bit_1.3_debounced bit_2.5_totem bit_0.0_output_enabled bit_1.3_locked bit_2.6_debounced bit_0.0_pullup_enabled bit_1.3_output_enabled bit_2.6_locked bit_0.0_totem bit_1.3_pullup_enabled bit_2.6_output_enabled bit_0.1_debounced bit_1.3_totem bit_2.6_pullup_enabled bit_0.1_locked bit_1.4_debounced bit_2.6_totem bit_0.1_output_enabled bit_1.4_locked bit_2.7_debounced bit_0.1_pullup_enabled bit_1.4_output_enabled bit_2.7_locked bit_0.1_totem bit_1.4_pullup_enabled bit_2.7_output_enabled bit_0.2_debounced bit_1.4_totem bit_2.7_pullup_enabled bit_0.2_locked bit_1.5_debounced bit_2.7_totem
(I've now seen *1.5* GPIO architectures, so please test this writeup mentally against your GPIO experience.)

Basic Naming Convention. I haven't seen this stated anywhere at an 'all-of-sysfs' level, and I think it's true/valid (and so test this here - CMIIW). If I'm correct, please suggest the optimal Doc/* file to contain this info. All device-attr-files are named as <prefix>_<id>_<suffix>; in LM-sensors:
- prefix: the sensor type: in (volts), temp, fan, etc..
- id: usually a single integer
- suffix: the sensor attribute in question.

GPIO Prefix Names. Basically, GPIO hardware design appears to have 2 top-level factors: pin features, and pin-to-port grouping. These get mapped into filename prefixes & suffixes. All GPIOs (I've seen) are organized as 1-4 ports of 8-32 bits.
The bits' attributes are addressable individually, but also accessible as a group via the port_* files. If you change a bit attribute, that change will also show up in the port attr. IOW, we have bit_*, port_*. They are interconnected at the hardware level, and (I think) there is no need for interlocks between the sysfs handlers for bit_ and port_ (except for shadow regs, but I digress).

GPIO Architectures. GPIO pins have lots of hardware / architectural / naming-convention variations, which makes this harder. Drivers should create sysfs 'files' only for attributes that are pertinent to the hardware being driven. This way, the absence or presence of files communicates functionality, as does their read-only-ness. (These 'behaviors' may be different than lm-sensors.) IOW:
- if a pin is input only, it shouldn't have an _output_enabled attr.
- if a pin is output only, it shouldn't have an _output_enabled attr.
The reason for the 2nd rule: the presence of _output_enabled suggests that it can be changed. OTOH, a read-only _output_enabled would yield the same, but not as visibly (ls vs ls -l). So, I'm somewhat ambivalent here, looking for input.

User Space. Following LM-sensors' approach, a user-side library would add the niceties:
- provide any equivalences needed by users, i.e. bit_x_tristate = ! bit_x_output_enabled.
- sub-port allocation and management; support for 3+3+2-bit sub-ports on an 8-bit port would be nice. I suspect that a sophisticated programmer would be able to add a sub-port allocation facility within the driver. I cannot.

GPIO Pin Features. As alluded to, pin features are represented as a _<suffix>. First, there are several alternative naming schemes:
- name-as-verb: _output_enable (conveys an 'action')
- name-as-state: _output_enabled (conveys a 'current state')
- feature-name: _output (a knob to turn)
- feature+state: _output+(currval) (currval in the name is a bad idea)
1 and 2 are quite close. I've done 2.
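The bit_<port>.<bit>_<attr> file names used above are easy to take apart mechanically; a sketch in sh (the function name is mine):

```shell
# Split e.g. "bit_2.6_output_enabled" into its port / bit / attribute parts,
# using only POSIX parameter expansion (BusyBox-safe, no external tools).
parse_attr() {
    name=$1
    port=${name#bit_};  port=${port%%.*}
    rest=${name#bit_*.}
    bit=${rest%%_*}
    attr=${rest#*_}
    echo "port=$port bit=$bit attr=$attr"
}
```

For example, `parse_attr bit_2.6_output_enabled` prints `port=2 bit=6 attr=output_enabled`, which a user-side library could use to group files per pin.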
FWIW, here are the pin attributes of my GPIOs, as expressed in the syslog by the legacy drivers:
[15510.384000] pc8736x_gpio.0: io16: 0x0004 TS OD PUE EDGE LO io:0/1
[15510.564000] pc8736x_gpio.0: io17: 0x0004 TS OD PUE EDGE LO io:1/1
[15510.744000] pc8736x_gpio.0: io18: 0x0004 TS OD PUE EDGE LO io:1/1
[15510.928000] pc8736x_gpio.0: io19: 0x0004 TS OD PUE EDGE LO io:1/1
# whether output-drive is on/off
_output_enable # 1 or 0,
_tristate # ! _output_enable, logically linked.
Now, there's no need to have both of these; if there were, they would have to be intrinsically linked (logically opposite values). IOW, drivers should name the file as one of the possible states of the feature, whichever best describes it, and not expose it twice. To the extent that we need support
[Xenomai-core] Re: GPIO and RTDM
Jan Kiszka wrote: Jim Cromie wrote: hi Jan, everyone, I've worked up a patchset to add a GPIO driver for the chip on my mobo. I adapted an existing one, drivers/char/scx200_gpio, and created drivers/char/pc8736x_gpio. When doing this, I _oversimplified_ my problem by disregarding RTDM, and I'm hoping I can just _retrofit_ as needed.

From a short glance at scx200_gpio: the only minor difference between registering and handling a Linux GPIO char-device and doing the same under RTDM will be the different naming. RTDM has no direct support for major/minor identification; it uses clear-text names for its devices. So you would have to create the device names on your own. Well, and some locking might be required (full preemptibility!), but this seems to apply to the Linux driver as well under certain kernel configs. But I wonder if it is clever for GPIO devices with a significant number of I/O lines to create a device node for each and every bit! Consider the usage scenario where you want to talk to some n-bit bus using GPIO lines. Would you like to open n devices and issue n writes just to put some n-bit value on that bus?

That same question had occurred to me. There are likely apps which can already do their own bit-masking, so they don't need kernel support for single bits. It would be trivial to allow either/both ports & bits via a modparam. The bit-centrism is also a legacy of the device-file interface - the vintage driver has *no* port-access support, unlike at least one out-of-tree driver (which has a /proc iface).

At this chance: Did you have a look at the comedi interface as well? It typically covers far more complex data-acquisition devices, but it should also be usable for simple digital I/O interfaces. Moreover, comedi has been available for Linux for quite a while, and an RTDM port is on the way.

I did look briefly; its device model felt more complex than I needed. A re-review is in order, now that I comprehend more than when I saw it last.
If comedi means too much overhead for trivial I/O line manipulation, I would welcome any suggestion for a generic GPIO device profile - both mappable on RTDM and normal Linux character devices!

The chip is on an ISA bus; a user-space C program can read the pins at (this) rate:
Wed Jun 14 13:24:13 MDT 2006
Linux soekris 2.6.17-rc6-gpio-sk #4 Sun Jun 11 20:43:10 MDT 2006 i586 GNU/Linux
opened /dev/gpio-17, for 1 loops, 100 samples
read 100 samples in 7.8434 sec, rate: 127494.9460 samples/sec
opened /dev/led, for 1 loops, 100 samples
read 100 samples in 5.4116 sec, rate: 184788.5056 samples/sec
(obviously speed isn't latency, but there's some correlation ..) I don't actually have a Real Question, so I'll throw out a placeholder - What are the top 3-5 things to do or look at in order to check the compatibility of my patches with RTDM? Separately.. In this GPIO work, I concluded that I needed to add a sysfs interface to my driver, in order to better fit with LKML expectations.

Err, sorry for not seeing this immediately even after (cross-)reading your second mail, but what will the sysfs interface be for? Heh - I'm sure it's not you, but the un-clarity.. Information, configuration? Config - output_enable / tristate, pullup/no, totem-pole/open-drain, etc. Both reading & writing, for bits and for ports. Current pin values, RW. Basically, I'm imitating the way things are done by LM-sensors, where raw sensors are exposed via sysfs to a user lib, which then does the (floating-point) conversion to units that are more meaningful to users. Do you see concrete usage scenarios for this? WRT the sysfs interface itself, no, not per se. The device-file interface works fine, and is basically interchangeable. The whole exercise was to anticipate whatever push-back might come from LKML.
(I sent the patch-set there today, so we'll see.) My 'concrete' app/hobby is to strap a cheap 'embedded PC' to an RC car (radio remote control), have it read the PWM signals from the in-car receiver, and duplicate them to a pair of pins wired to the servos. Once that works, the computer can train itself to the course driven by the remote-control user, then try to repeat the maneuver. In order to 'read' the input, I *need* to use interrupts - polling the pins is ridiculous if the computer is to do anything else. Further, to 'follow' the input, I'll have to invert the triggering edge in the handler, so it can see both rising and falling edges of the 1-3 ms pulse, on a 20 ms cycle. (This could be handled by a $50 servo controller, but where's the fun in that!) Even if following the inputs is too hard, xenomai should be able to generate these signals. Presuming 256 discrete pulse widths in the 1-3 ms range, each unit-time diff is ~8 us. On my board, this is less than the peak latency jitter, and approximately equal to the RMS jitter. It will be interesting to see if the servo
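A quick check of the arithmetic above - 256 steps across the 2 ms usable pulse range:

```shell
# (3 ms - 1 ms) = 2 ms of usable range, quantized into 256 pulse widths.
range_ns=$(( 2 * 1000000 ))
step_ns=$(( range_ns / 256 ))
echo "step = ${step_ns} ns"   # ~7.8 us per step, the '~8 us' above
```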
Re: [Xenomai-core] expected output and runtime of switchtest ?
Jan Kiszka wrote: Jim Cromie wrote:
soekris:/usr/xenomai/testsuite/switchtest# modprobe xeno_switchtest
[ 160.221018] Xenomai: starting RTDM services.
soekris:/usr/xenomai/testsuite/switchtest# switch -n
soekris:/usr/xenomai/testsuite/switchtest# switch
rtk0 cpu 0: 498 cpu 0: 998 cpu 0: 1498 cpu 0: 2000 cpu 0: 2496 cpu 0: 2998 cpu 0: 3498 cpu 0: 3998 cpu 0: 4500 ...
This prog has been running for at least half an hour, with minimum args.. what should it be doing?

This is a regression test. You should see an error message or a system crash if something goes wrong. The output above looks OK (number of passed loops, a kind of lifesign).

OK, thanks. FWIW, I noted that xeno-test is not running these: switchbench, switchtest, irqbench. I'm not sure they belong in xeno-test though, since they don't appear to produce output that shows good vs bad performance, only an informal 'sanity' check. And technically, don't regression tests have to yield a PASS/FAIL decision? ;-)

Speaking more broadly, there are 3 possible kinds of test progs:
- regression tests: PASS/FAIL - either by exit(rc), or by printf("%s\n", rc ? "not-ok" : "ok"); see perl's regression test suite (100k separate tests); these usually test details, and are not tutorial.
- performance tests: progs that stress xenomai and print performance data; should help see performance problems and expose bugs. xeno-test aims to collect performance data, and performance data must be expressive (switchtest is perhaps insufficient here).
- examples / tutorials: ex: satch.c - simple, clear progs (low feature clutter, etc). I'd like to see all demo/**/ progs in a single dir, for example satch-native, satch-vxworks, etc.. makes for easier browsing; a simple makefile builds out-of-tree and handles kernel modules and user progs. (I've seen some clean ones, can't find them now.
Mine are crufty :-(
- 'patterns' of usage: it would be great if we had common usage patterns isolated, named, and described.

Towards this last item, I've done 2 things:
- poached code from Hannes Mayer :-) http://www.captain.at/xenomai.php - task-timers.c does a periodic timer 3 ways: sleeper, waiter, alarm.
- scrounged old rtai/fusion code (ls -l says Jul 05 ;-), cleaned it up; half compile now. Maybe there's examples-tutorials-patterns fodder in here.

The attached tarball has these in 2 top-level dirs. I'd like to see if there's a place for them long-term, and clean them up so they're correct and helpful. Jan thanks -jimc
xeno-examples-tuts.tgz
[Xenomai-core] [patch] update email addr
I'm trying to inline this patch; please let me know if it's still whitespace-fouled. (In Thunderbird: copy from svn diff | less, set preformat before pasting.)
Index: CREDITS
===
--- CREDITS (revision 1412)
+++ CREDITS (working copy)
@@ -43,7 +43,7 @@
 D: the map.
 N: Jim Cromie
-E: [EMAIL PROTECTED]
+E: [EMAIL PROTECTED]
 D: Comprehensive statistics collection for the testsuite.
 D: Validation test script. Various script fixes and sanitization.
[Xenomai-core] [patch-trivial] fix a couple of spelling errors
[EMAIL PROTECTED] xenomai]$ diff -u ./ksrc/arch/i386/Kconfig{~,}
--- ./ksrc/arch/i386/Kconfig~ 2006-03-23 19:06:35.0 -0700
+++ ./ksrc/arch/i386/Kconfig 2006-08-10 19:12:39.0 -0600
@@ -67,7 +67,7 @@
 and hence may not be altered.
 For this reason, Xenomai contains code to detect chipsets using
-SMIs and optionnaly activate some workarounds to stop SMIs.
+SMIs and optionally activate some workarounds to stop SMIs.
 Enabling this option will cause Xenomai not to try and detect whether
 your hardware use SMIs. This option is mostly useful if you know
@@ -98,7 +98,7 @@
 and hence may not be altered.
 For this reason, Xenomai contains code to detect chipsets using
-SMIs and optionnaly activate some workarounds to stop SMIs.
+SMIs and optionally activate some workarounds to stop SMIs.
 Enabling this option cause those workarounds to be activated.
 if XENO_HW_SMI_WORKAROUND
[Xenomai-core] [patches] 1-bundle-email-spelling-help 2-rfc-add-newer-bench-tests
Jan Kiszka wrote: Jim Cromie wrote: I'm trying to inline this patch; please let me know if it's still whitespace-fouled. (In Thunderbird: copy from svn diff | less, set preformat before pasting.) Index: CREDITS === --- CREDITS (revision 1412) +++ CREDITS (working copy) @@ -43,7 +43,7 @@ D: the map. N: Jim Cromie -E: [EMAIL PROTECTED] +E: [EMAIL PROTECTED] D: Comprehensive statistics collection for the testsuite. D: Validation test script. Various script fixes and sanitization.

Obviously damaged, at least here on the list (leading single whitespace missing in unmodified lines). This also applies to your second patch. The required steps with Thunderbird are: 1. start a new mail or reply as HTML (bah!), 2. set the text style of everything to preformat, 3. switch back to "Plain Text Only" under Options/Format. Hope somebody will once hack a plugin for disabling line wrapping on demand without this dance...

Hrmm, maybe the copy source is the factor - I've had luck my way (which sounds like yours) when copying the diff from an emacs window. No matter ("machts nichts" - German for "it doesn't matter" - or, after americanization, "MoxNix"). Attachments are easier anyway, and RPM seems not to care either way.

So, 2 patches:
1 - bundle: email, Kconfig spelling, switchbench printing in us, not ns. Issues:
- switchbench segfaults (IIRC) for me - both before and after the patch, so it's not tested
- and the printf format is purposely not tweaked, so as to give someone else the itch ;-)
2 - add newer benchmark tests to xeno-test (RFC): irqbench (all 4 ways), switchbench (segfaults), switchtest. By the way, could we get renames? switch{,bench,test}/switch.c is just unnecessarily confusing! Issues:
- are the tests ready to add? perhaps, if optional?
- config dependence ignored; we have no guarantee of .config availability (or do we?)
- XENOT_* for tool(*) specific option setting from env warrants better prefix, but XENOTEST_TOPTS_* seemed too much thanks -jimc Index: src/testsuite/switchbench/switch.c === --- src/testsuite/switchbench/switch.c (revision 1416) +++ src/testsuite/switchbench/switch.c (working copy) @@ -38,8 +38,9 @@ static inline void add_histogram(long addval) { - long inabs = rt_timer_tsc2ns(addval >= 0 ? addval : -addval) / 1000; /* usec steps */ - histogram[inabs < HISTOGRAM_CELLS ? inabs : HISTOGRAM_CELLS - 1]++; + /* usec steps */ + long inabs = rt_timer_tsc2ns(addval >= 0 ? addval : -addval) / 1000; + histogram[inabs < HISTOGRAM_CELLS ? inabs : HISTOGRAM_CELLS - 1]++; } void dump_histogram(void) @@ -134,10 +135,10 @@ printf("RTH|%12s|%12s|%12s|%12s\n", "lat min", "lat avg", "lat max", "lost"); - printf("RTD|%12Ld|%12Ld|%12Ld|%12lld\n", - rt_timer_tsc2ns(minjitter), - rt_timer_tsc2ns(avgjitter), - rt_timer_tsc2ns(maxjitter), lost); + printf("RTD|%12.3f|%12.3f|%12.3f|%12lld\n", + rt_timer_tsc2ns(minjitter) / 1000, + rt_timer_tsc2ns(avgjitter) / 1000, + rt_timer_tsc2ns(maxjitter) / 1000, lost); if (do_histogram) dump_histogram(); Index: CREDITS === --- CREDITS (revision 1416) +++ CREDITS (working copy) @@ -43,7 +43,7 @@ D: the map. N: Jim Cromie -E: [EMAIL PROTECTED] +E: [EMAIL PROTECTED] D: Comprehensive statistics collection for the testsuite. D: Validation test script. Various script fixes and sanitization. Index: ksrc/arch/i386/Kconfig === --- ksrc/arch/i386/Kconfig (revision 1416) +++ ksrc/arch/i386/Kconfig (working copy) @@ -67,13 +67,13 @@ and hence may not be altered. For this reason, Xenomai contains code to detect chipsets using -SMIs and optionnaly activate some workarounds to stop SMIs. +SMIs and optionally activate some workarounds to stop SMIs. -Enabling this option will cause Xenomai not to try and detect whether +Enabling this option prevents Xenomai from detecting whether your hardware use SMIs. 
This option is mostly useful if you know that your system does not use SMIs and really want to size Xenomai modules down. The detection code has no run-time space overhead, -only disk-space overhead. +and a tiny memory footprint (<200 bytes on x86) config XENO_HW_SMI_DETECT bool @@ -98,7 +98,7 @@ and hence may not be altered. For this reason, Xenomai contains code to detect chipsets using -SMIs and optionnaly activate some workarounds to stop SMIs. +SMIs a
[Xenomai-core] Re: Test, benchmark, demo frameworks
Jan Kiszka wrote: Hi all, Jim raised these issues nicely to a generic level. I would like to pick it up and add some thoughts. Jim Cromie wrote: ... FWIW, I noted that xeno-test is not running these: switchbench, switchtest, irqbench. I'm not sure they belong in xeno-test though, since they don't appear to produce output that shows good vs bad performance, only an informal 'sanity' check.

Including switchtest depends on whether xeno-test should also do some elementary stability tests. This can be derived from performance tests as well, but Gilles' switchtest does it for the various switching constellations more systematically.

'Elementary stability' says make it the 1st test, before the longer-running latency tests. TBD..

Including irqbench is more tricky as "real" hardware and a second box are always involved here (so far it only works over null-modem; it needs to be extended to some GPIOs or the parallel port). Regarding the output of the various benchmarks I would like to cite myself here: https://mail.gna.org/public/xenomai-core/2006-06/msg00195.html [And as one of the major xeno-test contributors, you may feel included by the term "test team". ;)]

LOL at the 2nd-to-last paragraph. WRT data collection, any updates on LTT or relayfs? IIRC LTT was split to create relayfs and LTT++, but the latter is WIP. With them, data collection becomes comparatively limitless.

Also, Niklaus will be happy to hear I feel ownership (i.e. guilt) of a xeno-test bug where workloads get orphaned because the middle 'workload-manager' shell doesn't catch a terminating condition and clean up. I'm not thrilled about bashing my way through this job-control problem, but I'll knuckle down someday (soon?), reduce it to a context-free bash script/apparatus for us to kick the tires on, busybox, etc.. then fold it into xeno-test and submit.

And technically, don't regression tests have to yield a PASS/FAIL decision? ;-) Simple regular output is a good idea whenever the result is simple to express.
A fatally crashing switch test due to broken support on arch XYZ will make it hard to issue "FAIL"... :)

True, but that tells us something, doesn't it? Presume a regression test that prints this:
1..4
ok 1 - Creating test program
ok 2 - Test program runs, no error
not ok 3 - infinite loop # TODO halting problem unsolved
not ok 4 - infinite loop 2 # TODO halting problem unsolved
We can know:
- the prog expects to complete 4 tests, and does so (no segfault)
- it fails 2 of them - and which ones.
http://jc.ngo.org.uk/trac-bin/trac.cgi/wiki/LibTap has a nice code sample, which, at its core, is:
#include "tap.h"
plan_tests(4);
ok(0, "Creating test prog");
ok(some_function(), "Test program runs, no error\n");
...
I haven't looked closely, but it looks like purely headers / macros / static inlines. There's a full TAP model, but we can use just the basics. http://search.cpan.org/dist/Test-Harness/lib/Test/Harness/TAP.pod#Got_spare_tuits%3F One aspect we might reject is the rule about other print output starting with "# "; such other output is allowed by the test harness, which complains otherwise.

Speaking more broadly, there are 3 possible kinds of test progs:
- regression tests: PASS/FAIL - either by exit(rc), or by printf("%s\n", rc ? "not-ok" : "ok"); see perl's regression test suite (100k separate tests); these usually test details, and are not tutorial.

Have you checked what is already under sim/skins/*/testsuite? I must confess I don't know if it is easily compilable for non-simulated execution as well. The best thing would be a test framework that builds both for the simulator and for "real" usage on the target.

I never have tried the sim, beyond once or twice; punted on some dependency issues.. Obviously that's no longer sufficient :-}

- performance tests: progs that stress xenomai and print performance data; should help see performance problems and expose bugs. xeno-test aims to collect performance data, and performance data must be expressive (switchtest is perhaps insufficient here). See my note above.
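TAP is easy to emit from anything, even sh. A toy emitter (my own sketch, with libtap-ish semantics: a nonzero first arg means pass):

```shell
# Minimal TAP: a "1..N" plan line, then numbered ok / not ok lines.
tests=0
plan_tests() { echo "1..$1"; }
ok() {
    tests=$((tests + 1))
    if [ "$1" -ne 0 ]; then
        echo "ok $tests - $2"
    else
        echo "not ok $tests - $2"
    fi
}

plan_tests 2
ok 1 "first thing works"
ok 0 "second thing works"
```

A harness comparing the plan line against the ok-lines it actually saw can distinguish "failed some tests" from "crashed partway through", which is the crash-vs-FAIL point above.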
I think some approach with a generic data collection suite + various data generators would be really fantastic! It just takes some brain(s) to design it and some hands to hack it...

Taking the vision apart (for inspection), we have:
- xeno-test - shell based, semi-primitive; captures logs while running: machine-factors probes/reports and performance tests; workload management (semi-broken); semi-functional data delivery service (environmental challenges) - email delivery won't work for me, or for others w/o local mail set up.
- Niklaus' ruby-on-rails ideas (his xeno-test++ code to the list was tantalizing, but I'll admit I haven't looked since :-( - big issue - server side availability
- klive.org python based client &
[Xenomai-core] Re: Test, benchmark
Jan Kiszka wrote: Hi all, Jim raised these issues nicely to a generic level. I would like to pick it up and add some thoughts. Jim Cromie wrote: ... FWIW, I noted that xeno-test is not running these: - switchbench - switchtest - irqbench. I'm not sure they belong in xeno-test though, since they don't appear to produce output that shows good vs. bad performance, only an informal 'sanity' check. Including switchtest depends on whether xeno-test should also do some elementary stability tests. This can be derived from performance tests as well, but Gilles' switchtest does it for the various switching constellations more systematically. Including irqbench is more tricky, as "real" hardware and a second box are always involved here (so far it only works over null-modem; it needs to be extended to some GPIOs or a parallel port).

Just responding to a small part now.. this patch adds switchtest, switchbench (and drops switch) and irqbench. Each test-prog has a corresponding $XENOT_ with which you can inject new test arguments individually. Most of these can be undef'd, except for XENOT_IRQBENCH, which needs to be set in order for the test to run (since the test requires additional resources, as you noted above).

WRT switchtest, the -T option is useful, and makes its inclusion possible :-) xeno-test adds -T 120, which you can override as follows:

XENOT_SWITCHTEST='-T 300'
# other useful ones
XENOT_CYCLIC='-v'   # make it verbose

Fri Aug 18 07:06:20 MDT 2006
running: ./run -- -n -T 120 # switchtest
*
* Type ^C to stop this application.
*
[ 1574.162754] Xenomai: starting RTDM services.
cpu 0: 2079 context switches.
cpu 0: 4212 context switches.
cpu 0: 6336 context switches.
cpu 0: 8442 context switches.
...
cpu 0: 246981 context switches.
cpu 0: 249096 context switches.
cpu 0: 250263 context switches.
[ 1698.479703] Xenomai: stopping RTDM services.

WRT the data emitted, what can we learn from the numbers? They look to be increasing linearly, with some noise/perturbations.
We could do some statistics, but what's useful? Histogramming, or averaging the delta-context-switches?

Also, I see from the help-text that it does many kinds of context switches. Does it make sense to run each kind for a bunch of samples, so that we can see the count and variation for each kind of switch?

Other things (for other emails):

1 - one more stats/histogram suggests that it should be in a library or at least a separate object-file. Any thoughts / prefs / advice? Perhaps a 'do it the way I did in '

2 - I believe I've tracked down xeno-test's problem cleaning up workloads. mkload is missing an 'exec', so the collected pid is that of an intermediate shell, which is either killed, or goes away by itself, leaving the actual workload reparented to init. I've got the spawn-cleanup mechanics working in a separate script:
a - it respawns tasks that have finished; forex dd if=/dev/hda1 of=/dev/null will complete, since hda1 is a finite device (unlike /dev/zero)
b - it kills the tasks it started before it exits
c - but it needs more testing..

Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in	(revision 1453)
+++ scripts/xeno-test.in	(working copy)
@@ -1,7 +1,9 @@
 #! /bin/sh
-# Adapted to be run also under the BusyBox. If you want to test it under the BusyBox use
-# busybox sh xeno-test
-# A BusyBox >= 1.1.3 with a make defconfig should provide all needed applets.
+
+# Adapted to be run also under the BusyBox.
+# If you want to test it this way, do: sh xeno-test
+# BusyBox >= 1.1.3 with a make defconfig should provide all needed applets.
+
 myusage() { cat >&1 <

___ Xenomai-core mailing list Xenomai-core@gna.org https://mail.gna.org/listinfo/xenomai-core
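The missing-exec pid problem described in point 2 can be shown in isolation; a standalone sketch (the `start_load` helper and `LOAD_PID` variable are invented here, not xeno-test's actual mkload code):

```shell
#!/bin/sh
# Standalone sketch of the orphaned-workload pid problem.
# Without 'exec', $! names the intermediate subshell, which can exit
# on its own and leave the real workload reparented to init.  With
# 'exec', the subshell is replaced in-place by the workload, so $!
# names the workload itself and kill "$LOAD_PID" cleans it up reliably.
start_load() {
    ( exec sleep 60 ) &
    LOAD_PID=$!
}

start_load
ps -p "$LOAD_PID" -o comm=   # shows the workload ("sleep"), not a shell
kill "$LOAD_PID"
```

Substitute any finite workload (e.g. the dd mentioned above) for sleep; the pid-collection mechanics are the same.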
Re: [Xenomai-core] Re: Test, benchmark
Gilles Chanteperdrix wrote: Jim Cromie wrote:
> [ 1574.162754] Xenomai: starting RTDM services.
> cpu 0: 2079 context switches.
> cpu 0: 4212 context switches.
> cpu 0: 6336 context switches.
> cpu 0: 8442 context switches.
> ...
> cpu 0: 246981 context switches.
> cpu 0: 249096 context switches.
> cpu 0: 250263 context switches.
> [ 1698.479703] Xenomai: stopping RTDM services.
>
> wrt the data emitted, what can we learn from the numbers ?
> They look to be increasing linearly, with some noise/perturbations.
> We could do some statistics, but whats useful ?
> Histogramming, averaging the delta-context-switches ?
>
> Also, I see from the help-text that it does many kinds of context switches.
> Does it make sense to run each kind for a bunch of samples,
> so that we can see # and variation for each kind of switch ?

No, because one of the threads in the chain of context switches is sleeping, otherwise the program would completely block your box. So the figures are largely irrelevant; the only important thing about them is that they are increasing, which proves that the test is really switching contexts. The test fails if the value stops increasing, so no, the output is useless. Note that if you add the -q option, the program will be silent and only print the final count of context switches.

A question: I see that you always use the -n option; do you have problems running the test without this option? When launched with the -n option switchtest does not test FPU context switches.

I think I added it at some point when I wasn't getting output. It works without the -n too, which should be added via XENOT_SWITCHTEST, not stuffed in by default. You could just edit it out of the patch, if otherwise satisfied...
FWIW, I'm not averse to expanding to XENOTEST_OPTS_, if you think that's better (it's probably more self-explanatory when found in an .rc file). (It's obviously trivial to redo the patch; lemme know.)

BTW, running without -n, and with -T 120 (same as before), I get more total context switches:

[ 1075.064980] Xenomai: starting RTDM services.
Testing FPU check routines...
r0: 1 != 2
r1: 1 != 2
r2: 1 != 2
r3: 1 != 2
r4: 1 != 2
r5: 1 != 2
r6: 1 != 2
r7: 1 != 2
FPU check routines OK.
cpu 0: 5727 context switches.
cpu 0: 11477 context switches.
cpu 0: 17227 context switches.
cpu 0: 23000 context switches.
...
cpu 0: 673164 context switches.
cpu 0: 678914 context switches.
cpu 0: 684664 context switches.
cpu 0: 688620 context switches.
[ 708.976879] Xenomai: stopping RTDM services.

Offhand, it seems counterintuitive that -n gives a lower growth-rate of total switches, since it is equivalent to a shorter list of 'threadspec's.

Also, 2 possible output change requests:

a - print per-sample measures, not accumulating ones. This is more consistent with latency, which prints the latencies seen over the 1-sec sample period. It also feeds better into a histogram, without adding 'delta' logic to the histogrammer.

b - label output lines more in the spirit of latency:

RTT| 00:00:01 (in-kernel periodic task, 100 us period, priority 99)
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTD| 4.733| 13.145| 18.518| 0| 4.733| 18.518
RTD| 4.868| 13.169| 24.735| 0| 4.733| 24.735
RTD| 5.014| 13.104| 36.270| 0| 4.733| 36.270
RTD| 4.905| 13.111| 36.394| 0| 4.733| 36.394

I have some misgivings about asking for this:
- the current output needs massaging, esp. before feeding it to gnuplot. Someday I'll sit down and write a script to reformat the data the way gnuplot wants it; then we'll have some idea whether the current form is sub-optimal.
- Ideally, we'd be using relayfs to collect data. More library-fodder??

Lastly, we once had testsuite/README; I suppose it was dropped as being fatally out-of-date.
Presuming we want a fresh new one, could you add an empty file to svn, so that we can patch against it ? thanks jimc
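Until switchtest itself prints per-sample measures (request (a) above), the cumulative counts can be converted after the fact. A hypothetical post-processing sketch (`to_deltas` is an invented name, not part of the testsuite):

```shell
#!/bin/sh
# Hypothetical post-processing sketch: convert switchtest's cumulative
# per-cpu counts into per-sample deltas, ready for a histogrammer or
# gnuplot, without teaching either one about accumulation.
to_deltas() {
    awk '/context switches/ {
        cpu = $2                      # e.g. "0:"
        if (cpu in prev)
            print "cpu", cpu, $3 - prev[cpu], "switches/sample"
        prev[cpu] = $3
    }'
}

printf '%s\n' \
    'cpu 0: 2079 context switches.' \
    'cpu 0: 4212 context switches.' \
    'cpu 0: 6336 context switches.' | to_deltas
```

On the sample above this prints "cpu 0: 2133 switches/sample" and "cpu 0: 2124 switches/sample" - the per-sample rates whose noise the statistics would summarize.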
[Xenomai-core] Re: Test, benchmark
Jan Kiszka wrote: Jim Cromie wrote: ... just responding to a small part now.. this patch adds switchtest, switchbench (and drops switch) and irqbench. Each test-prog has a corresponding $XENOT_ with which you can inject new test arguments individually. Most of these can be undef'd, except for XENOT_IRQBENCH, which needs to be set in order for the test to run (since the test requires additional resources, as you noted above).

It doesn't necessarily require additional parameters (the default is the first serial port on PCs). Why not add a switch to xeno_test instead? Something like "-a " (e.g. "-a irqbench,whateverbench"). However, please don't forget to document this extension (man page?). Jan

Several reasons for my preference:
- xeno-test is already cluttered with options, many of which propagate down to tests
- as more tests are added, more option-clashing is inevitable; this makes pass-downs more complicated
- prog-specific options isolate us from option clashes / churn
- assuming -a , we need to handle multiples on the line (I haven't done that with shell getopts). I have to wonder if all shells have uniform getopt behavior - we want to work on bash, sh, ash, dash ...
- I get to duck the manpage update ;-)
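For what it's worth, POSIX getopts hands the whole comma-separated value to the option as a single OPTARG, so handling multiples is mostly an IFS exercise rather than a getopts limitation. A hedged sketch of the proposed switch (the helper names are invented, not xeno-test code):

```shell
#!/bin/sh
# Sketch of the proposed "-a bench1,bench2" switch.  POSIX getopts
# delivers the whole comma list as one OPTARG; splitting it is then
# just an IFS change, which behaves the same on bash, ash and dash.
extra_tests=""
parse_args() {
    OPTIND=1
    while getopts "a:" opt "$@"; do
        case $opt in
        a) extra_tests=$OPTARG ;;
        esac
    done
}

parse_args -a irqbench,switchbench

old_ifs=$IFS; IFS=,
for t in $extra_tests; do
    echo "would run: $t"
done
IFS=$old_ifs
```

This prints "would run: irqbench" and "would run: switchbench"; a real implementation would dispatch to the per-test run blocks instead of echoing.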
[Xenomai-core] xeno-load: line 182: 2794 Floating point exception$suflag $* $cmdargs
I get the above error when running xeno-test, or when running it individually, like so:

soekris:/usr/xenomai/bin# ( cd ../testsuite/switchbench; loudly ./run -- -p 10 -n -l 1000 $XENOT_SWITCHBENCH '# switchbench'; )
Sun Aug 20 18:51:53 MDT 2006
running: ./run -- -p 10 -n -l 1000 # switchbench
*
* Type ^C to stop this application.
*
== Sampling period: 10 us
== Do not interrupt this program
/usr/xenomai/bin/xeno-load: line 182: 2868 Floating point exception $suflag $* $cmdargs

but I don't get it when running it manually, w/o the ./run wrapper:

soekris:/usr/xenomai/testsuite/switchbench# switchbench
== Sampling period: 100 us
== Do not interrupt this program
RTH| lat min| lat avg| lat max|lost
RTD| 25.568| 28.740| 45.584| 0
soekris:/usr/xenomai/testsuite/switchbench#

BTW, the line-number given above is a red herring:

177	while test -n "$target_info" ; do
178	    action=`echo $target_info|cut -d';' -f1`
179	    target_info=`echo $target_info|cut -s -d';' -f2-`
180	    set -- $action
181
182	    case "$1" in
183
184	    push)

I stuffed in an strace, and got this:

+ test ./switchbench = ./switchbench
+ waitflag=1
+ set -- ./switchbench
+ test 0 = 1
+ test 1 = 1
+ echo what ./switchbench -p 10 -n -l 1000
what ./switchbench -p 10 -n -l 1000
+ strace ./switchbench -p 10 -n -l 1000
execve("./switchbench", ["./switchbench", "-p", "10", "-n", "-l", "1000"], [/* 14 vars */]) = 0
uname({sys="Linux", node="soekris", ...}) = 0
brk(0) = 0x804d000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f5d000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f5c000
open("/usr/xenomai/lib/tls/i586/cmov/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/tls/i586/cmov", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/tls/i586/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/tls/i586", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/tls/cmov/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/tls/cmov", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/tls/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/tls", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/i586/cmov/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/i586/cmov", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/i586/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/i586", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/cmov/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib/cmov", 0xbf8e55d4) = -1 ENOENT (No such file or directory)
open("/usr/xenomai/lib/libpthread.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/usr/xenomai/lib", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=16163, ...}) = 0
mmap2(NULL, 16163, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f58000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/tls/libpthread.so.0", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\260G\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=84977, ...}) = 0
mmap2(NULL, 70104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7f46000
mmap2(0xb7f54000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd) = 0xb7f54000
mmap2(0xb7f56000, 4568, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f56000
close(3) = 0
open("/usr/xenomai/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/tls/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\240O\1"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1245680, ...}) = 0
mmap2(NULL, 1251484, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7e14000
mmap2(0xb7f3c000, 28672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x128) = 0xb7f3c000
mmap2(0xb7f43000, 10396, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f43000
close(3) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE,
[Xenomai-core] Re: xeno-load: line 182: 2794 Floating point exception$suflag $* $cmdargs
Jim Cromie wrote: * == Sampling period: 10 us == Do not interrupt this program /usr/xenomai/bin/xeno-load: line 182: 2868 Floating point exception $suflag $* $cmdargs

soekris:/usr/xenomai/bin# gdb ../testsuite/switchbench/switchbench
GNU gdb 6.4.90-debian
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i486-linux-gnu"...(no debugging symbols found)
Using host libthread_db library "/lib/tls/libthread_db.so.1".
(gdb) run -p 10 -n -l 1000
Starting program: /usr/xenomai/testsuite/switchbench/switchbench -p 10 -n -l 1000
(no debugging symbols found)
(no debugging symbols found)
[Thread debugging using libthread_db enabled]
[New Thread -1210206528 (LWP 2070)]
(no debugging symbols found)
(no debugging symbols found)
== Sampling period: 10 us
== Do not interrupt this program
[New Thread -1210209360 (LWP 2073)]

Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread -1210209360 (LWP 2073)]
0x0804acc7 in __divdi3 ()
(gdb)

I tried stuffing -g into all *_CFLAGS in src/testsuite/Makefile, but I couldn't get it to show up in the compilation. Any hints?
Re: [Xenomai-core] Re: xeno-load: line 182: 2794 Floating point exception$suflag $* $cmdargs
got a backtrace

Philippe Gerum wrote: On Mon, 2006-08-21 at 00:42 -0600, Jim Cromie wrote: Jim Cromie wrote: * == Sampling period: 10 us == Do not interrupt this program /usr/xenomai/bin/xeno-load: line 182: 2868 Floating point exception $suflag $* $cmdargs soekris:/usr/xenomai/bin# gdb ../testsuite/switchbench/switchbench I tried stuffing -g into all *_CFLAGS in src/testsuite/Makefile, but I couldn't get it to show up in the compilation.

Try passing --enable-debug to the "configure" script.

#0 0x0804ae57 in __divdi3 ()
#1 0x08049ac2 in worker (cookie=0x0) at switchbench.c:133
#2 0x0804a18f in rt_task_trampoline (cookie=0x0) at task.c:89
#3 0xb7f8107d in start_thread () from /lib/tls/libpthread.so.0
#4 0xb7f158fe in clone () from /lib/tls/libc.so.6
(gdb)

I'll look deeper later, if it's not obvious to the experts.
Re: [Xenomai-core] Re: xeno-load: line 182: 2794 Floating point exception$suflag $* $cmdargs
Gilles Chanteperdrix wrote: You likely have a division by 0 because nsamples is 0. nsamples comes from the numeric argument of the -n option, and I think you do not pass a numeric argument to -n, so atoi returns 0.

Yup, that's it. The prog was getting: -n -l 1000
-l is an illegal option, but -n needs an arg, so -l became it, and it converts to 0. So no error was caused by -l either!

The attached patch does:
xeno-test: drop most options passed to switchbench, add -h
switchbench:
- sanity check on nsamples
- 1st column labels on Histogram
- compute statistics (sort of - I couldn't get 'sqrt' to link..)

Also, I recall that at one time, one of the testsuite progs was intended to be run either in xenomai, or in a plain kernel. Is this still the case? Or has it been superseded, forex by latency's -t [0-3] options? If it is, should xeno-test run them that way as well? FWIW, I always found that distinction too mysterious to not have an explicit option, along with errors explaining 'insufficient privilege to run in RT-mode' as necessary.

Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in	(revision 1487)
+++ scripts/xeno-test.in	(working copy)
@@ -200,7 +200,7 @@
 	loudly ./run -- -T 120 $XENOT_SWITCHTEST '# switchtest'
 )
 ( cd `dirname $0`/../testsuite/switchbench
-	loudly ./run -- -p 10 -n -l 1000 $XENOT_SWITCHBENCH '# switchbench'
+	loudly ./run -- -h $XENOT_SWITCHBENCH '# switchbench'
 )
 ( cd `dirname $0`/../testsuite/cyclic
 	loudly ./run -- -p 10 -n -l 1000 $XENOT_CYCLIC '# cyclictest'

Index: src/testsuite/switchbench/switchbench.c
===
--- src/testsuite/switchbench/switchbench.c	(revision 1487)
+++ src/testsuite/switchbench/switchbench.c	(working copy)
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <math.h>
 #include
 #include
 #include
@@ -43,17 +44,42 @@
 	histogram[inabs < HISTOGRAM_CELLS ?
 		  inabs : HISTOGRAM_CELLS - 1]++;
 }

-void dump_histogram(void)
+void dump_stats(double sum, int total_hits)
 {
-	int n;
+	int n;
+	double avg, variance = 0;

-	for (n = 0; n < HISTOGRAM_CELLS; n++) {
-		long hits = histogram[n];
-		if (hits)
-			fprintf(stderr, "%d - %d us: %ld\n", n, n + 1, hits);
-	}
+	avg = sum / total_hits;
+	for (n = 0; n < HISTOGRAM_CELLS; n++) {
+		long hits = histogram[n];
+		if (hits)
+			variance += hits * (n - avg) * (n - avg);
+	}
+
+	/* compute std-deviation (unbiased form) */
+	variance /= total_hits - 1;
+	// variance = sqrt(variance);
+
+	printf("HSS| %9d| %10.3f| %10.3f\n", total_hits, avg, variance);
 }

+void dump_histogram(void)
+{
+	int n, total_hits = 0;
+	double sum = 0;
+
+	fprintf(stderr, "---|---range-|---samples\n");
+	for (n = 0; n < HISTOGRAM_CELLS; n++) {
+		long hits = histogram[n];
+		if (hits) {
+			total_hits += hits;
+			sum += n * hits;
+			fprintf(stderr, "HSD| %d - %d | %10ld\n",
+				n, n + 1, hits);
+		}
+	}
+	dump_stats(sum, total_hits);
+}
+
 void event(void *cookie)
 {
 	int err;
@@ -180,8 +206,14 @@
 	}

 	if (sampling_period == 0)
-		sampling_period = 10;	/* ns */
+		sampling_period = 10;	/* us */

+	if (nsamples <= 0) {
+		fprintf(stderr, "disregarding -n <%lld>, using -n <100>\n",
+			nsamples);
+		nsamples = 100;
+	}
+
 	signal(SIGINT, SIG_IGN);
 	signal(SIGTERM, SIG_IGN);
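Gilles' diagnosis can be reproduced at the option-parsing level in any POSIX shell; a hypothetical sketch (the `parse` helper is invented, but its behavior mirrors what getopt(3)/atoi did inside switchbench):

```shell
#!/bin/sh
# Reproducing the misparse: an option that takes an argument ("-n")
# happily swallows the next word even when that word looks like
# another option.  So "-n -l 1000" yields the string "-l" as -n's
# value, which an atoi()-style conversion then turns into 0.
parse() {
    OPTIND=1
    nsamples=unset
    while getopts "p:n:l:" opt "$@"; do
        case $opt in
        n) nsamples=$OPTARG ;;
        esac
    done
}

parse -p 10 -n -l 1000
echo "nsamples='$nsamples'"
```

This prints nsamples='-l', exactly the non-numeric value that became 0 and triggered the SIGFPE, which is why the sanity check on nsamples above is worth having.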
[Xenomai-core] ndiswrapper fails to 'ifup' on ipipe-1.3.10
any off-hand guesses as to what might have caused this ? ndiswrapper-1.x has worked fine with 1.3.9 and previous.
Re: [Xenomai-core] ndiswrapper fails to 'ifup' on ipipe-1.3.10
Philippe Gerum wrote: On Wed, 2006-08-23 at 15:35 -0600, Jim Cromie wrote: any off-hand guesses as to what might have caused this? ndiswrapper-1.x has worked fine with 1.3.9 and previous.

Anything in dmesg?

Oops. I tried and failed with 1.24rc1, then backed up to the working 1.23, which failed. This bonehead probably forgot to rmmod before re-ifup'ing it. :-} 1.23 works fine here, on ipipe-1.3.10. I'll do a debug build of ndiswrapper, and see what it shows me. Sorry for the noise.
[Xenomai-core] possible future conflict w LOCAL_APIC
hi guys, I encountered this error building 18-mm2 with a .config I've been using with xenomai since I started:

arch/i386/kernel/built-in.o(.text+0x34f1): In function `do_nmi':
arch/i386/kernel/traps.c:752: undefined reference to `panic_on_unrecovered_nmi'
arch/i386/kernel/built-in.o(.text+0x3564):arch/i386/kernel/traps.c:712: undefined reference to `panic_on_unrecovered_nmi'

$ grep nmi arch/i386/kernel/Makefile
obj-$(CONFIG_X86_LOCAL_APIC)	+= apic.o nmi.o

which I don't have enabled.

Will fix. BTW I was planning to make LOCAL_APIC unconditional on i386 too, like on x86-64. There is basically no reason ever to disable it, and the workaround for the buggy-BIOS case can be done at runtime. Overall the #ifdef / compile breakage ratio vs the code saved by disabling APIC is definitely unbalanced. -Andi

This looks like it may become a problem:

Q: The kernel message log says: "Xenomai: Local APIC absent or disabled! Disable APIC support or pass "lapic" as bootparam."
A: Xenomai sends this message if the kernel configuration Xenomai was compiled against enables the local APIC support (CONFIG_X86_LOCAL_APIC), but the processor status gathered at boot time by the kernel says that no local APIC support is available. There are two options for fixing this issue: o either your CPU really has _no_ local APIC hw, then you need to rebuild a kernel with LAPIC support disabled, before rebuilding Xenomai against the latter;

Is this something fundamental or merely inconvenient?
Re: [Xenomai-core] possible future conflict w LOCAL_APIC
Philippe Gerum wrote: On Thu, 2006-09-28 at 17:19 -0600, Jim Cromie wrote: [...] BTW I was planning to make LOCAL_APIC unconditional on i386 too like on x86-64. There is basically no reason ever to disable it, and the bug work around for buggy BIOS one can be done at runtime. Overall the #ifdef / compile breakage ratio vs saved code on disabled APIC code is definitely unbalanced. -Andi This looks like it may become a problem: Q: The kernel message log says: "Xenomai: Local APIC absent or disabled! Disable APIC support or pass "lapic" as bootparam." A: Xenomai sends this message if the kernel configuration Xenomai was compiled against enables the local APIC support (CONFIG_X86_LOCAL_APIC), but the processor status gathered at boot time by the kernel says that no local APIC support is available. There are two options for fixing this issue: o either your CPU really has _no_ local APIC hw, then you need to rebuild a kernel with LAPIC support disabled, before rebuilding Xenomai against the latter; Is this something fundamental or merely inconvenient ? Inconvenient because this would require some surgery, and a bit less efficient, since we would have to select the proper timing mode handlers dynamically through function pointers, right in the hot path (e.g. PIT programming in oneshot mode), not to speak of leaving dead code at runtime. Fortunately, this would be made simpler by the periodic-over-aperiodic mode emulation which is planned, since the periodic hw management code would go away as a result of such change. This said, the impact of forcing CONFIG_X86_LOCAL_APIC on would be limited to a handful of files, Xenomai-wise. Adeos-wise, this would have the same impact than for the rest of the x86 kernel code: less ifdefs, more dead code at runtime. update: Ingo Molnar has voiced strong preference that the config option remain, citing 60kb growth in its removal. 
So it's probably safe for now ;-)
Re: [Xenomai-core] Recommendation for ARM/blackfin based buildbot test machine
Niklaus Giger wrote: Hi. Can anybody recommend a cheap ARM system (anything below 200 US$ will probably fit into our family budget) where I could test the ARM port of Xenomai? E.g. could I buy a LinkSys NSLU2 (around 100 Euros)? It seems that at least Debian is quite popular on it. Best regards

http://gumstix.com/platforms.html has a range of *tiny* (the size of a stick of gum) ARM computers, shipped with linux installed, for $99 and up. I've been meaning to get one, but haven't had the time to hack at it after the purchase.
Re: [Xenomai-core] Recommendation for ARM/blackfin based buildbot test machine
Jim Cromie wrote: Niklaus Giger wrote: Hi Can anybody recommend a cheap (Anything below 200 US$ will probably fit into our family budget) ARM system, where I could test the ARM port of Xenomai? E.g. could I buy a LinkSys NSLU2 (around 100 Euros)? It seems that at least Debian is quite popular on it. Best regards http://gumstix.com/platforms.html has a range of *tiny* (the size of a stick of gum) ARM computers, shipped with linux installed, for $99 and up. Ive been meaning to get one, but havent had the time to hack at it after the purchase.

IIUC, Intel's X-Scale processors (pxa255 in particular) are a derivation of ARM.
Re: [Xenomai-core] Support for 2.6.22/x86
Philippe Gerum wrote:
> Our development trunk now contains the necessary support for running
> Xenomai over 2.6.22/x86. This work boils down to enabling Xenomai to use
> the generic clock event device abstraction that comes with newest
> kernels. Other archs / kernel versions still work the older way, until
> all archs eventually catch up with clockevents upstream.
>
> This support won't be backported to 2.3.x, because it has some
> significant impact on the nucleus. Tested as thoroughly as possible here
> on low-end and mid-range x86 boxen, including SMP.
>
> Please give this hell.
>
> http://download.gna.org/adeos/patches/v2.6/i386/adeos-ipipe-2.6.22-rc6-i386-1.9-00.patch

I've been running 22-rc7 on my Sony VAIO with this patch applied since July 3. The only thing wrong I've seen is that perl (both bleed and maint) is failing its time-related regression tests, i.e.:

rsync -avz rsync://public.activestate.com/perl-current/ .
rsync -avz rsync://public.activestate.com/perl-5.8.x .

Failed 3 tests out of 1429, 99.79% okay.
../ext/Time/HiRes/t/HiRes.t
../lib/Benchmark.t
op/time.t

The lib/Benchmark.t test hangs, and must be killed manually. These tests pass on Fedora - Linux harpo.jimc.earth 2.6.20-1.2962.fc6 #1 SMP Tue Jun 19 19:27:14 EDT 2007 i686 i686 i386 GNU/Linux

thanks
Re: [Xenomai-core] Support for 2.6.22/x86
Philippe Gerum wrote:
> Hi Jim,
>
> On Sun, 2007-07-08 at 01:00 -0600, Jim Cromie wrote:
>> Philippe Gerum wrote:
>>> Our development trunk now contains the necessary support for running
>>> Xenomai over 2.6.22/x86. This work boils down to enabling Xenomai to use
>>> the generic clock event device abstraction that comes with newest
>>> kernels. Other archs / kernel versions still work the older way, until
>>> all archs eventually catch up with clockevents upstream.
>>>
>>> This support won't be backported to 2.3.x, because it has some
>>> significant impact on the nucleus. Tested as thoroughly as possible here
>>> on low-end and mid-range x86 boxen, including SMP.
>>>
>>> Please give this hell.
>>>
>>> http://download.gna.org/adeos/patches/v2.6/i386/adeos-ipipe-2.6.22-rc6-i386-1.9-00.patch
>>>
>> Ive been running 22-rc7 on my Sony VAIO with this patch applied since
>> July 3,
>>
>> the only thing wrong Ive seen is that perl ( both bleed and maint )
>> is failing its time related regression tests.
>> ie
>> rsync -avz rsync://public.activestate.com/perl-current/ .
>> rsync -avz rsync://public.activestate.com/perl-5.8.x .
>>
>> Failed 3 tests out of 1429, 99.79% okay.
>> ../ext/Time/HiRes/t/HiRes.t
>> ../lib/Benchmark.t
>> op/time.t
>>
>> the lib/Benchmark.t test is hanging, and must be killed manually
>
> Thanks for the feedback.
>
> Does this happen with the I-pipe switched off too?
> Also, is Xenomai patched in your kernel with any of the skins statically
> enabled, or just the I-pipe?

I'm not sure what 'switched off' means, so here are the 'relevant' parts of the .config. Now that I look at it, I've obviously not attended to the advice in README.INSTALL. This config worked nicely with ntp, FWIW. Given that this is a laptop, I'd like to keep ACPI and/or CPU-freq if possible, but I'll try some combos to see if any work for both the battery and for the perl tests. I'll try a conservative/recommended config too.
Any suggestions or test-requests are welcome.

vendor_id  : GenuineIntel
cpu family : 6
model      : 13
model name : Intel(R) Pentium(R) M processor 1.70GHz
stepping   : 6
cpu MHz    : 1700.000

#
# Real-time sub-system
#
# WARNING! You enabled APM, CPU Frequency scaling or ACPI 'processor'
# option. These options are known to cause troubles with Xenomai.
#

[EMAIL PROTECTED] linux-2.6.22-rc7-ipipe-190-sony]$ grep XENO .config
CONFIG_XENOMAI=y
CONFIG_XENO_OPT_NUCLEUS=y
CONFIG_XENO_OPT_PERVASIVE=y
# CONFIG_XENO_OPT_ISHIELD is not set
CONFIG_XENO_OPT_PRIOCPL=y
CONFIG_XENO_OPT_PIPELINE_HEAD=y
CONFIG_XENO_OPT_PIPE=y
CONFIG_XENO_OPT_PIPE_NRDEV=32
CONFIG_XENO_OPT_REGISTRY=y
CONFIG_XENO_OPT_REGISTRY_NRSLOTS=512
CONFIG_XENO_OPT_SYS_HEAPSZ=128
CONFIG_XENO_OPT_STATS=y
# CONFIG_XENO_OPT_DEBUG is not set
# CONFIG_XENO_OPT_TIMING_PERIODIC is not set
CONFIG_XENO_OPT_TIMING_SCHEDLAT=0
# CONFIG_XENO_OPT_SCALABLE_SCHED is not set
CONFIG_XENO_OPT_TIMER_LIST=y
# CONFIG_XENO_OPT_TIMER_HEAP is not set
# CONFIG_XENO_OPT_TIMER_WHEEL is not set
# CONFIG_XENO_OPT_SHIRQ_LEVEL is not set
# CONFIG_XENO_OPT_SHIRQ_EDGE is not set
CONFIG_XENO_HW_FPU=y
# CONFIG_XENO_HW_NMI_DEBUG_LATENCY is not set
# CONFIG_XENO_HW_SMI_DETECT_DISABLE is not set
CONFIG_XENO_HW_SMI_DETECT=y
# CONFIG_XENO_HW_SMI_WORKAROUND is not set
CONFIG_XENO_SKIN_NATIVE=y
CONFIG_XENO_OPT_NATIVE_PERIOD=0
CONFIG_XENO_OPT_NATIVE_PIPE=y
CONFIG_XENO_OPT_NATIVE_PIPE_BUFSZ=1024
CONFIG_XENO_OPT_NATIVE_REGISTRY=y
CONFIG_XENO_OPT_NATIVE_SEM=y
CONFIG_XENO_OPT_NATIVE_EVENT=y
CONFIG_XENO_OPT_NATIVE_MUTEX=y
CONFIG_XENO_OPT_NATIVE_COND=y
CONFIG_XENO_OPT_NATIVE_QUEUE=y
CONFIG_XENO_OPT_NATIVE_HEAP=y
CONFIG_XENO_OPT_NATIVE_ALARM=y
CONFIG_XENO_OPT_NATIVE_MPS=y
# CONFIG_XENO_OPT_NATIVE_INTR is not set
CONFIG_XENO_SKIN_POSIX=y
CONFIG_XENO_OPT_POSIX_PERIOD=0
# CONFIG_XENO_OPT_POSIX_SHM is not set
# CONFIG_XENO_OPT_POSIX_INTR is not set
CONFIG_XENO_OPT_DEBUG_POSIX=y
# CONFIG_XENO_SKIN_PSOS is not set
# CONFIG_XENO_SKIN_UITRON is not set
# CONFIG_XENO_SKIN_VRTX is not set
# CONFIG_XENO_SKIN_VXWORKS is not set
# CONFIG_XENO_SKIN_RTAI is not set
CONFIG_XENO_SKIN_RTDM=y
CONFIG_XENO_OPT_RTDM_PERIOD=0
CONFIG_XENO_OPT_RTDM_FILDES=128
# CONFIG_XENO_DRIVERS_16550A is not set
# CONFIG_XENO_DRIVERS_TIMERBENCH is not set
# CONFIG_XENO_DRIVERS_IRQBENCH is not set
# CONFIG_XENO_DRIVERS_SWITCHTEST is not set
# CONFIG_XENO_DRIVERS_CAN is not set
[EMAIL PROTECTED] linux-2.6.22-rc7-ipipe-190-sony]$ grep APM .config
# WARNING! You enabled APM, CPU Frequency scaling or ACPI 'processor'
# Power management options (ACPI, APM)
# CONFIG_APM is not set
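The build-system warning quoted above can also be checked for ahead of time. A hypothetical helper, not part of Xenomai's scripts (the option list is an assumption based on the warning text; CONFIG_ACPI_PROCESSOR is my reading of "ACPI 'processor' option"):

```shell
#!/bin/sh
# Hypothetical helper (not part of Xenomai): scan a kernel .config for
# the options the Xenomai build warns about as troublesome for timing.
check_config() {
    cfg=$1
    rc=0
    for opt in CONFIG_APM CONFIG_CPU_FREQ CONFIG_ACPI_PROCESSOR; do
        if grep -q "^$opt=[ym]" "$cfg"; then
            echo "warning: $opt is enabled; known to cause trouble with Xenomai"
            rc=1
        fi
    done
    return $rc
}

# Example: a config with APM enabled trips the warning.
printf 'CONFIG_APM=y\n# CONFIG_CPU_FREQ is not set\n' > /tmp/test.config
check_config /tmp/test.config
rm -f /tmp/test.config
```

Running it against the laptop .config above before building would surface the APM/CPU-freq/ACPI conflict without waiting for the kernel-build warning.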