[Xenomai-core] Re: xeno-test updates [patch]

2006-04-22 Thread Jim Cromie

Jim Cromie wrote:

I've started adding to xeno-test, as outlined previously (plus some)


, except where noted


- now run cyclictest and switch, in addition to latency -t 0,1,2


cyclictest now has decent options passed in.
I haven't given any thought to exposing those options thru xeno-test's command
line.


Instead, I'm thinking of adding statistics, a la latency.
For that, I'm also pondering a new -g 100 option to group the tests for
stats calcs,

i.e. given: -g 100 -l 1000 -v
it would compute statistics on 10 sets of 100 cycles, and report 10 lines.
Again, this is notional; comments/feedback needed.
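For discussion's sake, the grouping idea could be sketched with a bit of awk. This is purely notional: -g is not a real xeno-test option yet, and the input format (one latency sample per line) is an assumption; `seq` just stands in for real samples.

```shell
# Notional sketch of the proposed -g grouping (NOT a real xeno-test
# option): read one sample per line, print one line of
# group-number / min / avg / max per group of g samples.
seq 1 1000 | awk -v g=100 '
{
    if (NR % g == 1) { min = $1; max = $1; sum = 0; n = 0 }
    if ($1 < min) min = $1
    if ($1 > max) max = $1
    sum += $1; n++
    if (NR % g == 0)
        printf "%d %d %.1f %d\n", NR / g, min, sum / n, max
}'
```

With -l 1000 and -g 100 semantics this emits 10 lines, one per group of 100 cycles, which is the reporting shape described above.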


-  changed the prewired -m email-addy to [EMAIL PROTECTED]

email options now work w/o actually writing a file.
Also changed the default location of file writes to /tmp;
they no longer get written to $PWD by default.

added a -U, completely untested, but mostly lifted from LiveCD.
This looks necessary, since
   my hobby-box doesn't have a working mail setup
   my laptop (and presumably yours) doesn't have an FQDN,
   which pretty well precludes sending mail to anywhere useful.
 (I'd bet we could spam the unwashed winbloze masses, but where's
the sport in that? ;-)






- now grep more config-items out of config (for non-verbose mode)
  latency-killers, PREEMPT, others ?


added items per RPM's email.

I'm considering stripping the warning issued when CPU_FREQ is config'd:
warning: CONFIG_CPU_FREQ=y may be problematic.
I have it in 'cuz nothing actually changes (or can change) it, so it's
harmless (I think).

It's easier than making the list complete,
and the .config dump covers the reporting.



Don't apply yet, not tested recently.



It's reasonably tested; we can shake out some more with some distributed
testing.

(hint - try it!)

Here are some tests I ran; files got written:

./xeno-test -T 30 -l30 -m
./xeno-test -T 30 -l30
./xeno-test -T 30 -l30 -L
./xeno-test -T 30 -l30 -N foo
./xeno-test -T 30 -l30 -LN bar
./xeno-test -T 30 -l30 -LN buzz -m
./xeno-test -T 5 -l30 -L -m
./xeno-test -T 5 -l30 -N /tmp/box- -m
./xeno-test -T 5 -l30 -N ~/trucklab/ -w2 -W 'dd if=/dev/hda1 of=/dev/null'



Qs

- should I run boxstatus just after the latency tests, or before and after
(as currently)?


- are /proc/xenomai/* contents dynamic (i.e. belong in boxstatus)?

- any bits of  boxinfo and boxstatus that should be shuffled around ?

- check NPTL availability (kinda overkill, since its absence when 
needed is already detected)



- anything else come to mind ?



these are still open, but not critical.


I hope that's everything for now;
it needs a good shakedown, and I need a beer.
Index: scripts/prepare-kernel.sh
===
--- scripts/prepare-kernel.sh   (revision 974)
+++ scripts/prepare-kernel.sh   (working copy)
@@ -48,6 +48,7 @@
 patch_append() {
 file="$1"
 if test "x$output_patch" = "x"; then
+   chmod +w "$linux_tree/$file"
 cat >> "$linux_tree/$file"
 else
 if test `check_filter $file` = "ok"; then
Index: scripts/xeno-test.in
===
--- scripts/xeno-test.in   (revision 974)
+++ scripts/xeno-test.in   (working copy)
@@ -12,15 +12,18 @@
   -W 

[Xenomai-core] Re: [REQUEST] eliminate the rthal_critical_enter/exit() from rthal_irq_request()

2006-04-22 Thread Dmitry Adamushko
Hi,

I haven't had access to my laptop during this week so the patches
follow only now.

Yet another issue remains that may lead to a deadlock under some circumstances:

- rt_intr_delete() calls xnintr_detach() while holding the "nklock".

- xnintr_detach() (with CONFIG_SMP) spins in xnintr_shirq_spin()
when a corresponding ISR is currently running.

- now this ISR calls any function that uses "nklock" (everything makes
use of it) ... Bum!
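Spelled out as a timeline (a sketch only; the CPU assignment and the spin details are illustrative, not actual code):

```
CPU 0: rt_intr_delete()              CPU 1: shared ISR running
  grabs nklock
  xnintr_detach()
    xnintr_shirq_spin()                ISR body calls a service
      spins until the ISR              that tries to grab nklock,
      finishes                         spins until CPU 0 drops it

  => neither CPU can make progress: deadlock
```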

I first thought about moving xnintr_detach() out of the locked section
in rt_intr_delete() but it somewhat breaks interface consistency ---
e.g. xeno_mark_delete() is called before the object is completely
destroyed.

Another approach would be to introduce a service like
xnintr_synchronize() and require the upper interfaces (e.g. skins) to
make use of it in their _delete() methods.

The problem here is that the xnintr_shirq interface is not complete
and safe without xnintr_shirq_spin() in the detach step, and it becomes
somewhat blurred with an enforcement like "on SMP systems one should
always call xnintr_synchronize() right after calling xnintr_detach()".
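One possible shape for that enforcement, purely as a sketch of the proposal (xnintr_synchronize() does not exist yet, and the lock/unlock helpers here are illustrative pseudocode):

```
/* sketch: a skin's _delete() under the proposed rule */
lock(nklock)
    xeno_mark_delete(intr)
    xnintr_detach(&intr->intr_base)   /* would no longer spin internally */
unlock(nklock)
xnintr_synchronize(&intr->intr_base)  /* wait for in-flight ISRs,
                                         now outside nklock */
```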

So I'm still thinking about how best to fix it...


--
Best regards,
Dmitry Adamushko


hal.c-irqrequest-noglock.patch
Description: Binary data


core.c-virtualizeirq.patch
Description: Binary data


intr.c-noglock.patch
Description: Binary data
___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core