[Xenomai-core] UVM build error in todays head of SVN

2006-05-01 Thread Wolfgang Grandegger

Hello,

I just checked out Xenomai and hit the UVM build error below (while
building for PPC 405 with the I-pipe tracer). xnlock_get/xnlock_put do
not seem to be dummies without SMP.


Wolfgang.


if ppc-linux-gcc -DHAVE_CONFIG_H -I. -I. -I../../../../src/include -O2
-D_GNU_SOURCE -D_REENTRANT -D__XENO__ -Wall -pipe -D__IN_XENO__
-Wstrict-prototypes -I../../../../include -D__XENO_UVM__ -MT pod.o
-MD -MP -MF .deps/pod.Tpo -c -o pod.o pod.c; \
then mv -f .deps/pod.Tpo .deps/pod.Po; else rm -f .deps/pod.Tpo;
exit 1; fi

pod.c: In function 'xnpod_announce_tick':
pod.c:3304: warning: implicit declaration of function 'xnlock_get'
pod.c:3304: error: 'nklock' undeclared (first use in this function)
pod.c:3304: error: (Each undeclared identifier is reported only once
pod.c:3304: error: for each function it appears in.)
pod.c:3365: warning: implicit declaration of function 'xnlock_put'
make[4]: *** [pod.o] Error 1
make[4]: Leaving directory 
`/.automount/castor-vlab/root/home/wolf/rt/xenomai/src/skins/uvm/nucleus'

make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory 
`/.automount/castor-vlab/root/home/wolf/rt/xenomai/src/skins/uvm'

make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
`/.automount/castor-vlab/root/home/wolf/rt/xenomai/src/skins'

make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
`/.automount/castor-vlab/root/home/wolf/rt/xenomai/src'

make: *** [all-recursive] Error 1






Re: [Xenomai-core] xeno-test manpage patch

2006-05-01 Thread Jim Cromie

Romain Lenglet wrote:

doc fixes, skipped


BUGS

-N   name is timestamped, giving uniqueness.  This is a
caveat, not a bug.



No, it is a feature. I added it into the documentation of -L
and -N.
By the way, I suggest that you use a timestamp in the RFC 3339
format (a subset of ISO 8601):

date "+%Y-%m-%d %H:%M:%S%:z"

or, assuming date is GNU date:

date --rfc-3339=seconds

  

I wanted a compact rep.
Is there a std format that doesn't have a space?
It's nicer for us xterm double-click cut-pasters.


-p   oops.   I'm open to suggestions on whether this is worth
fixing.

(workload mgmt)

   workload tasks aren't always (ever?) restarted once they
finish, so a real /dev/hda1 workload may end before your test
does, causing non-uniform, unexpected load variations.


You used the default load in your explanation, which never ends
(/dev/zero supplies infinite zeros).

One of these days I'll try to actually fix it.


Workloads aren't always killed if the test is interrupted.



Added into the bugs section.
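
For what it's worth, a minimal sketch of how a script like xeno-test
could handle both points -- restarting a finite workload and reaping it
on interrupt. The device path, block size, and names here are
illustrative, not the script's actual code:

    # restart a finite workload until the test ends
    run_workload() {
        while :; do
            dd if=/dev/hda1 of=/dev/null bs=64k 2>/dev/null
        done
    }
    run_workload &
    workpid=$!
    # stop the restart loop when the script exits or is interrupted;
    # a dd still in flight may need its process group killed as well
    trap 'kill $workpid 2>/dev/null' EXIT INT TERM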



Now, it is my turn to make some remarks on xeno-test... ;)
  
2-

I don't know why there is a sort of option -n in the case:

    n)
        # accept note (from the outer process)
        notes=$OPTARG ;;

Those three lines should be removed: this is dead code.
  



Ack, vestigial cruft.
It was meant to pass info into the script, as a way to show in the
first lines how the test was run.
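
(For context, the dead branch sits in the script's option-parsing loop,
roughly as in this sketch -- the sibling options shown are illustrative:)

    while getopts 'L:N:n:' opt; do
        case $opt in
            L) logfile=$OPTARG ;;   # illustrative sibling option
            N) name=$OPTARG ;;      # illustrative sibling option
            n)
                # accept note (from the outer process) -- dead code
                notes=$OPTARG ;;
        esac
    done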




Re: [Xenomai-core] I-pipe latency tracer for ppc

2006-05-01 Thread Wolfgang Grandegger

Philippe Gerum wrote:


As promised, the I-pipe tracer has been ported to ppc. People working on 
this architecture are invited to give it a try, it's a great tool to 
find out where the cycles are actually going.


Just apply the tracer patch on top of the Adeos patch bearing the same 
revision number, and select the feature from the kernel configuration. 
If you happen to have some issue while booting the instrumented kernel 
on embedded setups (e.g. freeze after kernel decompression), try 
reducing the size of the trace log (CONFIG_IPIPE_TRACE_SHIFT).


http://download.gna.org/adeos/patches/v2.6/ppc/tracer/
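
In practice the steps look roughly like this (the patch file names are
illustrative; pick the ones matching your kernel version and revision
from the URL above):

    cd linux-2.6.14
    patch -p1 < adeos-ipipe-2.6.14-ppc-1.3-00.patch    # base Adeos/I-pipe patch
    patch -p1 < ipipe-tracer-2.6.14-ppc-1.3-00.patch   # tracer patch, same revision
    # then enable the tracer in the kernel configuration; if the board
    # freezes after decompression, lower CONFIG_IPIPE_TRACE_SHIFT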


I gave it a try on a rather low-end PowerPC Walnut board (AMCC PowerPC
405GP, Rev. E at 200 MHz, 16 kB I-cache, 8 kB D-cache). A nice tool,
indeed, and it works fine. I have attached the results from running the
Xenomai "latency -t0" test under some load, showing latencies up to 140
us. The trace does not seem to show any obvious problems, I think.


Thanks.

Wolfgang.


bash-3.00# cat /proc/ipipe/version
1.3-00

bash-3.00# ./latency -p500
== Sampling period: 500 us
== Test mode: periodic user-mode task
== All results in microseconds
warming up...
...
RTH|-lat min|-lat avg|-lat max|-overrun|lat best|---lat worst
RTS|  59.020|  67.260| 138.910|   0|00:54:31/00:54:31

bash-3.00# cat /proc/ipipe/trace/max
I-pipe worst-case tracing service on 2.6.14/ipipe-1.3-00

Begin: 182095500 cycles, Trace Points: 192 (-10/+3), Length: 265 us
 
 +----- Hard IRQs ('|': locked)
 |+---- <unused>
 ||+--- <unused>
 |||+-- Xenomai
 ||||+- Linux ('*': domain stalled)
 |||||                        +---------- Delay flag ('+': > 1 us, '!': > 10 us)
 |||||                        |       +-- NMI noise ('N')
 |||||                        |       |
  Type    User Val.    Time    Delay  Function (Parent)
 *fn  -30    3.465  timer_interrupt+0x14 (__ipipe_do_timer+0x30)
 *fn  -26    2.430  profile_tick+0x14 (timer_interrupt+0x130)
 *fn  -24    1.865  profile_hit+0x14 (profile_tick+0x78)
 *fn  -22    1.700  update_process_times+0x14 (timer_interrupt+0x13c)
 *fn  -20    3.505  account_system_time+0x14 (update_process_times+0xac)
 *fn  -17    2.195  update_mem_hiwater+0x14 (account_system_time+0x78)
 *fn  -15    1.200  run_local_timers+0x14 (update_process_times+0xb0)
 *fn  -14   11.905  raise_softirq+0x14 (run_local_timers+0x30)
 *fn   -2    1.200  __ipipe_restore_root+0x14 (raise_softirq+0x84)
 *fn   -1    1.095  __ipipe_stall_root+0x14 (__ipipe_restore_root+0x34)
|   *begin  0x8000    0+   3.950  __ipipe_stall_root+0x98 (__ipipe_restore_root+0x34)
:|   *fn3+   1.500  scheduler_tick+0x14 (update_process_times+0x7c)
:|   *fn5+   4.040  sched_clock+0x14 (scheduler_tick+0x34)
:|   *fn9+   1.160  run_posix_cpu_timers+0x14 (update_process_times+0x84)
:|   *fn   10+   3.120  __ipipe_test_root+0x14 (run_posix_cpu_timers+0x44)
:|   *fn   13+   2.920  do_timer+0x14 (timer_interrupt+0x154)
:|   *fn   16+   1.500  profile_tick+0x14 (timer_interrupt+0x130)
:|   *fn   18+   1.140  profile_hit+0x14 (profile_tick+0x78)
:|   *fn   19+   1.270  update_process_times+0x14 (timer_interrupt+0x13c)
:|   *fn   20+   1.370  account_system_time+0x14 (update_process_times+0xac)
:|   *fn   21+   1.325  update_mem_hiwater+0x14 (account_system_time+0x78)
:|   *fn   23    0.995  run_local_timers+0x14 (update_process_times+0xb0)
:|   *fn   24+   1.140  raise_softirq+0x14 (run_local_timers+0x30)
:|   *fn   25+   1.350  __ipipe_test_and_stall_root+0x14 (raise_softirq+0x30)
:|   *fn   26+   1.205  __ipipe_restore_root+0x14 (raise_softirq+0x84)
:|   *fn   27+   1.735  __ipipe_stall_root+0x14 (__ipipe_restore_root+0x34)
:|   *fn   29+   1.160  scheduler_tick+0x14 (update_process_times+0x7c)
:|   *fn   30+   1.565  sched_clock+0x14 (scheduler_tick+0x34)
:|   *fn   32+   1.285  run_posix_cpu_timers+0x14 (update_process_times+0x84)
:|   *fn   33+   1.560  __ipipe_test_root+0x14 (run_posix_cpu_timers+0x44)
:|   *fn   35+   3.075  do_timer+0x14 (timer_interrupt+0x154)
:|   *fn   38+   1.460  profile_tick+0x14 (timer_interrupt+0x130)
:|   *fn   39+   1.300  profile_hit+0x14 (profile_tick+0x78)
:|   *fn   40+   1.035  update_process_times+0x14 (timer_interrupt+0x13c)
:|   *fn   41+   1.350  account_system_time+0x14 (update_process_times+0xac)
:|   *fn   43+   1.275  

Re: [Adeos-main] Re: [Xenomai-core] I-pipe latency tracer for ppc

2006-05-01 Thread Jan Kiszka
Wolfgang Grandegger wrote:
 Philippe Gerum wrote:

 As promised, the I-pipe tracer has been ported to ppc. People working
 on this architecture are invited to give it a try, it's a great tool
 to find out where the cycles are actually going.

 Just apply the tracer patch on top of the Adeos patch bearing the same
 revision number, and select the feature from the kernel configuration.
 If you happen to have some issue while booting the instrumented kernel
 on embedded setups (e.g. freeze after kernel decompression), try
 reducing the size of the trace log (CONFIG_IPIPE_TRACE_SHIFT).

 http://download.gna.org/adeos/patches/v2.6/ppc/tracer/
 
 I gave it a try on a rather low-end PowerPC Walnut board (AMCC PowerPC
 405GP, Rev. E at 200 MHz, 16 kB I-cache, 8 kB D-cache). A nice tool,
 indeed, and it works fine. I have attached the results from running the
 Xenomai "latency -t0" test under some load, showing latencies up to 140
 us. The trace does not seem to show any obvious problems, I think.
 
 Thanks.
 
 Wolfgang.
 
 
 
 
 
 [latency output and worst-case trace snipped; identical to the listing
 in the previous message]

This trace covers some long stall of the root domain, not the maximum
stall of the Xenomai domain that caused the 140 us above. See the
explanation for the stall bits above.

Maybe the maximum irq latency tracing is broken, I haven't tested it
with the latest ipipe patch revision yet. Well, that max path should not
trigger on root-domain stalls as long as there is some higher priority
domain running - and it should have been reset on Xenomai mounting. Hmm,
maybe only that reset does not work; please check by running
'echo > /proc/ipipe/trace/max' before the latency test.
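
I.e., roughly:

    echo > /proc/ipipe/trace/max    # reset the recorded worst-case path
    ./latency -p500                 # re-run the test
    cat /proc/ipipe/trace/max       # inspect the new worst-case trace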

You could also test if the -f option of latency works for you. You will
find the result of the back-trace freezing on max latency under
/proc/ipipe/trace/frozen. The number of back-trace points can be tuned
even after the event (only post-tracing requires ahead-of-time tuning).
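
Roughly, assuming -f combines with the options used above:

    ./latency -p500 -f              # freeze a back-trace on each new max latency
    cat /proc/ipipe/trace/frozen    # read the frozen back-trace afterwards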

Jan





Re: [Adeos-main] Re: [Xenomai-core] I-pipe latency tracer for ppc

2006-05-01 Thread Wolfgang Grandegger

Jan Kiszka wrote:

Wolfgang Grandegger wrote:

Philippe Gerum wrote:

As promised, the I-pipe tracer has been ported to ppc. People working
on this architecture are invited to give it a try, it's a great tool
to find out where the cycles are actually going.

Just apply the tracer patch on top of the Adeos patch bearing the same
revision number, and select the feature from the kernel configuration.
If you happen to have some issue while booting the instrumented kernel
on embedded setups (e.g. freeze after kernel decompression), try
reducing the size of the trace log (CONFIG_IPIPE_TRACE_SHIFT).

http://download.gna.org/adeos/patches/v2.6/ppc/tracer/

I gave it a try on a rather low-end PowerPC Walnut board (AMCC PowerPC
405GP, Rev. E at 200 MHz, 16 kB I-cache, 8 kB D-cache). A nice tool,
indeed, and it works fine. I have attached the results from running the
Xenomai "latency -t0" test under some load, showing latencies up to 140
us. The trace does not seem to show any obvious problems, I think.

Thanks.

Wolfgang.





 [quoted latency output and trace snipped; see the earlier messages in
 this thread]


This trace covers some long stall of the root domain, not the maximum
stall of the Xenomai domain that caused the 140 us above. See the
explanation for the stall bits above.


Ah, some feed for the real experts.



Maybe the maximum irq latency tracing is broken, I haven't tested it
with the latest ipipe patch revision yet. Well, that max path should not
trigger on root-domain stalls as long as there is some higher priority
domain running - and it should have been reset on Xenomai mounting. Hmm,
maybe only that reset does not work; please check by running
'echo > /proc/ipipe/trace/max' before the latency test.


Yes, that changed something. I have attached the new output listing.


You could also test if the -f option of latency works for you. You will
find the result of the back-trace freezing on max latency under
/proc/ipipe/trace/frozen. The number of back-trace points can be tuned
even after the event (only post-tracing requires ahead-of-time tuning).


Seems to work. The output has been added to the attached file.

Wolfgang.

bash-3.00# cat /proc/ipipe/trace/max
I-pipe worst-case tracing service on 2.6.14/ipipe-1.3-00

Begin: 90358006525 cycles, Trace Points: 25 (-10/+4), Length: 69 us
 
 +----- Hard IRQs ('|': locked)
 |+---- <unused>
 ||+--- <unused>
 |||+-- Xenomai
 ||||+- Linux ('*': domain stalled)
 |||||                        +---------- Delay flag ('+': > 1 us, '!': > 10 us)
 |||||                        |       +-- NMI noise ('N')
 |||||                        |       |
  Type    User Val.    Time    Delay  Function (Parent)
  fn  -25    1.935  schedule+0x14 (syscall_exit_work+0x110)
  fn  -23    2.075  profile_hit+0x14 (schedule+0x68)
  fn  -21    2.390  sched_clock+0x14 (schedule+0xa0)
  fn

Re: [Adeos-main] Re: [Xenomai-core] I-pipe latency tracer for ppc

2006-05-01 Thread Jan Kiszka
Wolfgang Grandegger wrote:
 Jan Kiszka wrote:
 Wolfgang Grandegger wrote:
 ...
 This trace covers some long stall of the root domain, not the maximum
 stall of the Xenomai domain that caused the 140 us above. See the
 explanation for the stall bits above.
 
 Ah, some feed for the real experts.

Indeed :), comments below.

 

 Maybe the maximum irq latency tracing is broken, I haven't tested it
 with the latest ipipe patch revision yet. Well, that max path should not
 trigger on root-domain stalls as long as there is some higher priority
 domain running - and it should have been reset on Xenomai mounting. Hmm,
 maybe only that reset does not work; please check by running
 'echo > /proc/ipipe/trace/max' before the latency test.
 
 Yes, that changed something. I have attached the new output listing.
 
 You could also test if the -f option of latency works for you. You will
 find the result of the back-trace freezing on max latency under
 /proc/ipipe/trace/frozen. The number of back-trace points can be tuned
 even after the event (only post-tracing requires ahead-of-time tuning).
 
 Seems to work. The output has been added to the attached file.
 
 Wolfgang.
 
 
 
 
 bash-3.00# cat /proc/ipipe/trace/max
 I-pipe worst-case tracing service on 2.6.14/ipipe-1.3-00
 
 Begin: 90358006525 cycles, Trace Points: 25 (-10/+4), Length: 69 us
  
  +----- Hard IRQs ('|': locked)
  |+---- <unused>
  ||+--- <unused>
  |||+-- Xenomai
  ||||+- Linux ('*': domain stalled)
  |||||                        +---------- Delay flag ('+': > 1 us, '!': > 10 us)
  |||||                        |       +-- NMI noise ('N')
  |||||                        |       |
   Type    User Val.    Time    Delay  Function (Parent)
   fn  -25    1.935  schedule+0x14 (syscall_exit_work+0x110)
   fn  -23    2.075  profile_hit+0x14 (schedule+0x68)
   fn  -21    2.390  sched_clock+0x14 (schedule+0xa0)
   fn  -19    4.400  __ipipe_stall_root+0x14 (schedule+0x118)
  *fn  -14    2.040  __ipipe_dispatch_event+0x14 (schedule+0x300)
 |  *begin  0x8001  -12    3.190  __ipipe_dispatch_event+0x240 (schedule+0x300)
 |  *end    0x8001   -9    1.990  __ipipe_dispatch_event+0x228 (schedule+0x300)
  *fn   -7    4.115  schedule_event+0x14 (__ipipe_dispatch_event+0x160)
 |  *begin  0x8001   -3    1.835  __ipipe_dispatch_event+0x198 (schedule+0x300)
 |  *end    0x8001   -1    1.815  __ipipe_dispatch_event+0x1b8 (schedule+0x300)
 |  *begin  0x8000    0+   4.715  schedule+0x6ac (syscall_exit_work+0x110)
 :|   *fn    4+   1.225  __switch_to+0x14 (schedule+0x358)
 :|   *fn    5+   5.025  __ipipe_test_and_stall_root+0x14 (__switch_to+0x30)
 :    *fn   10+   1.445  schedule_tail+0x14 (ret_from_fork+0x60)
 :    *fn   12+   3.450  __ipipe_unstall_root+0x14 (schedule_tail+0x48)

Instrumentation of the maximum IRQs-off path seems to be broken. The two
functions above were entered with hard-IRQs on, so they should have
interrupted the trace path.

 :|   *fn   15+   1.145  __ipipe_grab_timer+0x14 (__ipipe_ret_from_except+0x0)
 :|   *begin  0xc00466f4   16+   1.615  __ipipe_grab_timer+0x34 (__ipipe_ret_from_except+0x0)

Hmm, the User Value on x86 would contain the IRQ number here. Some
forgotten dereferencing? Looks like a pointer.

 :|   *fn   18+   3.665  __ipipe_handle_irq+0x14 (__ipipe_grab_timer+0x4c)
 :|   *fn   22+   1.980  __ipipe_walk_pipeline+0x14 (__ipipe_handle_irq+0x174)
 :|   *fn   24+   2.540  ipipe_suspend_domain+0x14 (__ipipe_walk_pipeline+0xe4)
 :|   *fn   26+   3.600  __ipipe_sync_stage+0x14 (ipipe_suspend_domain+0xa8)
 :|  **fn   30+   1.650  xnintr_clock_handler+0x14 (__ipipe_sync_stage+0x1ac)
 :|  **fn   31+   2.150  xnintr_irq_handler+0x14 (xnintr_clock_handler+0x38)
 :|  **fn   34+   1.980  xnpod_announce_tick+0x14 (xnintr_irq_handler+0x58)
 :|  **fn   36+   3.395  xntimer_do_tick_aperiodic+0x14 (xnpod_announce_tick+0x3c)
 :|  **fn   39+   1.645  xnthread_periodic_handler+0x14 (xntimer_do_tick_aperiodic+0x2b4)
 :|  **fn   41+   1.420  xnpod_resume_thread+0x14 (xnthread_periodic_handler+0x3c)
 :|  **fn   42+   4.010  ipipe_test_and_stall_pipeline_head+0x14 (xnpod_resume_thread+0x34)
 :|  **fn   46+   4.910  ipipe_restore_pipeline_head+0x14 (xnpod_resume_thread+0x120)
 :|  **fn   51+   1.870  xnpod_schedule+0x14 (xnintr_irq_handler+0xf0)
 :|  **fn   53+   9.430  ipipe_test_and_stall_pipeline_head+0x14 (xnpod_schedule+0x80)
 :|  **fn

Re: [Xenomai-core] xeno-test manpage patch

2006-05-01 Thread Romain Lenglet
Jim Cromie wrote:
  By the way, I suggest that you use a timestamp in the RFC
  3339 format (a subset of ISO 8601):
  date "+%Y-%m-%d %H:%M:%S%:z"
  or, assuming date is GNU date:
  date --rfc-3339=seconds

 I wanted a compact rep.
 Is there a std format that doesn't have a space?
 It's nicer for us xterm double-click cut-pasters.

Replace the space with a T: this is still an ISO 8601 format.
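
For example, with GNU date:

    date +%Y-%m-%dT%H:%M:%S%:z    # e.g. 2006-05-01T12:34:56+02:00 -- no space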

-- 
Romain LENGLET
