Li Yi (Adam) wrote:
> Hi Jan and Philippe,
> ...
>
> Also attached the new ipiper-tracer log.
Damn, no fitting reference trace at hand, but something seems to be
fishy. See comments/questions below.
> ------------------------------------------------------------------------
>
> root:~> ./latency -t2 -p200 -f
>
> root:~> cat /proc/ipipe/trace/frozen
> I-pipe frozen back-tracing service on 2.6.16.27-ADI-2006R2/ipipe-1.5-00
> ------------------------------------------------------------
> Freeze: 144879102832 cycles, Trace Points: 200 (+50)
>
> +----- Hard IRQs ('|': locked)
> |+---- <unused>
> ||+--- <unused>
> |||+-- Xenomai
> ||||+- Linux ('*': domain stalled, '+': current, '#': current+stalled)
> |||||  +---------- Delay flag ('+': > 1 us, '!': > 10 us)
> ||||| | +- NMI noise ('N')
> ||||| | |
> Type User Val. Time Delay Function (Parent)
...
> : +func -98+ 1.550 _kthread_should_stop+0x10
> (_gatekeeper_thread+0xec)
> :| +begin 0x80000001 -97+ 2.898 _gatekeeper_thread+0x100
> (__start+0xfffff000)
> :| *+func -94+ 1.432 _xnpod_resume_thread+0xa
> (_gatekeeper_thread+0x186)
> :| *+[ 51] -<?>- 0 -92+ 5.450 _xnpod_resume_thread+0x4a
> (__start+0xfffff000)
Here we go: this hardens the Xenomai thread related to pid 51 (likely
some now-gone part of latency).
http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/pod.c#L1555
> :| *+func -87+ 1.100 _xnpod_schedule_runnable+0xc
> (_gatekeeper_thread+0x1be)
> :| *+[ 0] swapper -1 -86+ 2.788 _xnpod_schedule_runnable+0x28
> (__start+0xfffff000)
This correlates to
http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/pod.c#L2529
xnpod_schedule_runnable returns without actually switching (off-topic:
seems to be the only use-case for this function right now...).
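For readers following along, such an early return is the usual pattern for this kind of helper. Here is a minimal toy sketch of the idea (all names invented, priorities assumed higher-is-better; this is not the actual nucleus code): the woken thread is linked into the ready queue, but the CPU is only handed over when it outranks the currently running thread.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch (invented names, not the real xnpod code) of a
 * "schedule runnable" helper: it marks the woken thread ready, but only
 * hands over the CPU when that thread outranks the one running now. */

struct toy_thread { int prio; bool ready; };

static struct toy_thread *toy_current;

/* Returns true when an actual context switch would happen. */
static bool toy_schedule_runnable(struct toy_thread *t)
{
    t->ready = true;                     /* link into the ready queue */
    if (toy_current && toy_current->prio >= t->prio)
        return false;                    /* current thread keeps running */
    toy_current = t;                     /* the switch would occur here */
    return true;
}

bool demo_wake_lower_prio(void)          /* expected: no switch */
{
    struct toy_thread running = { .prio = 10, .ready = true };
    struct toy_thread woken   = { .prio = 5,  .ready = false };
    toy_current = &running;
    bool switched = toy_schedule_runnable(&woken);
    toy_current = NULL;                  /* don't keep a stack pointer */
    return switched;
}

bool demo_wake_higher_prio(void)         /* expected: switch */
{
    struct toy_thread running = { .prio = 5,  .ready = true };
    struct toy_thread woken   = { .prio = 10, .ready = false };
    toy_current = &running;
    bool switched = toy_schedule_runnable(&woken);
    toy_current = NULL;
    return switched;
}
```

In the trace above the gatekeeper resumes pid 51 and the `[ 0] swapper` entry shows the helper returning on the current thread's behalf, i.e. the no-switch branch of this pattern.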
> :| *+func -83+ 2.852 _xnpod_schedule+0xa
> (_gatekeeper_thread+0x1ce)
> :| *+func -80+ 2.970 _ipipe_trigger_irq+0x12
> (_xnpod_schedule+0x2e6)
> :| *+func -77+ 2.120 ___ipipe_handle_irq+0x1e
> (_ipipe_trigger_irq+0x86)
> :| *+func -75+ 2.058 ___ipipe_dispatch_wired+0x14
> (___ipipe_handle_irq+0x190)
> :| *+func -73+ 1.722 ___ipipe_restore_pipeline_head+0xa
> (_gatekeeper_thread+0x138)
> :| +func -71+ 2.130 ___ipipe_walk_pipeline+0x18
> (___ipipe_restore_pipeline_head+0xd0)
> :| + func -69+ 2.410 _ipipe_suspend_domain+0xa
> (___ipipe_walk_pipeline+0xac)
> :| + func -67+ 5.048 ___ipipe_sync_stage+0xe
> (_ipipe_suspend_domain+0x4e)
> :| # func -62+ 2.452 _xnpod_schedule_handler+0xe
> (___ipipe_sync_stage+0x186)
> :| # func -59+ 2.130 _xnpod_schedule+0xa
> (_xnpod_schedule_handler+0x5c)
> :| # [ 0] swapper -1 -57+ 7.180 _xnpod_schedule+0x104
> (__start+0xfffff000)
This must be
http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/pod.c#L2337
saying "root thread is about to be scheduled away". But then I'm
missing code here that one would normally see on x86:
http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/pod.c#L2442
Likely inlined on bfin, but this *must* make some noise:
http://www.rts.uni-hannover.de/xenomai/lxr/source/ksrc/nucleus/pod.c#L2453
Instead we enter the pending timer interrupt. This currently makes no
sense to me; rather, it makes me think that something is skewed. Maybe
it's only the tracer, maybe more...
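As an aside for readers: inlining really can make code vanish from a function-entry trace, because the tracer hooks real function prologues only. A toy illustration (a hand-placed counter stands in for the compiler-emitted entry hook; not real tracer code):

```c
#include <assert.h>

/* Toy illustration of why inlined code leaves no trace entries: a
 * function-entry tracer hooks real prologues only. A hand-placed
 * counter stands in for the compiler-emitted entry hook here. */

static int trace_hits;

__attribute__((noinline)) static int outlined_add(int x)
{
    trace_hits++;            /* hook fires: this function has a prologue */
    return x + 1;
}

static inline int inlined_add(int x)
{
    /* folded into the caller: no prologue, hence no hook to fire */
    return x + 1;
}

int demo_trace_hits(void)
{
    trace_hits = 0;
    int r = outlined_add(1) + outlined_add(2) + inlined_add(3);
    (void)r;
    return trace_hits;       /* only the two outlined calls were seen */
}
```

That explains a *silently* missing function, but not one that the source explicitly instruments, which is why the quiet spot in the trace above remains suspicious.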
> :| # end 0x80000000 -50+ 2.278 ___sched_text_start+0x5ca
> (__start+0xfffff000)
> :| # func -48 0.922 ___ipipe_grab_irq+0x22
> (__common_int_entry+0x72)
> :| # begin 0x00000006 -47+ 1.820 ___ipipe_grab_irq+0x34
> (__start+0xfffff000)
> :| # func -45+ 1.530 ___ipipe_handle_irq+0x1e
> (___ipipe_grab_irq+0x3c)
> :| # func -43+ 1.168 _rthal_timer_ack+0x8
> (___ipipe_handle_irq+0x178)
> :| # func -42+ 1.102 ___ipipe_dispatch_wired+0x14
> (___ipipe_handle_irq+0x190)
> :| # end 0x00000006 -41+ 2.010 ___ipipe_grab_irq+0x42
> (__start+0xfffff000)
> : # func -39+ 1.090 _ipipe_unstall_pipeline_head+0x8
> (_xnshadow_harden+0xce)
> :| # begin 0x80000001 -38+ 2.308 _ipipe_unstall_pipeline_head+0x18
> (__start+0xfffff000)
That's another weirdness, but this one is very likely in the tracer:
both trace points should contain the same parent entry, yet the second
one is just nonsense. And such parents appear all over the place -
hmm, actually every explicit trace point shows this problem. Is there
no way on bfin to get the caller from anywhere inside a function? I
guess there is, likely via a register, right?
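(One plausible route, shown here only as an assumption about what the tracer could use, not as what it does: GCC's `__builtin_return_address(0)` yields the caller's address portably, and on Blackfin the hardware keeps the return address in the RETS register, which the compiler reads for you.)

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical illustration, not the actual I-pipe tracer code: GCC's
 * __builtin_return_address(0) yields the caller's address; on Blackfin
 * the return address lives in the RETS register, which GCC reads. */

static void *recorded_parent;

/* noinline keeps the call real, so a genuine return address exists */
__attribute__((noinline)) static void trace_point(void)
{
    recorded_parent = __builtin_return_address(0);
}

void *parent_of_this_call(void)
{
    trace_point();              /* recorded_parent now points back here */
    return recorded_parent;
}
```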
Jan
_______________________________________________
Adeos-main mailing list
[email protected]
https://mail.gna.org/listinfo/adeos-main
