[EMAIL PROTECTED] wrote:
> Quoting Philippe Gerum <[EMAIL PROTECTED]>:
>
>> The risk in ironing those PCI locks is to run with hw interrupts
>> disabled for a long time, inducing pathological latencies, so running
>> RTAI's latency test in the background should help detect those peaks.
>>
>> However, we may find nothing bad if the kernel uses the MMConfig
>> access method for the PCI config space, since that is basically fast
>> mmio. But since you seem to be running on x86_32, we may also want to
>> check whether BIOS or direct access to the PCI config raises the
>> typical latency too much (I'm unsure that PCI_GOBIOS will give us
>> decent results, though).
>>
>> To sum up: with different settings for the PCI config access method
>> in "Bus options" (by order of criticality: MMConfig, then Direct,
>> then maybe BIOS), does the latency tool report pathological peaks?
>
> Hi Philippe,
>
> I played with the different PCI configurations and the results are
> devastating. Latencies (and jitter) skyrocket after a few minutes of
> testing and peak at several milliseconds. I haven't done the regression
> with 'normal' interrupts yet, but that's up next. Additionally,
> MMCONFIG produced some strange messages at boot.
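
For reference, a minimal way to check which access method a given kernel
build actually uses, assuming a typical x86_32 configuration (the
CONFIG_PCI_GO* symbol names and the exact dmesg wording may differ between
kernel versions):

    # which access mode the kernel was configured with
    grep -E 'CONFIG_PCI_GO(MMCONFIG|DIRECT|BIOS|ANY)' /boot/config-$(uname -r)

    # which method the kernel actually picked at boot
    dmesg | grep -i 'PCI:'

Switching between MMConfig, Direct, and BIOS then means reselecting
"PCI access mode" under "Bus options (PCI etc.)", rebuilding, and rerunning
the latency test under load for each variant.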
Could you catch some path traces with the ipipe latency tracer when this
happens? See [1] for details. Dunno if RTAI's latency tool is prepared to
support you here (triggering a trace on new peaks), but
CONFIG_IPIPE_TRACE_IRQSOFF should already suffice in this case - if the
latency is due to IRQ disabling over PCI code.

Thanks,
Jan

[1] http://www.xenomai.org/index.php/I-pipe:Tracer

--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
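
A rough sketch of the run-time side, assuming the kernel was rebuilt with
CONFIG_IPIPE_TRACE=y and CONFIG_IPIPE_TRACE_IRQSOFF=y (the /proc paths
follow the I-pipe tracer documentation in [1] and may vary slightly between
I-pipe versions):

    # arm the tracer and reset the recorded worst-case path
    echo 1 > /proc/ipipe/trace/enable
    echo 0 > /proc/ipipe/trace/max

    # ... run the RTAI latency test until a new peak shows up ...

    # dump the longest IRQs-off path captured so far
    cat /proc/ipipe/trace/max

The trace dump should show where interrupts were kept off longest, which
would confirm (or rule out) the PCI config accesses as the culprit.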
