On 16.07.2015 at 08:30, Gilles Chanteperdrix wrote:
On Wed, Jul 15, 2015 at 11:00:55PM +0200, Johann Obermayr wrote:
On 14.07.2015 at 02:02, Gilles Chanteperdrix wrote:
On Tue, Jul 14, 2015 at 01:30:59AM +0200, Johann Obermayr wrote:
On 14.07.2015 at 00:39, Gilles Chanteperdrix wrote:
You mean there is an issue with bus contention? Are there many
things occupying the bus when the problem happens?
Hi,
both CPU cores can access our PCI card.
At the moment we only have one PCI card, with an FPGA (containing a DPRAM) and an SRAM.
The FPGA generates our system tick and carries a company-internal bus system;
this bus system can be configured via the DPRAM and the FPGA config registers.
We also have to read which IRQ came from the FPGA, because internally it can have
more than one IRQ source, and we must acknowledge that IRQ (by reading an FPGA
register).
We see that when one core is accessing the PCI bus, the other core gets high
latency.
core0 = important & high-priority accesses to the FPGA and SRAM
core1 = Linux & visualization & some other low-priority work
If a low-priority task copies a memory block to/from the SRAM, core0 gets high
latency in our IRQ handler when reading data from the FPGA.
So we added our own PCI-blocker task on core1. This task is started about 50 us
before the next tick IRQ arrives; now our IRQ handler can access the FPGA without
waiting.
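The blocker scheme described above can be sketched with a plain pthreads mutex (hypothetical names; a real Xenomai setup would use RT tasks and priorities, this only shows the serialization idea):

```c
#include <pthread.h>
#include <errno.h>

/* Hypothetical lock serializing access to the PCI card between cores.
 * The low-priority SRAM copy on core1 holds it only while copying;
 * the blocker task takes it ~50us before the tick IRQ, so the core0
 * IRQ handler finds the bus idle. */
static pthread_mutex_t pci_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by the low-priority task around each SRAM block copy. */
int sram_copy_begin(void) { return pthread_mutex_lock(&pci_lock); }
int sram_copy_end(void)   { return pthread_mutex_unlock(&pci_lock); }

/* Called by the blocker task shortly before the next tick:
 * returns 0 once any in-flight copy has drained. */
int pci_blocker_acquire(void) { return pthread_mutex_lock(&pci_lock); }
int pci_blocker_release(void) { return pthread_mutex_unlock(&pci_lock); }

/* Non-blocking probe: 0 if the bus is free, EBUSY otherwise. */
int pci_bus_is_busy(void)
{
    if (pthread_mutex_trylock(&pci_lock) == 0) {
        pthread_mutex_unlock(&pci_lock);
        return 0;
    }
    return EBUSY;
}
```

The point of the design is simply that the copy can never straddle the tick: the blocker sleeps until ~50 us before the IRQ, acquires the lock, and releases it after the handler has run.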
Well, normally, a PCI bus controller has parameters controlling the
duration of the longest burst, adequately named "PCI latency". You
should also look at whether caching is enabled. A PCI bridge will
normally prefetch from a PCI bar (provided that the FPGA indicates
in the configuration bytes that the memory is prefetchable) to avoid
CPU wait states. Using what was once called MTRR, but has a new name
in newer processors, you can cause the processor to buffer data before
sending large bursts on the PCI bus. I think what you need to have a
look at is the documentation of the PCIe-to-PCI bridge, to see if
you cannot improve the situation by configuring it better. Also,
reading or writing to RAM in the FPGA looks pretty strange, since I
would guess an FPGA could be master on the PCI bus and do DMA
itself, thereby relieving the CPU.
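For reference, the latency timer mentioned above is a single byte at offset 0x0D of the standard PCI config header (the cache line size sits at 0x0C). On Linux the raw header can be read from sysfs, e.g. /sys/bus/pci/devices/0000:02:04.0/config (path assumed for this card), or with "setpci -s 02:04.0 latency_timer". A minimal helper to pull the value out of such a dump might look like:

```c
#include <stdint.h>
#include <stddef.h>

/* Offsets in the standard PCI configuration header. */
enum { PCI_CACHE_LINE_SIZE = 0x0C, PCI_LATENCY_TIMER = 0x0D };

/* Extract the latency timer value from a raw config-space dump;
 * returns 0 if the dump is too short to contain it. */
uint8_t pci_latency_timer(const uint8_t *cfg, size_t len)
{
    return (len > PCI_LATENCY_TIMER) ? cfg[PCI_LATENCY_TIMER] : 0;
}
```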
Hello,
how can I check whether the TSCs on both cores run synchronously?
Dual-core Celeron, no hyperthreading.
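One way to probe this is to pin a thread to each core with sched_setaffinity() and compare RDTSC readings taken around a shared-memory handshake; the kernel also reports at boot when it considers the TSCs unusable ("Marking TSC unstable", an assumption about the exact message). A minimal RDTSC reader (x86 only) as a starting point:

```c
#include <stdint.h>

/* Read the x86 time-stamp counter. RDTSC returns the low 32 bits in
 * EAX and the high 32 bits in EDX; combine them into one 64-bit value.
 * Cross-core comparison logic (thread pinning, handshake) is omitted. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```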
Quite frankly, I think it is pretty clear what the problem is now:
you are accessing your SRAM in uncached mode, which causes long
wait states in your processor and makes it misbehave somehow. Try
to fix that, and enable prefetching on the PCI bridge as well; I
believe everything else is a red herring.
Regards.
Hello,
this is our PCI-card
02:04.0 ACCESS Bus: Device 5112:2200
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium
>TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 32
Interrupt: pin A routed to IRQ 11
Region 0: Memory at f6800000 (32-bit, non-prefetchable) FPGA & SRAM
Region 1: Memory at f7000000 (32-bit, non-prefetchable) DPRAM
Kernel driver in use: Sigmatek PCI
The SRAM is part of the FPGA space; for the FPGA we must use non-prefetchable
memory. I will ask whether there is a way to add a separate region for the SRAM
that is prefetchable.
For the bridge chip, I did not find anything about prefetching.
Why can a Core1 SRAM access sometimes not be interrupted by the timer/scheduler?
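On the last question: each single uncached read or write over the bus is one PCI transaction the CPU cannot abandon halfway, and a tight copy loop over a non-prefetchable BAR issues them back to back, so the timer interrupt is only taken between transactions. One possible mitigation (the chunk size is a made-up tuning value) is to copy in small chunks with a preemption point between them:

```c
#include <stddef.h>
#include <stdint.h>

/* Copy from (uncached) SRAM in small chunks. Each individual MMIO
 * access is one PCI transaction the CPU cannot abort, so an interrupt
 * is only serviced between accesses; small chunks keep the gap between
 * preemption opportunities short. */
#define CHUNK_WORDS 16 /* hypothetical tuning parameter */

void sram_copy_chunked(volatile const uint32_t *src, uint32_t *dst,
                       size_t nwords)
{
    while (nwords) {
        size_t n = nwords < CHUNK_WORDS ? nwords : CHUNK_WORDS;
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
        src += n;
        dst += n;
        nwords -= n;
        /* a real-time variant would yield or check for a pending
         * tick here, e.g. sched_yield() */
    }
}
```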
_______________________________________________
Xenomai mailing list
[email protected]
http://xenomai.org/mailman/listinfo/xenomai