Ahh, well, I had my read request segmentation switched off, so nothing over
512B was supposed to work!
Interesting that we got more than 3x that (1664B) before we got errors....
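For anyone following along, "read request segmentation" here means chopping a large DMA read into request TLPs no bigger than the link's MaxReadReq (512B per the lspci dump below), and PCIe additionally forbids a request from crossing a 4KB address boundary. A minimal sketch of that splitting (function name is mine, not from the OCPI code):

```python
def segment_read(addr: int, length: int, max_read_req: int = 512):
    """Yield (address, size) pairs for the read request TLPs of one DMA read.

    Each request is capped at max_read_req bytes and must not cross a
    4 KB boundary, per the PCIe transaction layer rules.
    """
    requests = []
    while length > 0:
        # Largest legal request from this address: limited by MaxReadReq
        # and by the distance to the next 4 KB boundary.
        to_4k = 4096 - (addr % 4096)
        size = min(length, max_read_req, to_4k)
        requests.append((addr, size))
        addr += size
        length -= size
    return requests
```

With segmentation enabled, a 1664B read from an aligned address becomes four requests (512+512+512+128); with it switched off, the whole 1664B would go out as one over-sized request, which is why anything over 512B was expected to fail.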
03:00.0 RAM memory: Xilinx Corporation Unknown device 4243 (rev 02)
Subsystem: Xilinx Corporation Unknown device 0007
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 255
Region 0: Memory at f1000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at f2ff0000 (32-bit, non-prefetchable) [size=64K]
Expansion ROM at f2e00000 [disabled] [size=1M]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [48] Message Signalled Interrupts: 64bit+ Queue=0/0
Enable-
Address: 0000000000000000 Data: 0000
Capabilities: [60] Express Endpoint IRQ 0
Device: Supported: MaxPayload 512 bytes, PhantFunc 1,
ExtTag+
Device: Latency L0s unlimited, L1 unlimited
Device: AtnBtn- AtnInd- PwrInd-
Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
Device: RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
Device: MaxPayload 128 bytes, MaxReadReq 512 bytes
Link: Supported Speed 2.5Gb/s, Width x8, ASPM L0s, Port 0
Link: Latency L0s unlimited, L1 unlimited
Link: ASPM Disabled RCB 64 bytes CommClk- ExtSynch-
Link: Speed 2.5Gb/s, Width x8
Capabilities: [100] Device Serial Number 35-0a-00-01-01-00-00-00
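One detail worth noting in the lspci output above: RCB is 64 bytes. The host is allowed to answer a single read request with multiple completion TLPs, and (per PCIe clause 2.2.9 and the completion rules) any split must land on an RCB-aligned address, with the requester reassembling via the remaining byte count. A sketch of the worst case, where the completer splits at every RCB boundary (names are illustrative):

```python
def completion_boundaries(addr: int, length: int, rcb: int = 64):
    """Return (address, size) of each completion TLP a completer may send
    if it splits a read completion at every RCB-aligned boundary."""
    parts = []
    while length > 0:
        # A completion may end only at an RCB boundary (or at the end
        # of the requested data), so the first piece may be short.
        size = min(length, rcb - (addr % rcb))
        parts.append((addr, size))
        addr += size
        length -= size
    return parts
```

So with RCB=64, a single 512B read request whose start is RCB-misaligned can come back as up to 9 completions, each of which the requester-side completion logic has to track and merge.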
On Sat, Dec 26, 2009 at 3:01 PM, Shepard Siegel <
[email protected]> wrote:
> This drop fixes a defect seen on Wednesday where activeMessage ("pull") DMA
> would fail for transfers >64B.
> This is an interim fix that is tested to work for sizes up to and including
> 1536B.
>
> [s...@core960 ~jim]$ ./testDMA "m" "1 3 5" "0 1024 1536" "1 1000"
> passes
>
> There is an issue with read DMA that I'm looking at: 1664B works, 1668B
> fails.
> I believe this is related to my work in progress on the completion logic
> core (the logic that enforces PCIe clause 2.2.9)
>
> [s...@core960 ~jim]$ sudo -E ./jimdo -r1im -r3op -n 1 -i 1 -I 1664
> Starting: sending 1 messages of 1664 (buffer 2048)
> Buffer counts: 0o 1 1i 1 3o 1 4i 1 6o 1 0i 1
> Active indicators: 0o (null) 1i active 3o passive 4i (null) 6o (null) 0i
> (null)
> OCFRP: 0000:03:00.0, with bitstream birthday: Sat Dec 26 13:43:12 2009
> Port WMIin, a provider/consumer, has options 0x16, initial role
> ActiveMessage
> other has options 0xe, initial role NoRole
> after negotiation, port WMIin, a provider/consumer, has role
> ActiveMessage
> other has role ActiveFlowControl
> DMA Memory: 256M at 0x5f700000
> Port WMIout, a user/producer, has options 0x16, initial role Passive
> other has options 0xe, initial role NoRole
> after negotiation, port WMIout, a user/producer, has role Passive
> other has role ActiveOnly
> Successfully sent and received 1 messages
>
> [s...@core960 ~jim]$ sudo -E ./jimdo -r1im -r3op -n 1 -i 1 -I 1668
> Starting: sending 1 messages of 1668 (buffer 2048)
> Buffer counts: 0o 1 1i 1 3o 1 4i 1 6o 1 0i 1
> Active indicators: 0o (null) 1i active 3o passive 4i (null) 6o (null) 0i
> (null)
> OCFRP: 0000:03:00.0, with bitstream birthday: Sat Dec 26 13:43:12 2009
> Port WMIin, a provider/consumer, has options 0x16, initial role
> ActiveMessage
> other has options 0xe, initial role NoRole
> after negotiation, port WMIin, a provider/consumer, has role
> ActiveMessage
> other has role ActiveFlowControl
> DMA Memory: 256M at 0x5f700000
> Port WMIout, a user/producer, has options 0x16, initial role Passive
> other has options 0xe, initial role NoRole
> after negotiation, port WMIout, a user/producer, has role Passive
> other has role ActiveOnly
> (hangs)
>
>
> I have observed almost no congestion internally; the attached trace PDF
> (of two DMAs running at the same time) shows how under-utilized we are
> while waiting for the host.
>
> -Shep
>
>
_______________________________________________
opencpi_dev mailing list
[email protected]
http://lists.opencpi.org/listinfo.cgi/opencpi_dev-opencpi.org