It could be something from your multiport cache changes... here's how it's supposed to work:
1. When the L2 responds to cpu0.dcache in satisfyCpuSideRequest() (in cache_impl.hh), it notices that it can give cpu0.dcache an owned copy (see the comment starting "we can give the requester an exclusive copy"), and sets the mem_inhibit line to indicate this (we're piggybacking that signal for this alternate purpose in this case).

2. cpu0.dcache will notice that this line was set *on the request* (not the response) via the call to myCache()->markInService(mshr, pkt) that occurs after a successful call to sendTiming() in MemSidePort::sendPacket(). This call ends up in MSHR::markInService(), where pendingDirty should get set because the condition (!pkt->sharedAsserted() && pkt->memInhibitAsserted()) is true.

3. When the request from cpu1.dcache comes across the bus, cpu0.dcache will snoop it before the L2 is accessed. There should be a hit on the MSHR, and in MSHR::handleSnoop() the isPendingDirty() branch should be taken, causing cpu0.dcache to assert the mem_inhibit line, which in turn should prevent the L2 from responding (both checks are sketched below).
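In code, the checks in steps 2 and 3 boil down to roughly the following. This is a paraphrase built from the conditions described above, not verbatim source, so the details in your tree (particularly with the multiport changes) may differ:

    // Step 2, in MSHR::markInService(): the request packet says the
    // responder will hand over ownership (mem_inhibit asserted, shared
    // not asserted), so record that on the MSHR before the data arrives.
    pendingDirty = !pkt->sharedAsserted() && pkt->memInhibitAsserted();

    // Step 3, in MSHR::handleSnoop(): a snoop hits this MSHR while
    // ownership is pending, so this cache must be the one to respond;
    // asserting mem_inhibit on the snoop packet tells the L2 (and
    // memory) not to respond as well.
    if (isPendingDirty()) {
        pkt->assertMemInhibit();
        // ... defer the snoop and respond once our data arrives ...
    }

If either of these pieces gets skipped on the multiport path, the L2 responds too, which matches the duplicate ReadExResp in your trace below.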
Steve

On Mon, Nov 1, 2010 at 11:55 AM, Lesha Jolondz <[email protected]> wrote:

> Hi Steve,
>
> I have just updated my M5 source, but the issue still persists.
>
> I would like to remark that I only see the issue when working with the
> Multiported cache; I don't see it with the default L2 cache. Do you think
> it is possible that the bug is in the Multiported cache implementation?
> Which part (class) of M5 behaves incorrectly in this scenario?
>
> I would be happy to know what the correct scenario is and where I should
> look for the bug.
>
> Thanks in advance,
> Aleksei
>
> On Mon, Nov 1, 2010 at 9:41 AM, Steve Reinhardt <[email protected]> wrote:
>
>> Yea, this looks like a bug... thanks for the nice detailed analysis.
>> Clearly cpu0.dcache and l2 should not both be responding to the
>> cpu1.dcache request. I think that's what the assertion is about: when the
>> second response comes in to cpu1.dcache, it has no outstanding request to
>> satisfy because the first response has already taken care of it. At least
>> that's what appears to be going on.
>>
>> Are you using the latest version of the code in the m5 repo? If so, let
>> me know... I believe this situation should be handled correctly there, but
>> it's pretty involved, so I don't want to have to explain it if it's just a
>> matter of you needing to update your code. But if you are seeing this in
>> the latest version, I'll gladly explain how it's supposed to work if you
>> are willing to try and track down where it's going wrong.
>>
>> Steve
>>
>> On Mon, Nov 1, 2010 at 9:03 AM, Lesha Jolondz <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I am trying to work on and debug a Multiported L2 cache. I think I have
>>> found a bug in the current M5 cache system that the Multiported cache
>>> reveals.
>>>
>>> I use the default FS configuration with minor latency changes for a
>>> 4-core system and run the StreamCluster PARSEC benchmark.
>>>
>>> Here is the scenario:
>>>
>>> 1) A read miss happens at the Core 0 D-Cache. The D-Cache sends a
>>> request to the L2:
>>> 2385290520000: system.tol2bus: recvTiming: src 2 dst -1 ReadReq 0x1117b80
>>> 2) The L2 schedules service and waits for the dispatch latency:
>>> 2385290520000: global: Schedule a service for port 0
>>> 3) The L2 dispatches the request and calls timingAccess(pkt) - L2 hit:
>>> 2385290521000: global: Dispatching packet for addr 0x1117b80
>>> 2385290521000: system.l2: ReadReq 1117b80 hit
>>> 4) A write miss happens at the Core 1 D-Cache:
>>> 2385290521500: system.cpu1.dcache: WriteReq 1117b80 miss
>>> 5) The Core 1 D-Cache sends a request to the L2 and snoops the Core 0
>>> D-Cache:
>>> 2385290523500: system.tol2bus: recvTiming: src 4 dst -1 ReadExReq 0x1117b80
>>> 2385290523500: system.cpu0.dcache: Deferring snoop on in-service MSHR to blk 1117b80
>>> 2385290523500: global: Schedule a service for port 0
>>> 6) The L2 dispatches the request and calls timingAccess(pkt) - L2 hit:
>>> 2385290525000: global: Dispatching packet for addr 0x1117b80
>>> 2385290525000: system.l2: ReadExReq 1117b80 hit
>>> 7) The L2 replies to the Core 0 D-Cache, and the D-Cache processes the
>>> deferred Core 1 D-Cache request:
>>> 2385290526000: system.tol2bus: recvTiming: src 0 dst 2 ReadResp 0x1117b80
>>> 2385290526000: system.cpu0.dcache: Handling response to 1117b80
>>> 2385290526000: system.cpu0.dcache: Block for addr 1117b80 being updated in Cache
>>> 2385290526000: system.cpu0.dcache: replacement: replacing 1111b80 with 1117b80: clean
>>> 2385290526000: system.cpu0.dcache: Block addr 1117b80 moving from state 0 to 15
>>> 2385290526000: system.cpu0.dcache: processing deferred snoop...
>>> 2385290526000: system.cpu0.dcache: snooped a ReadExReq request for addr 1117b80, responding, new state is 0
>>> 8) The Core 1 D-Cache receives the Core 0 D-Cache reply:
>>> 2385290527891: system.tol2bus: recvTiming: src 2 dst -1 ReadReq 0x1117b80 BUSY
>>> 2385290528307: system.tol2bus: Sending a retry to system.cpu0.dcache-mem_side_port
>>> 2385290528307: system.tol2bus: recvTiming: src 2 dst 4 ReadExResp 0x1117b80
>>> 2385290528307: system.cpu1.dcache: Handling response to 1117b80
>>> 2385290528307: system.cpu1.dcache: Block for addr 1117b80 being updated in Cache
>>> 2385290528307: system.cpu1.dcache: replacement: replacing 3e063b80 with 1117b80: writeback
>>> 2385290528307: system.cpu1.dcache: Block addr 1117b80 moving from state 0 to 15
>>> 9) The Core 1 D-Cache receives the L2 reply:
>>> 2385290531226: system.tol2bus: recvTiming: src 0 dst 4 ReadExResp 0x1117b80
>>> 2385290531226: system.cpu1.dcache: Handling response to 1117b80
>>>
>>> ... and then an assertion fires:
>>> m5.opt: build/ALPHA_FS/mem/cache/mshr.hh:248: MSHR::Target* MSHR::getTarget() const: Assertion `hasTargets()' failed.
>>>
>>> What could be the reason for the assertion, and how can it be fixed? Is
>>> it a bug?
>>>
>>> Regards,
>>> Aleksei
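P.S. For reference, the assertion quoted above is just the sanity check in MSHR::getTarget() in mem/cache/mshr.hh, which looks roughly like this (a sketch, not line-exact source):

    // Return the first outstanding target for this MSHR. By the time
    // the second ReadExResp arrives, the first response has already
    // satisfied and removed every target, so hasTargets() is false
    // and the assert fires.
    Target *getTarget() const
    {
        assert(hasTargets());
        return &targets->front();
    }

So the assert itself is only the symptom; the real problem is the duplicate response traced above.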
_______________________________________________
m5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
