> On Feb. 10, 2015, 9:37 a.m., Stephan Diestelhorst wrote:
> > I have had a similar impulse when inspecting this code.  However, a 
> > prefetch hitting a writeback in an upper cache is actually already handled 
> > in Cache<TagStore>::getTimingPacket():
> > 
> >             // Check if the prefetch hit a writeback in an upper cache
> >             // and if so we will eventually get a HardPFResp from
> >             // above
> >             if (snoop_pkt.memInhibitAsserted()) {
> >                 // If we are getting a non-shared response it is dirty
> >                 bool pending_dirty_resp = !snoop_pkt.sharedAsserted();
> >                 markInService(mshr, pending_dirty_resp);
> >                 DPRINTF(Cache, "Upward snoop of prefetch for addr"
> >                         " %#x (%s) hit\n",
> >                         tgt_pkt->getAddr(), tgt_pkt->isSecure()? "s": "ns");
> >                 return NULL;
> >             }
> > 
> > We are currently testing a patch that shuffles this thing upwards; we will 
> > have more detail tomorrow.  Either way, if we want to go with this patch, it 
> > should address this seemingly dead bit of logic as well.  My suggestion is 
> > to give this an additional day, and in the meantime test as Andreas has 
> > suggested.
> 
> Steve Reinhardt wrote:
>     Good catch, I hadn't noticed that before.  I believe what's happening in 
> George's case is that there are multiple L1s above a shared L2: one of 
> them has/had the block in the O (dirty shared) state and is in the process of 
> writing it back, while the others have the block in the Shared state.  So the 
> one with the block in its writeback buffer responds with the data (in 
> accordance with the code snippet you have there), but meanwhile the other 
> caches have squashed the prefetch, so your code never gets executed, because 
> this code immediately above it gets executed instead:
>     
>                 // Check to see if the prefetch was squashed by an upper
>                 // cache (to prevent us from grabbing the line) or if a
>                 // writeback arrived between the time the prefetch was
>                 // placed in the MSHRs and when it was selected to be sent.
>                 if (snoop_pkt.prefetchSquashed() || blk != NULL) {
>                     DPRINTF(Cache, "Prefetch squashed by cache.  "
>                                    "Deallocating mshr target %#x.\n",
>                             mshr->addr);
>                     [...]
>                     return NULL;
>                 }
>     
>     Interestingly the point where the prefetches get squashed is here, in 
> handleSnoop():
>     
>         // Invalidate any prefetches from below that would strip write
>         // permissions.  MemCmd::HardPFReq is only observed by upstream
>         // caches.  After missing above and in its own cache, a new
>         // MemCmd::ReadReq is created that downstream caches observe.
>         if (pkt->cmd == MemCmd::HardPFReq) {
>             DPRINTF(Cache, "Squashing prefetch from lower cache %#x\n",
>                     pkt->getAddr());
>             pkt->setPrefetchSquashed();
>             return;
>         }
>     
>     where, despite the comment about stripping write permissions, the prefetch 
> gets squashed whenever the block is valid, regardless of its coherence state.

So if nothing else, my commit message describes the bug incorrectly... it's not 
just a matter of hitting in the write buffer; it's the case where the prefetch 
*both* hits in the write buffer of one upper-level cache *and* gets squashed 
because of a hit in another upper-level cache.  The actual symptom we were 
seeing was that the response from the cache with the write-buffer copy 
triggered an assertion, since the receiving cache had squashed the prefetch and 
so wasn't expecting a response.


- Steve


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2636/#review5885
-----------------------------------------------------------


On Feb. 5, 2015, 4:38 p.m., Steve Reinhardt wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/2636/
> -----------------------------------------------------------
> 
> (Updated Feb. 5, 2015, 4:38 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> -------
> 
> Changeset 10683:3147f3a868f7
> ---------------------------
> mem: fix prefetcher bug regarding write buffer hits
> 
> Prefetches are supposed to be squashed if the block is already
> present in a higher-level cache.  We squashed appropriately when
> the block was in a higher-level cache or MSHR, but did not
> properly handle the case where the block is in the write buffer.
> 
> Thanks to George Michelogiannakis <mihe...@lbl.gov> for
> help in tracking down and testing this fix.
> 
> 
> Diffs
> -----
> 
>   src/mem/cache/cache_impl.hh 3d17366c0423a59478ae63d40c8feeea34df218a 
> 
> Diff: http://reviews.gem5.org/r/2636/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Steve Reinhardt
> 
>

_______________________________________________
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev
