Hi All,
I found the following behaviour by tracing the simulation:
I-L1 miss:
    L2 hit:  get back to L1                          [Seems normal...]
    L2 miss: fetch from memory, get back to L1       [Seems normal...]
D-L1 read miss:
    L2 hit:  get back to L1                          [Seems normal...]
    L2 miss: fetch from memory, get back to L1       [Seems normal...]
D-L1 write miss:
    L2 hit:  *ALWAYS "Not Writable"*, forced to become a miss,
             fetch from memory, then invalidate the L2 block
    L2 miss: fetch from memory, then invalidate the L2 block
D-L1 writeback:
    L2 hit:  (never observed)
    L2 miss: block is either in L2 but *NOT valid* (most common)
             or not in L2 at all
----------------------------------------------------------------------------------------
1. My confusion is: why is it designed such that whenever a D-L1 write
miss goes down to L2, the L2 block is always not writable, even when the
corresponding block is present and valid in L2? This forces an L2 miss
and a fetch from memory. (I suspect the block is set to not writable in
Cache<TagStore>::satisfyCpuSideRequest when pkt->isRead() &&
!pkt->needsExclusive().) Even when the initial read came from the same
CPU (or when there is only one CPU in the system), the L2 block is made
un-writable.
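
To show what I think is happening, here is a small hand-written sketch of
my reading of that path. It is *not* the actual M5 source; Block, Packet,
satisfyRequest and their fields are all made up for illustration. The idea
is that a block filled by a plain read is handed out as shared/non-writable,
so a later write miss cannot be satisfied by that copy and has to re-fetch
with exclusive permission, invalidating the L2 copy:

    // Hypothetical sketch of the suspected behaviour, not M5 code.
    #include <iostream>

    struct Block {
        bool valid    = false;
        bool writable = false;   // corresponds to the "Not Writable" message
    };

    struct Packet {
        bool read;            // stand-in for pkt->isRead()
        bool needsExclusive;   // stand-in for pkt->needsExclusive()
    };

    // Roughly what I suspect happens on the L2 side when it services an L1 fill.
    void satisfyRequest(Block &l2Blk, Block &l1Fill, const Packet &pkt) {
        if (pkt.read && !pkt.needsExclusive) {
            // Plain read: both copies stay valid, but neither is writable,
            // because another cache could also hold the line.
            l1Fill.valid    = true;
            l1Fill.writable = false;
            l2Blk.writable  = false;
        } else {
            // Write / read-exclusive: the requester gets the only writable
            // copy, so the L2 copy is invalidated.
            l1Fill.valid    = true;
            l1Fill.writable = true;
            l2Blk.valid     = false;
        }
    }

    int main() {
        Block l2{true, true}, l1{};

        // 1) D-L1 read miss that hits in L2: shared fill, L2 copy loses writability.
        satisfyRequest(l2, l1, {true, false});
        std::cout << "after read fill:  L2 writable=" << l2.writable << "\n";

        // 2) Later D-L1 write miss: the L2 copy is valid but not writable,
        //    so it cannot satisfy the write and a fetch from memory is forced.
        Packet write{false, true};
        std::cout << "write usable in L2? "
                  << (l2.valid && l2.writable) << " (forces fetch from memory)\n";

        // 3) Exclusive fill completes: L1 gets a writable copy, L2 is invalidated.
        satisfyRequest(l2, l1, write);
        std::cout << "after excl fill:  L2 valid=" << l2.valid << "\n";
        return 0;
    }

If that reading is right, the "Not Writable" L2 hit in my trace is just the
shared copy left behind by the earlier read fill.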
2. Another observation: whenever there is a writeback from the D-L1
cache, it is always an L2 miss (the block is in L2 but not valid). I
found that when a write hits in D-L1 (an L1 hit), the corresponding L2
block is left untouched (not invalidated). So why has it become invalid
by the time the writeback happens?
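
To make the observation concrete, this tiny sketch (again hypothetical,
not M5 code; L2Block and writebackLookup are invented names) shows the
lookup result I keep seeing at writeback time: the tag still matches an
L2 frame, but the valid bit is clear, which is what the trace reports as
"in L2 but NOT valid" rather than "not in L2 at all":

    #include <iostream>

    struct L2Block {
        unsigned tag   = 0;
        bool     valid = false;
    };

    // The three outcomes I distinguish in the trace for a D-L1 writeback.
    const char *writebackLookup(const L2Block &blk, unsigned tag) {
        if (blk.tag == tag && blk.valid)
            return "L2 hit";                              // never observed
        if (blk.tag == tag)
            return "L2 miss (block in L2 but NOT valid)"; // the common case
        return "L2 miss (block not in L2 at all)";
    }

    int main() {
        // State I observe when the dirty L1 block comes back down:
        // the frame is still allocated for this tag, but invalid.
        L2Block blk{0x42, false};
        std::cout << writebackLookup(blk, 0x42) << "\n";
        return 0;
    }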
Is my understanding correct?
Thanks.
--
Best Regards,
Wang, Weixun
_______________________________________________
m5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users