Hi,
While digging into the coherentCache code, I found suspicious access timing
when modified data is fetched from one core's L1 by another core's L2.
My configuration is the following: a simple two-core architecture with two
levels of private cache and a shared L3.
  C1        C2
  |         |
  L1        L1
  |         |
  L2        L2
  |         |
  SplitPhaseBus
       |
       L3
My cache access scenario is as follows. First, Core 1 writes data at
address A. Since the line has never been cached, it is allocated in L1 and
L2 in the MODIFIED state. Then Core 1 writes new data at address A again;
since the line is MODIFIED in its L1, the access completes immediately.
Now Core 2 tries to read the same data: Core 1's L2 snoops the read request
from Core 2, forwards the data to Core 2's L2 in the SHARED state, and
sends an update message to Core 1's L1.
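To make the intended state transitions explicit, here is a toy replay of the
scenario above. This is only an illustrative MESI-style model I wrote for
this mail, not the MARSS coherentCache API; all names are hypothetical.

```python
# Hypothetical minimal model of the scenario -- NOT the MARSS coherentCache
# implementation, just the state transitions I expect at each step.

INVALID, SHARED, MODIFIED = "I", "S", "M"

def replay_scenario():
    # State of the line at address A in each cache.
    st = {"c1_l1": INVALID, "c1_l2": INVALID,
          "c2_l1": INVALID, "c2_l2": INVALID}

    # 1) Core 1 writes A (cold miss): allocated MODIFIED in its L1 and L2.
    st["c1_l1"] = MODIFIED
    st["c1_l2"] = MODIFIED

    # 2) Core 1 writes A again: L1 hit in MODIFIED, completes immediately
    #    with no state change.

    # 3) Core 2 reads A: misses in its own L1/L2; Core 1's L2 snoop-hits.
    #    Because Core 1's L1 is MODIFIED, the L2 copy may be stale, so the
    #    snoop must be forwarded up to L1 before L2 can supply the data.
    l1_forward_needed = (st["c1_l1"] == MODIFIED)

    # 4) After the intervention, every valid copy downgrades to SHARED.
    for cache in st:
        st[cache] = SHARED

    return l1_forward_needed, st
```

The point of the model is step 3: whenever the owning core's L1 is MODIFIED,
the L2 cannot answer the snoop from its own (stale) copy.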
In this case, the critical path of accessing the data is

Core 2 -> Core 2's L1 miss -> Core 2's L2 miss -> Core 1's L2 snoop
hit.

However, the copy cached in Core 1's L2 is stale; the up-to-date data is in
Core 1's L1. The snoop request should therefore be forwarded up to the L1,
and the L2 should respond to the snoop only after receiving the L1's
response. Yet the access path to the L1 does not appear on the critical
path of the data access.
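To make the timing concern concrete, here is a toy latency calculation. The
cycle counts are made-up assumptions, not MARSS parameters; the only point
is that the correct path must include an L2-to-L1 snoop round trip that the
observed path seems to omit.

```python
# Assumed cycle counts for illustration only (not taken from MARSS).
L1_MISS, L2_MISS, L2_SNOOP_HIT = 2, 10, 15
L1_SNOOP_ROUND_TRIP = 8  # assumed L2 <-> L1 forward-and-respond cost

# Path the observed timing seems to reflect:
observed_path = L1_MISS + L2_MISS + L2_SNOOP_HIT

# Path I would expect, since L2's copy is stale and L1 must be consulted:
expected_path = observed_path + L1_SNOOP_ROUND_TRIP
```

With these numbers the expected path is longer than the observed one by
exactly the L1 snoop round trip, which is the latency I cannot find in the
simulated critical path.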
Is this a known problem in coherentCache, or is there something I have
misunderstood about the coherentCache design?
Thanks,
Hanhwi
_______________________________________________
http://www.marss86.org
Marss86-Devel mailing list
[email protected]
https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel