Hi all, In real hardware, a monolithic (single-bank), single-port cache must block incoming requests whenever another request is being served. By "request" I mean both a request from any core to that cache and the response to such a request (responses cause miss-fills, and hence write operations) arriving from a lower-level cache or main memory. Because of this blocking, memory requests incur queuing delays.
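To make the behavior I mean concrete, here is a minimal illustrative sketch (not gem5 code; the function name and cycle numbers are my own assumptions): each access, whether a core request or a miss-fill response, occupies the single port exclusively, and later accesses wait until the port frees up.

```python
# Illustrative sketch only (NOT gem5's implementation): queuing delay
# at a single-port cache where every access occupies the port
# exclusively for a fixed number of cycles.

def queuing_delays(arrivals, service_time):
    """arrivals: sorted access arrival times (cycles).
    Returns the queuing delay each access waits before the port is free."""
    delays = []
    port_free_at = 0
    for t in arrivals:
        start = max(t, port_free_at)   # stall if the port is busy
        delays.append(start - t)
        port_free_at = start + service_time  # port occupied for the access
    return delays

# Three back-to-back accesses, each occupying the port for 4 cycles:
print(queuing_delays([0, 1, 2], 4))  # [0, 3, 6]
```

The second and third accesses wait 3 and 6 cycles respectively, purely because the single port serializes them.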
I wanted to ask whether the current version of gem5 implements this blocking mechanism and the resulting queuing delay. If it does model them, could anyone please shed some light on how it is modelled? Thanks and regards, Aritra
_______________________________________________ gem5-users mailing list -- gem5-users@gem5.org To unsubscribe send an email to gem5-users-le...@gem5.org