Hi. I am looking for a way to model and assess the impact of having one vs. two 
ports in the L1 data cache (e.g., the cache serving two loads in the same cycle).

I found two variables, "*cacheLoadPorts*" and "*cacheStorePorts*", which are 
both set to 200 (surprisingly) by default. There was some discussion about 
this in 2016: https://www.mail-archive.com/gem5-users@gem5.org/msg12864.html

If I'm not mistaken, that discussion resulted in the following patch: 
https://github.com/gem5/gem5/commit/e5fb6752d613a6f85e2f93b4c01836ac59a8c90c
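
For context, this is roughly the override I have in mind in my config script. It is only a sketch: I'm assuming a *DerivO3CPU*-based setup (the class is named *O3CPU* / *X86O3CPU* in recent releases), and given the comment quoted below I'm not sure the load limit is actually enforced:

```python
# Rough sketch, not a full config: constrain the D-cache ports on an O3 CPU.
from m5.objects import DerivO3CPU  # O3CPU / X86O3CPU in recent gem5 versions

cpu = DerivO3CPU()

# Both parameters default to 200, i.e. effectively unconstrained.
cpu.cacheLoadPorts = 2    # at most two loads touching the D-cache per cycle
cpu.cacheStorePorts = 1   # one store writeback per cycle
```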

In the patch, I found the following comment (in *src/cpu/o3/lsq_unit.hh*, 
where it still lives in up-to-date versions):

```
// For now, load throughput is constrained by the number of
// load FUs only, and loads do not consume a cache port (only
// stores do).
// @todo We should account for cache port contention
// and arbitrate between loads and stores.
```

I'm still unsure why things are modeled this way, because 200 ports sounds 
pretty unrealistic, BUT since this is how it is... I have some questions:

\- If throughput is limited by the number of load/store units, should modeling 
two ports in the data cache be done by raising the number of load units from 
1 to 2? Is that reasonable? (A rough sketch of what I mean is below, after 
these questions.)

\- Or is the correct approach to set the number of cache ports to 1 or 2 
(instead of the current 200), as in the first snippet above? (Out of 
curiosity: if that is the answer, why is the default 200 in the first place?)

Thank you very much and kind regards,

Pedro.