Hi Jason,

Thank you for your explanation. I would like to know: is it possible to set the
maximum number of packets the cache can accept per cycle?
In src/cpu/o3/O3CPU.py I found a parameter named cacheStorePorts.
Can I change this parameter to set the number of cache ports and get
higher concurrency?
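For reference, here is a minimal sketch of what I am trying, based on what I see in src/cpu/o3/O3CPU.py. I am assuming the parameter names below match my gem5 checkout; names and defaults may differ between versions, so please correct me if this is not the right knob:

```python
# Hedged sketch: adjusting the O3 CPU's cache port parameter in a gem5
# configuration script. cacheStorePorts is the parameter I found in
# src/cpu/o3/O3CPU.py; whether it also limits loads depends on the
# gem5 version, so this is an assumption to verify.
from m5.objects import DerivO3CPU

cpu = DerivO3CPU()
# Limit how many store accesses the LSQ may issue to the cache per cycle.
cpu.cacheStorePorts = 4
```

Is this the intended way to control how many packets the cache accepts per cycle, or is there a separate parameter on the cache side?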

Thank you very much.

Best,
Rosen



Jason Lowe-Power <[email protected]> wrote on Fri, Jan 3, 2020 at 9:29 AM:

> Hi Rosen,
>
> All of the caches in gem5 are pipelined by default. This goes for both the
> classic caches and Ruby. gem5's memory system is implemented by using
> "ports" that you can send "packets" over. All of the caches accept at least
> one packet per cycle (with some caveats depending on conflicting addresses).
>
> Cheers,
> Jason
>
> On Tue, Dec 24, 2019 at 5:56 AM Rosen Lu <[email protected]> wrote:
>
>> Hello,
>>
>> I have a question regarding pipelined caches. A pipelined cache can
>> accept an access every clock cycle, which improves bandwidth and
>> overall processor performance. Pipelining divides the cache latency
>> into multiple stages so that multiple accesses can be in flight
>> simultaneously.
>>
>> Looking at the gem5 code, the O3 CPU does not seem to take advantage
>> of a pipelined cache. Does gem5 support pipelined caches?
>>
>> Any ideas or suggestions would be helpful.
>>
>> Thank you very much.
>> _______________________________________________
>> gem5-users mailing list
>> [email protected]
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>