I think the answer to this question is a bit involved. If you look down
below, the 'delay' variable is used to schedule either the cache-hit event
or the cache-miss event 'delay' cycles in the future.

The code you pointed out is just one small step in the life of a cache
request. Essentially, at each point in the simulation, the simulator decides
which function to call and how far in the future to call it. So a cache hit
will complete in x cycles, but a cache miss might go out to memory and take
x+100 cycles to complete. This *total* latency to service a request is the
key: the pipeline sends a cache request out, waits for it to complete, and
then retires the instruction, and that wait is what really determines the
IPC. What you've pointed out in the cacheController is one step along this
long chain of events, which ends in the request being 'completed' and the
instruction moving forward in the pipeline.

Please correct me if I misunderstood your question.

On Fri, Apr 29, 2011 at 10:36 PM, zhenyu sun <[email protected]> wrote:

> Hi everyone,
>
> I am working on the CPU performance study under different cache hit
> latencies.
>
> In the cacheController.cpp:
>
> if(hit) {
> delay = cacheAccessLatency_;
>
>
>
> If the CPU encounters a stall caused by a write access (a dependency, or
> the buffer is full), how does the 'delay' influence the IPC?
> In other words, in which part of the code is the delay added to
> sim_cycle?
>
> Thanks a lot.
>
> Zhenyu Sun
>
>
>
> _______________________________________________
> http://www.marss86.org
> Marss86-Devel mailing list
> [email protected]
> https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
>
>