Have you tried a recent version of the code?  This changeset I pushed
last week eliminates some (but probably not all) of the unusual
coherence behavior:
http://repo.m5sim.org/m5/rev/aa8fd8f6a495

Also, are you running a uniprocessor or multiprocessor workload?
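
If you want to sanity-check the numbers yourself in the meantime, something
like this rough sketch might help. It parses an m5 stats.txt and computes an
L2 miss rate from ReadReq alone, ignoring ReadExReq as you propose; the stat
names (system.l2.ReadReq_accesses / system.l2.ReadReq_misses) are assumed
from your mail and may differ in your build:

```python
import re

def parse_stats(path):
    """Parse an m5 stats.txt into a dict of {stat name: float value}.

    Lines that don't look like "name  value" (separators, comments)
    are simply skipped.
    """
    stats = {}
    with open(path) as f:
        for line in f:
            m = re.match(r"(\S+)\s+([\d.eE+-]+)", line)
            if m:
                try:
                    stats[m.group(1)] = float(m.group(2))
                except ValueError:
                    pass  # not a numeric stat line
    return stats

def readreq_miss_rate(stats):
    """L2 miss rate counting only ReadReq, i.e. excluding ReadExReq.

    Stat names are assumptions based on the stats quoted in this thread.
    """
    accesses = stats["system.l2.ReadReq_accesses"]
    misses = stats["system.l2.ReadReq_misses"]
    return misses / accesses if accesses else 0.0
```

That should at least tell you whether the ReadReq-only miss rate looks
plausible for your workload.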

Steve


On Tue, Sep 14, 2010 at 1:59 AM,  <[email protected]> wrote:
> Hi,
>
> I am currently using M5 to simulate LLC behavior; however, I have
> discovered some strange statistics for M5's L2 cache.
>
> The problem is as follows: I learned that in the stats file,
>
> system.l2.demand_accesses = system.l2.ReadExReq_accesses +
>                              system.l2.ReadReq_accesses;
>
> However, system.l2.ReadExReq_miss_rate is always 100% in all of my
> simulations. This makes the L2's demand miss rate very high, which
> would not happen in a real system.
>
>
> I checked the source code, and I have a tentative explanation for this problem:
>
> 1> A ReadExReq always causes an invalidateBlk in the L2 cache, which is
> why there is always a miss for every ReadExReq access.
> 2> A ReadExReq always derives from an UpgradeReq, which happens when an
> L1 cache block's state changes from S to E. So ReadExReq is not really
> a read request; M5 just uses ReadExReq's side effect to invalidate the
> L2 cache's copy.
>
> Thus, I suppose it is safe to ignore ReadExReq and just focus on
> ReadReq's statistics for my LLC replacement policy study.
>
> However, I do not fully understand M5's memory system, so I am not
> sure whether my explanation is correct. Can anyone comment on the
> problem and my explanation? Thanks!
>
> Best.
>
>                                                           Lunkai Zhang
>                                                Chinese Academy of Sciences
>
>
>
> _______________________________________________
> m5-users mailing list
> [email protected]
> http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
>
