Hi Users,

Thanks Amin.

You are right: the miss rate of my benchmark was low. I have modified the
benchmark so that every read request is a cache miss (the access pattern is
sketched after the quoted message below). After this change I can see the
improvement, and the DRAM read queue does fill up. However, when I print the
queue size from reorderQueue(), I observe that the queue is refilled only
after its occupancy has drained almost to zero. For example, the queue size
is initially 9 (L1 MSHRs = 10), then decrements to 8, 7, ..., 2, and it only
reaches 9 again once the queue is nearly empty. Is this due to the latency
between when a request is serviced by DRAM and when the corresponding MSHR
is cleared?
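
For reference, the queue-size print is just a DPRINTF added to reorderQueue();
a minimal sketch is below (the exact placement inside src/mem/dram_ctrl.cc and
the name of the 'queue' parameter are assumptions based on my local tree and
may differ in other gem5 versions):

// Added near the top of DRAMCtrl::reorderQueue() in src/mem/dram_ctrl.cc
// (assumed location; member/parameter names may differ between gem5 versions).
// 'queue' is the read or write queue handed to the scheduler, and the existing
// DRAM debug flag is reused, so the message shows up together with the other
// DRAM debug output (--debug-flags=DRAM).
DPRINTF(DRAM, "reorderQueue: %d entries currently queued\n", queue.size());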


Regards,
Prathap



On Fri, Oct 3, 2014 at 3:58 PM, Prathap Kolakkampadath <kvprat...@gmail.com>
wrote:

> Hi Users,
>
> I am using an O3 4-CPU ARMv7 system with a DDR3_1600_x64 memory. The L1
> I/D-cache size is 32 kB and the L2 cache size is 1 MB, with 10 L1 MSHRs and
> 30 L2 MSHRs. According to my understanding, this should allow each core to
> generate 10 outstanding memory requests.
> I am running a bandwidth test on all CPUs, which is memory-intensive and
> generates consecutive read requests to DRAM.
>
> However, when I captured the DRAM debug messages, I could see that the DRAM
> read queue size varies only between 0 and 2 (I expected the queue to fill
> up), and reads are scheduled immediately, whereas the write queue size
> varies and goes above 20.
> Any guess on what's going wrong?
> I can use a CommMonitor to track the requests arriving at DRAM, but how can
> I track reads/writes to DRAM?
>
> Thanks,
> Prathap
>
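
PS: The kind of read pattern I mean by "every read request is a cache miss" is
a strided read over a buffer much larger than the L2, roughly as below. The
8 MB buffer size and the 64-byte line size are assumptions for the sketch, not
measured values.

// Sketch of a read loop in which every load should miss in the caches:
// the buffer is much larger than the 1 MB L2, and the stride skips a whole
// cache line so no load reuses a previously fetched line.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    const size_t kBufBytes  = 8 * 1024 * 1024;  // assumed: well above the 1 MB L2
    const size_t kLineBytes = 64;               // assumed cache-line size

    uint8_t *buf = static_cast<uint8_t *>(malloc(kBufBytes));
    if (!buf)
        return 1;
    memset(buf, 1, kBufBytes);  // touch the pages once up front

    uint64_t sum = 0;
    for (int iter = 0; iter < 100; ++iter) {
        // Read one byte per cache line; by the time a line comes around
        // again it has long been evicted, so the read misses.
        for (size_t i = 0; i < kBufBytes; i += kLineBytes)
            sum += buf[i];
    }

    printf("checksum %llu\n", static_cast<unsigned long long>(sum));
    free(buf);
    return 0;
}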
_______________________________________________
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
