This depends heavily on your bandwidth test program. Maybe your miss rate
is low, or maybe your programs cannot exploit memory-level parallelism.
Use the gem5 stats to dig in further. There are programs out there that
can stress DRAM utilization, but writing a good one is not that hard
either.
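As a starting point, look at the per-controller entries in stats.txt.
The exact names can vary between gem5 versions, but for the current DRAM
controller they look roughly like this:

    system.mem_ctrls.avgRdQLen    # average read queue length
    system.mem_ctrls.avgWrQLen    # average write queue length
    system.mem_ctrls.busUtil      # DRAM data bus utilization (%)
    system.mem_ctrls.readRowHits  # row-buffer hits for reads

Low bus utilization together with a near-empty read queue usually means
the cores are simply not issuing enough parallel misses.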
Are you using a prefetcher? For apps with regular access patterns a
prefetcher can increase DRAM bandwidth utilization significantly.
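If you want to try one, a minimal sketch in your config script could look
like this (assuming your shared L2 object is called system.l2; the degree
value is just an example):

    # Attach a stride prefetcher to the shared L2. It detects regular,
    # strided miss streams and issues prefetches ahead of the demand
    # accesses, which raises memory-level parallelism.
    from m5.objects import StridePrefetcher

    system.l2.prefetcher = StridePrefetcher(degree=8)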

It makes sense that your write queue grows to 20: to improve DRAM access
efficiency, a good DRAM controller should not interleave reads and
writes, because every switch of the data bus direction costs turnaround
time. The gem5 DRAM controller therefore services reads as they arrive,
but buffers write requests until the write queue crosses a threshold and
only then drains a batch of writes.
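If you want to play with that behaviour, the write-buffering knobs are
exposed as controller parameters. A sketch, with values close to the
defaults (the exact attribute path depends on how your script
instantiates the controller):

    # Write buffer capacity and the watermarks (in percent of capacity)
    # at which the controller starts and stops draining writes.
    system.mem_ctrls[0].write_buffer_size      = 64
    system.mem_ctrls[0].write_high_thresh_perc = 85
    system.mem_ctrls[0].write_low_thresh_perc  = 50
    # Minimum writes drained per switch, amortizing the bus turnaround.
    system.mem_ctrls[0].min_writes_per_switch  = 16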
As for tracking, you can simply use a debug flag (I think it is DRAM) to
trace incoming read and write requests at the controller, for example:
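    ./build/ARM/gem5.opt --debug-flags=DRAM --debug-file=dram.trace \
        configs/example/se.py [your usual options]

Every read and write that arrives at the controller should then show up
in dram.trace with its address, so you can count and classify them
directly.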

Thanks,
Amin

On Fri, Oct 3, 2014 at 2:58 PM, Prathap Kolakkampadath via gem5-users <
gem5-users@gem5.org> wrote:

> Hi Users,
>
> I am using a 4-CPU out-of-order (O3) ARMv7 system with a DDR3_1600_x64
> memory controller. The L1 I/D-cache size is 32 kB, the L2 cache size is
> 1 MB, and the number of MSHRs is 10 at L1 and 30 at L2. According to my
> understanding, this should enable each core to generate 10 outstanding
> memory requests.
> I am running a bandwidth test on all CPUs, which is memory-intensive and
> generates consecutive read requests to DRAM.
>
> However, when I captured the DRAM debug messages, I could see that the
> DRAM read queue size varies only between 0 and 2 (I expected the queue
> to fill up) and reads are scheduled immediately, whereas the write queue
> size varies and goes above 20.
> Any guess as to what's going wrong?
> I can use a CommMonitor to track incoming requests to DRAM, but how can
> I track reads/writes to DRAM?
>
> Thanks,
> Prathap
>