Re: [gem5-users] Tracking DRAM read/write requests
Hi Users,

Thanks Amin. You said it right: the miss rate of my benchmark was low. I have modified the benchmark so that every read request is a cache miss. After this I could see improvements, and the DRAM queue is getting filled.

However, when I print the queue size from reorderQueue(), I observe that the queues are refilled only after the current queue size decrements to zero. As an example, initially the queue size is 9 (L1 MSHRs = 10) and it decrements to 8, 7, ... 2, and the queue size becomes 9 again only after the queue is almost empty. Is it due to the latency between when a request is read from DRAM and when the respective MSHR is cleared?

Regards,
Prathap

On Fri, Oct 3, 2014 at 3:58 PM, Prathap Kolakkampadath wrote:
> Hi Users,
>
> I am using an O3 4-CPU ARMv7 with DDR3_1600_x64. L1 I/D-cache size = 32kB
> and L2 cache size = 1MB. L1 MSHRs = 10 and L2 MSHRs = 30. According to my
> understanding, this will enable each core to generate 10 outstanding memory
> requests.
> I am running a bandwidth test on all CPUs, which is memory-intensive and
> generates consecutive read requests to DRAM.
>
> However, when I captured the DRAM debug messages, I could see that the DRAM
> read queue size varies only between 0-2 (I expected the queue to fill) and
> reads are scheduled immediately, whereas the write queue size varies and
> goes above 20.
> Any guess on what's going wrong?
> I can use a CommMonitor to track incoming requests to DRAM, but how can I
> track reads/writes to DRAM?
>
> Thanks,
> Prathap

___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
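The drain-then-refill pattern described above can be reproduced with a toy discrete-event model (plain Python, not gem5 internals; the latency values are made up for illustration). If the round trip from DRAM service back through the cache hierarchy to the MSHR is long relative to the per-read service time, the read queue drains to empty before the first response frees an MSHR and lets the CPU issue a new miss:

```python
import heapq

MSHRS = 10          # L1 MSHRs: max outstanding read misses (from the thread)
SERVICE = 10        # hypothetical DRAM service time per read, arbitrary ticks
RETURN_DELAY = 120  # hypothetical response path back through the caches

def simulate(end_tick=600):
    """Toy model: DRAM read-queue depth under an MSHR-limited miss stream."""
    queue = MSHRS        # initial burst of misses fills the queue
    busy_until = 0       # DRAM is servicing one read until this tick
    returns = []         # min-heap of ticks when a response frees an MSHR
    depth = []
    for tick in range(end_tick):
        # A freed MSHR lets the CPU issue a new miss straight away.
        while returns and returns[0] <= tick:
            heapq.heappop(returns)
            queue += 1
        # DRAM dequeues the next read once the current one finishes.
        if queue > 0 and tick >= busy_until:
            queue -= 1
            busy_until = tick + SERVICE
            heapq.heappush(returns, busy_until + RETURN_DELAY)
        depth.append(queue)
    return depth

depths = simulate()
# The depth starts at 9 (one read in service), counts down to 0, and new
# misses arrive only after responses start clearing MSHRs.
```

With these numbers the queue drains from 9 to 0 before the first response returns, which is consistent with the MSHR-clearing latency suggested above; the real gem5 behaviour additionally depends on the O3 core's ability to keep issuing independent loads.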
Re: [gem5-users] Tracking DRAM read/write requests
This is very dependent on your bandwidth test program. Maybe your miss rate is low, or maybe your program cannot benefit from memory-level parallelism. Use the gem5 stats to dig in further. There are some programs out there that can stress DRAM utilization, but writing a good one is not that hard.

Are you using a prefetcher? For apps with regular access patterns, a prefetcher could increase DRAM bandwidth utilization significantly.

It makes sense that your write queue grows to 20, since a good DRAM controller should not interleave reads and writes, in order to improve DRAM access efficiency. The gem5 DRAM controller buffers write requests up to a threshold before triggering the writes.

On tracking: you can simply use a debug flag (I think it is DRAM) to trace incoming read and write requests.

Thanks,
Amin

On Fri, Oct 3, 2014 at 2:58 PM, Prathap Kolakkampadath via gem5-users <gem5-users@gem5.org> wrote:
> Hi Users,
>
> I am using an O3 4-CPU ARMv7 with DDR3_1600_x64. L1 I/D-cache size = 32kB
> and L2 cache size = 1MB. L1 MSHRs = 10 and L2 MSHRs = 30. According to my
> understanding, this will enable each core to generate 10 outstanding memory
> requests.
> I am running a bandwidth test on all CPUs, which is memory-intensive and
> generates consecutive read requests to DRAM.
>
> However, when I captured the DRAM debug messages, I could see that the DRAM
> read queue size varies only between 0-2 (I expected the queue to fill) and
> reads are scheduled immediately, whereas the write queue size varies and
> goes above 20.
> Any guess on what's going wrong?
> I can use a CommMonitor to track incoming requests to DRAM, but how can I
> track reads/writes to DRAM?
>
> Thanks,
> Prathap
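The two tracking options mentioned in this thread (CommMonitor and the DRAM debug flag) can be combined in a configuration script. A minimal sketch, assuming a gem5 release of that era: the `slave`/`master` port names and the `system.mem_ctrl` object name below are assumptions and vary across versions and configs:

```python
# Sketch: interpose a CommMonitor between the memory bus and the DRAM
# controller so every request reaching memory can be observed.
# Port names (slave/master) match older gem5 releases; newer ones use
# cpu_side_port/mem_side_port. system.mem_ctrl is a hypothetical name.
from m5.objects import CommMonitor

system.monitor = CommMonitor()
system.monitor.slave = system.membus.master   # requests coming off the bus
system.mem_ctrl.port = system.monitor.master  # forwarded on to DRAM
```

Independently of the monitor, running the simulation with `--debug-flags=DRAM` prints the controller's own trace of incoming reads and writes, which is usually enough to watch the read/write queue occupancy.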
[gem5-users] Tracking DRAM read/write requests
Hi Users,

I am using an O3 4-CPU ARMv7 with DDR3_1600_x64. L1 I/D-cache size = 32kB and L2 cache size = 1MB. L1 MSHRs = 10 and L2 MSHRs = 30. According to my understanding, this will enable each core to generate 10 outstanding memory requests.

I am running a bandwidth test on all CPUs, which is memory-intensive and generates consecutive read requests to DRAM.

However, when I captured the DRAM debug messages, I could see that the DRAM read queue size varies only between 0-2 (I expected the queue to fill) and reads are scheduled immediately, whereas the write queue size varies and goes above 20.

Any guess on what's going wrong?

I can use a CommMonitor to track incoming requests to DRAM, but how can I track reads/writes to DRAM?

Thanks,
Prathap
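For reference, the cache parameters described above map onto gem5's classic cache configuration roughly as follows. This is a sketch only: the class was called `BaseCache` in releases of that era (later renamed `Cache`), and the associativity, latency, and `tgts_per_mshr` values below are assumptions not stated in the thread:

```python
# Sketch of the thread's cache setup as gem5 cache parameters.
# Only size and mshrs come from the thread; the rest are placeholders.
from m5.objects import BaseCache

class L1DCache(BaseCache):
    size = '32kB'
    assoc = 2            # assumption: not stated in the thread
    hit_latency = 2      # assumption
    response_latency = 2 # assumption
    mshrs = 10           # up to 10 outstanding misses per core
    tgts_per_mshr = 8    # assumption

class L2Cache(BaseCache):
    size = '1MB'
    assoc = 8            # assumption
    hit_latency = 12     # assumption
    response_latency = 12  # assumption
    mshrs = 30
    tgts_per_mshr = 12   # assumption
```

With 4 cores at 10 L1 MSHRs each, the L2's 30 MSHRs could itself become the limit on memory-level parallelism if all cores miss simultaneously; that is worth checking in the stats alongside the DRAM queue occupancy.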