Hello,
I’m simulating a 4-core system connected to a single DRAM bank. To do this, I 
set both the variables ‘ranks_per_channel’ and ‘banks_per_rank’ in the file 
src/mem/DRAMInterface.py to 1 and then recompiled gem5. Next, I run a 
single-threaded test workload that has a 95% cache miss rate and observe the 
DRAM addresses being accessed using the CommMonitor, which is connected 
between the L2 and the membus. The following is a sample of the CommMonitor 
output (after processing the trace file).  

...
35,u,442036224,1,3,4971232483
35,u,482271232,1,3,4971232488
35,u,352915456,1,3,4971232494
35,u,436842496,1,3,4971232499
35,u,281513984,1,3,4971232505
35,u,416444416,1,3,4971232510
35,u,356982784,1,3,4971232516
35,u,420560896,1,3,4971232521
35,u,485822464,1,3,4971232527
35,u,382685184,1,3,4971232532
35,u,420356096,1,3,4971232538
35,u,293416960,1,3,4971232543
35,u,446799872,1,3,4971232549
35,u,485634048,1,3,4971232554
35,u,337686528,1,3,4971232560
35,u,278323200,1,3,4971232565
35,u,519716864,1,3,4971232571
...

The last entry in each row is the cycle at which the memory access was sent to 
DRAM. As you can see, a DRAM address is accessed every 5-6 cycles (5-6 ns). 
Since there is only one DRAM bank, each row activation should take tRC, which 
in my case is 48 ns (as set in src/mem/DRAMInterface.py). Why are the DRAM 
addresses being accessed so much faster (every 5-6 ns) when tRC is 48 ns?  
The total number of DRAM addresses accessed in 1 s is also extremely high. 
Based on the tRC value, there should be at most ~21 million DRAM accesses per 
second (1/48e-9). However, I see 50 million DRAM accesses per second in the 
CommMonitor output. Why is the DRAM access rate so high?
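For clarity, here is the back-of-the-envelope arithmetic behind the ~21 million figure: if every access had to open a new row in the single bank, accesses could complete no faster than one per tRC. (The 50 million/s figure is what I measured in the CommMonitor output, not computed.)

```python
# Upper bound on access rate if each access pays a full row cycle (tRC).
tRC = 48e-9                      # row-cycle time in seconds (48 ns)
max_rate = 1 / tRC               # one row activation per tRC
print(round(max_rate / 1e6, 1))  # ~20.8 million accesses/s

observed_rate = 50e6             # measured from the CommMonitor trace
print(round(observed_rate / max_rate, 1))  # observed is ~2.4x the bound
```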

Thanks,
Biresh 
_______________________________________________
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org
