Hi Wendy,
I’m using gem5’s default open-adaptive page policy, and the addresses belong 
to different rows. I also made sure that the same row is not accessed 
consecutively. The following is the code I use to pick a random address:

#include <stdlib.h>

const size_t mem_size = 1 << 28;   /* 256 MiB test region */
char *g_mem;                       /* allocated at startup, e.g. via malloc */

/* Return a random 4 KiB-aligned address inside the region. */
char *pick_addr() {
  /* Cast before shifting: rand() returns int, and int << 12 can overflow. */
  size_t offset = ((size_t)rand() << 12) % mem_size;
  return g_mem + offset;
}

Thanks,
Biresh

> On May 6, 2021, at 12:47 PM, Wendy Elsasser <wendy.elsas...@arm.com> wrote:
> 
> Hi Biresh,
> What is the page policy, and what is the distribution across rows for your 
> access pattern? For example, are these random addresses that should access 
> different rows, or is the pattern sequential, in which case the data will 
> sequence across the column addresses within the same row?
> 
> Thanks,
> Wendy
> 
> On 5/6/21, 11:37 AM, "Joardar, Biresh Kumar via gem5-users" 
> <gem5-users@gem5.org> wrote:
> 
>    Hello,
>    I’m simulating a 4-core system connected to a single DRAM bank. For this 
> purpose, I set the variables ‘ranks_per_channel’ and ‘banks_per_rank’ in 
> src/mem/DRAMInterface.py to 1 and recompiled gem5 (an equivalent config-script 
> override is sketched below).
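> 
>    For reference, the same effect can be had from the config script without 
> recompiling; this is a minimal sketch, and the DDR3_1600_8x8 interface, the 
> ‘l2cache’ name, and the port names are assumptions that depend on the gem5 
> version and on your config:
> 
>    from m5.objects import MemCtrl, DDR3_1600_8x8, CommMonitor
> 
>    # Force a single rank and a single bank on the DRAM interface.
>    system.mem_ctrl = MemCtrl()
>    system.mem_ctrl.dram = DDR3_1600_8x8(range=system.mem_ranges[0])
>    system.mem_ctrl.dram.ranks_per_channel = 1
>    system.mem_ctrl.dram.banks_per_rank = 1
> 
>    # CommMonitor between the L2 and the membus (post-21.0 port names).
>    system.monitor = CommMonitor()
>    system.l2cache.mem_side = system.monitor.cpu_side_port
>    system.monitor.mem_side_port = system.membus.cpu_side_ports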
>    Next, I run a single-threaded test workload with a 95% cache miss rate and 
> observe the DRAM addresses being accessed using the CommMonitor, which is 
> connected between the L2 and the membus. The following is a sample of the 
> CommMonitor output (after processing the trace file).
> 
>    ...
>    35,u,442036224,1,3,4971232483
>    35,u,482271232,1,3,4971232488
>    35,u,352915456,1,3,4971232494
>    35,u,436842496,1,3,4971232499
>    35,u,281513984,1,3,4971232505
>    35,u,416444416,1,3,4971232510
>    35,u,356982784,1,3,4971232516
>    35,u,420560896,1,3,4971232521
>    35,u,485822464,1,3,4971232527
>    35,u,382685184,1,3,4971232532
>    35,u,420356096,1,3,4971232538
>    35,u,293416960,1,3,4971232543
>    35,u,446799872,1,3,4971232549
>    35,u,485634048,1,3,4971232554
>    35,u,337686528,1,3,4971232560
>    35,u,278323200,1,3,4971232565
>    35,u,519716864,1,3,4971232571
>    ...
> 
>    The last entry in each row is the cycle at which the memory access was sent 
> to DRAM. As you can see, a DRAM address is accessed every 5-6 cycles (5-6 ns). 
> Since there is only one DRAM bank, each row activation should take tRC, which 
> in my case is 48 ns (as set in src/mem/DRAMInterface.py). Why are DRAM 
> addresses being accessed faster (every 5-6 ns) when the tRC parameter is 48 ns?
>    The total number of DRAM addresses accessed in 1 s is also extremely high. 
> Based on the tRC value, there should be at most ~21 million DRAM accesses per 
> second (1/48e-9). However, I see 50 million DRAM accesses per second in the 
> CommMonitor output. Why is the DRAM access rate so high?
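> 
>    (A quick check of that bound, assuming each access activates a new row in 
> the single bank:)
> 
>    tRC = 48e-9            # seconds, as set in src/mem/DRAMInterface.py
>    print(1 / tRC / 1e6)   # ~20.8 million row activations per second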
> 
>    Thanks,
>    Biresh
