Hey there folks,

I am trying to add an L2 between the directory and DRAM in an (otherwise flat)
SLICC protocol I've been working on, but have been running into some issues. I
know that some of the example protocols in src/mem/ruby/protocol/ do have 
co-located L3s alongside the directories, but to simplify the state machine of 
the directory I would really like to keep the controllers separate. As far as 
I've been able to find, there aren't any example protocols which connect a 
cache directly to the memory. If anyone has worked on similar things and has 
pointers to examples I could have a look at, I would be very grateful!

Some more thorough information about what I'm trying to do and what I've tried 
so far:

-- Background --
The system I am hoping to create is a mesh of nodes, where each node has a CPU, 
a private L1, and a pair consisting of a directory and an L2 that is 
responsible for a subset of the address space. So, if a CPU makes a request 
that cannot be satisfied by its L1, it sends a message to a directory using the 
mapAddressToMachine function. Depending on the address of the request, the 
message will be routed through the mesh to the corresponding directory. So far 
so good: setting this up has been easy thanks to the Mesh_XY topology and the 
setup_memory_controllers() function in configs/Ruby/Ruby.py.
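
For reference, the baseline wiring looks roughly like the following sketch. The class and variable names here are illustrative only (the Directory_Controller class is generated per-protocol by SLICC, and my actual code follows configs/Ruby/Ruby.py):

```python
# Illustrative sketch only: Directory_Controller is the SLICC-generated
# Python object for machine(MachineType:Directory); exact names are
# protocol-specific.
dir_cntrls = []
for i in range(options.num_cpus):
    # One directory per mesh node; the Mesh_XY topology places
    # controller i at node i.
    dir_cntrl = Directory_Controller(version=i, ruby_system=ruby_system)
    dir_cntrls.append(dir_cntrl)

# setup_memory_controllers() then carves the physical address space
# into the directories' addr_ranges and attaches one memory controller
# behind each directory.
setup_memory_controllers(system, ruby_system, dir_cntrls, options)
```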

My woes come from the fact that, instead of connecting the directory to memory,
I now want the directory to send its main-memory requests to an L2, and to
then have that L2 connected to main memory. The L2 has a simple Valid/Invalid 
state design, since it effectively just serves as a DRAM cache (i.e. the 
directory is responsible for upholding SWMR). Unlike a DRAM cache, however, I 
want the L2 to be co-located with the directory, so if the directory at node 
/n/ makes a request using mapAddressToMachine(..., MachineType:L2Cache) then 
the target L2 will also be on node /n/.
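
To make the co-location requirement concrete, here is a tiny standalone illustration (plain Python, not gem5 code) of the invariant I am after. The cache-line interleaving granularity is an assumption for illustration:

```python
# Standalone illustration (not gem5 code): both the directory and the
# L2 responsible for an address must resolve to the same node. Here the
# home node is chosen by interleaving on cache-line-granularity bits.

LINE_BITS = 6  # 64-byte lines; granularity assumed for illustration

def home_node(addr: int, num_nodes: int) -> int:
    """Node responsible for `addr` under simple line interleaving."""
    return (addr >> LINE_BITS) % num_nodes

# The invariant I want: directory and L2 share one mapping, so a
# directory forwarding a request to "its" L2 never crosses nodes.
def dir_node(addr: int, num_nodes: int) -> int:
    return home_node(addr, num_nodes)

def l2_node(addr: int, num_nodes: int) -> int:
    return home_node(addr, num_nodes)

for addr in (0x0, 0x40, 0x80, 0x1000):
    assert dir_node(addr, 4) == l2_node(addr, 4)
```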

-- Approach --
To set this up, I have taken some of the code from setup_memory_controllers() 
and modified it so that the memory controllers are connected to the L2s and so 
that the L2s and the directories have the same addr_ranges. I have tried both 
manually generating the addr_ranges using the m5.objects.AddrRange() 
constructor and setting them equal to the addr_ranges of the constructed 
DRAM controllers, without success in either case. This can be seen in my 
config file for the 
protocol: 
https://gist.githubusercontent.com/theoxo/56d35e7a38a01155029748199c1ac7c9/raw/fe031542188ecfbfc41a791b91756d975777dae9/gistfile1.txt
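
The shape of the change is roughly the following. All variable and parameter names here are hypothetical and may differ across gem5 versions; the actual code is in the gist:

```python
# Hypothetical sketch of my modified setup_memory_controllers() logic.
# Parameter names (addr_ranges, the memory port) follow Ruby controller
# parameters in recent gem5 versions, but may differ in yours.
for i, mem_ctrl in enumerate(mem_ctrls):
    # Give each node's L2 the same slice of the address space as the
    # memory controller behind it ...
    l2_cntrls[i].addr_ranges = mem_ctrl.dram.range
    # ... and make the co-located directory cover the identical slice,
    # so mapAddressToMachine() resolves both to the same node.
    dir_cntrls[i].addr_ranges = l2_cntrls[i].addr_ranges
    # Finally, connect the L2 (instead of the directory) to memory.
    l2_cntrls[i].memory = mem_ctrl.port
```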

-- Problem --
Unfortunately, this doesn't seem to work. I've been testing in SE mode with 
the "threads" test program, and while it does run successfully for some time, 
I eventually encounter the following error:

> panic: Tried to read unmapped address 0.
> PC: 0x7ffff8000090, Instr:   ADD_M_R : ldst   t1b, DS:[rax]

As far as I can tell, this means that my attempt at setting up the 
addr_ranges is failing, but my understanding of gem5 internals is 
unfortunately quite shallow, so I am struggling to decode much more than 
that from the error message.

Sorry about the long email! If any of these issues look familiar from similar 
systems you've configured yourselves, or if you know of any example protocols 
that are at all similar, please do let me know!

Best,
Theo Olausson
Univ. of Edinburgh
_______________________________________________
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org