Hello Users,

I have experimented with modifying the DRAM controller's write-draining
algorithm so that the controller always processes reads and switches to
writes only when the read queue is empty; it switches back from writes to
reads immediately when a new read arrives in the read queue.
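
To make the policy concrete, here is a minimal, self-contained sketch of
the switching decision (a toy model with made-up names, not the actual
gem5 DRAM controller code):

    #include <deque>

    // Toy model of the modified policy: reads always win; writes are
    // drained only while the read queue is empty; a newly arriving read
    // immediately takes the bus back.
    enum class BusState { Read, Write };

    struct ToyDramController {
        std::deque<int> readQueue;   // pending reads (line fills)
        std::deque<int> writeQueue;  // pending writes (write backs)
        BusState state = BusState::Read;

        // Pick the direction for the next DRAM burst.
        BusState nextState() const {
            if (!readQueue.empty())
                return BusState::Read;    // a read is waiting: serve reads
            if (!writeQueue.empty())
                return BusState::Write;   // read queue empty: drain writes
            return state;                 // idle: keep the current direction
        }

        // Service one request, roughly one tBURST worth of work.
        void tick() {
            state = nextState();
            if (state == BusState::Read && !readQueue.empty())
                readQueue.pop_front();
            else if (state == BusState::Write && !writeQueue.empty())
                writeQueue.pop_front();
        }
    };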

With this modification, I ran a very memory-intensive test on four cores
simultaneously. Each cache miss generates a read (line fill) and a write
(write back) to DRAM.
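
For reference, the per-core workload is essentially along these lines; this
is only a sketch of the kind of loop I mean (a read-modify-write sweep over
a buffer much larger than the LLC), not the exact test I ran:

    #include <cstdint>
    #include <vector>

    // Read-modify-write sweep, one access per cache line, over a buffer far
    // larger than the LLC: every miss causes a line fill (DRAM read) and,
    // once the line is dirtied and evicted, a write back (DRAM write).
    int main() {
        const std::size_t bytes  = 256u * 1024 * 1024;          // >> LLC size
        const std::size_t stride = 64 / sizeof(std::uint64_t);  // one 64 B line
        std::vector<std::uint64_t> buf(bytes / sizeof(std::uint64_t), 1);

        volatile std::uint64_t sink = 0;
        for (int pass = 0; pass < 16; ++pass) {
            for (std::size_t i = 0; i < buf.size(); i += stride) {
                buf[i] += 1;   // dirty the line so its eviction is a write back
                sink = sink + buf[i];
            }
        }
        return static_cast<int>(sink & 1);
    }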

First, let me describe what I expect: the DRAM controller keeps processing
reads; meanwhile the DRAM write queue fills up and eventually backs up into
the cache's write buffers, so the LLC blocks and no further reads or writes
reach the DRAM from the cores.
At this point the DRAM controller drains the remaining reads until the read
queue is empty, then switches to writes and keeps processing writes until a
new read request arrives. Note that the LLC is still blocked at this moment.
Once a write is processed and the corresponding cache write buffer is freed,
a core can generate a new miss (which issues a line fill first). During this
round-trip time (about 45 ns in my system, with tBURST = 7.5 ns), the DRAM
controller can process roughly 6 write requests (45 / 7.5), after which it
should switch back to reads.
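
The back-of-the-envelope arithmetic behind that ~6 (my own numbers, and my
assumption that the controller issues one write per tBURST while the LLC is
blocked) is simply:

    #include <cstdio>

    int main() {
        constexpr double roundTripNs = 45.0;  // measured line-fill round trip
        constexpr double tBurstNs    = 7.5;   // DRAM burst duration
        // Writes the controller can issue before the first new read arrives.
        std::printf("expected writes per turnaround ~= %.1f\n",
                    roundTripNs / tBurstNs);  // prints 6.0
        return 0;
    }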

However, the gem5 statistics show that the mean number of writes per
turnaround is about 30 instead of the ~6 I expected. I don't understand why
this is the case. Can someone help me understand this behaviour?

Thanks,
Prathap