Re: [gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

2015-07-16 Thread Andreas Hansson
Hi Prathap,

It sounds like something is going wrong in your write-switching algorithm. Have 
you verified that a read is actually showing up when you think it is?

If needed, is there any chance you could post the patch on RB, or include the 
changes in a mail?

Andreas

From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Prathap Kolakkampadath <kvprat...@gmail.com>
Reply-To: gem5 users mailing list <gem5-users@gem5.org>
Date: Thursday, 16 July 2015 00:36
To: gem5 users mailing list <gem5-users@gem5.org>
Subject: [gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

Hello Users,

I have experimented with modifying the DRAM controller's write-draining
algorithm so that the controller always processes reads and switches to
writes only when the read queue is empty; it switches from writes back to
reads immediately when a read arrives in the read queue.

With this modification, I ran a very memory-intensive test on four cores
simultaneously. Each miss generates a read (line fill) and a write
(write-back) to DRAM.

First, let me describe what I expect: the DRAM controller keeps processing
reads; meanwhile the DRAM write queue fills up, eventually the write buffers
in the cache fill as well, and the LLC blocks, so no further reads or writes
reach the DRAM from the cores.
At that point, the DRAM controller processes reads until the read queue is
empty, then switches to writes and keeps processing them until a new read
request arrives. Note that the LLC is blocked at this moment. Once a write
is processed and the corresponding cache write buffer is freed, a core can
generate a new miss (which generates a line fill first). During this round
trip (about 45 ns as observed in my system, with tBURST = 7.5 ns), the DRAM
controller can process almost 6 write bursts (45/7.5 = 6), after which it
should switch back to reads.
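
As a sanity check on that arithmetic, a minimal sketch (the 45 ns round trip
and 7.5 ns tBURST are the figures above; the program only divides them):

    // Expected write bursts that fit into the round trip between freeing
    // one cache write buffer and the resulting read fill reaching DRAM.
    #include <cstdio>

    int main() {
        const double roundTripNs = 45.0; // observed round trip
        const double tBurstNs    = 7.5;  // one DRAM data burst
        std::printf("expected writes per turnaround: %.0f\n",
                    roundTripNs / tBurstNs); // prints 6
        return 0;
    }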

However, from the gem5 statistics, I observe that the mean number of writes
per turnaround is about 30 instead of ~6. I don't understand why this is the
case. Can someone help me understand this behaviour?

Thanks,
Prathap



Re: [gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

2015-07-16 Thread Prathap Kolakkampadath
Hello Andreas,

I kind of figured out what's going on.

With the modified DRAM controller switching mechanism, the DRAM controller's
write buffer and the LLC write buffers become full, because the DRAM
controller does not service writes as long as there are reads in the queue.
Once the LLC write buffer is full, the LLC controller blocks the CPU-side
port; as a result, the cores cannot generate any further misses to the LLC.
At this point, the DRAM controller continues to service reads from the read
queue; these can generate evictions at the LLC. The evicted write-backs get
queued at the DRAM controller's port. Once the DRAM read queue is empty, the
controller switches to writes. After a write burst and the propagation delay,
one write buffer is freed and the core can generate more read fills. However,
a new read appears at the DRAM controller only after the writes queued at the
port (due to the evictions) have been serviced.
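
If that is what happens, the length of a write streak should be roughly the
write-backs parked at the port plus the usual round-trip allowance. A toy
model of that accounting (portQueuedWrites = 24 is an assumed, illustrative
figure, not something I measured):

    // Toy model: write-backs parked at the DRAM controller's port must
    // drain before the new read fill is even visible, stretching the
    // write streak well past 45/7.5 = 6.
    #include <cstdio>

    int main() {
        const double tBurstNs      = 7.5;  // ns per DRAM burst
        const double roundTripNs   = 45.0; // buffer freed -> read fill back
        const int portQueuedWrites = 24;   // assumed evictions at the port
        std::printf("writes per turnaround ~ %.0f\n",
                    portQueuedWrites + roundTripNs / tBurstNs); // ~30
        return 0;
    }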

Do you think this hypothesis is correct?

Thanks,
Prathap

On Thu, Jul 16, 2015 at 11:44 AM, Prathap Kolakkampadath 
kvprat...@gmail.com wrote:

 Hello Andreas,


 Below are the changes:

 @@ -1295,7 +1295,8 @@

      // we have so many writes that we have to transition
      if (writeQueue.size() > writeHighThreshold) {
 -        switch_to_writes = true;
 +        if (readQueue.empty())
 +            switch_to_writes = true;
      }
  }

 @@ -1332,7 +1333,7 @@
      if (writeQueue.empty() ||
          (writeQueue.size() + minWritesPerSwitch < writeLowThreshold &&
           !drainManager) ||
 -        (!readQueue.empty() && writesThisTime >= minWritesPerSwitch)) {
 +        !readQueue.empty()) {
          // turn the bus back around for reads again
          busState = WRITE_TO_READ;
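
 Read together, the two hunks make the policy strictly read-priority: the
 first lets the write high-water mark trigger a switch only once the read
 queue is empty, and the second drops the writesThisTime >=
 minWritesPerSwitch guard so any pending read turns the bus around
 immediately. A self-contained restatement of just those two decisions (a
 sketch whose names mirror dram_ctrl.cc, not the actual controller code; the
 drain and low-threshold cases are left out):

    // Sketch of the patched switching rules, reduced to two predicates.
    #include <cassert>
    #include <cstddef>

    // While reading: the write high-water mark forces a switch only once
    // the read queue has drained (hunk 1).
    bool switchToWrites(std::size_t readQ, std::size_t writeQ,
                        std::size_t writeHighThreshold) {
        return writeQ > writeHighThreshold && readQ == 0;
    }

    // While writing: turn the bus around as soon as any read is pending,
    // or when there is nothing left to write (hunk 2).
    bool switchToReads(std::size_t readQ, std::size_t writeQ) {
        return writeQ == 0 || readQ > 0;
    }

    int main() {
        assert(!switchToWrites(1, 100, 64)); // reads pending: keep reading
        assert(switchToWrites(0, 100, 64));  // drained: service writes
        assert(switchToReads(1, 50));        // a read arrived: turn around
        assert(!switchToReads(0, 50));       // keep writing
        return 0;
    }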

 Previously, I used some bank-reservation schemes and was not using all the
 banks. Now I re-ran without any changes other than the above and still get
 a mean of ~15 writes per turnaround.
 Once the cache is blocked because its write buffers are full, the core
 should be able to send another request to DRAM as soon as one write buffer
 is freed.
 In my system this round trip is 45.5 ns [24 ns (L2 hit + miss latency) +
 4 ns (L1 hit + miss latency) + 7.5 ns (tBURST) + 10 ns (Xbar response +
 request)]. Note that the static latencies are set to 0.
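
 For clarity, the same sum spelled out (all figures as quoted above; nothing
 new here):

    // The quoted round-trip components, in ns.
    #include <cstdio>

    int main() {
        const double l2HitMiss = 24.0; // L2 hit + miss latency
        const double l1HitMiss = 4.0;  // L1 hit + miss latency
        const double tBurst    = 7.5;  // one DRAM burst
        const double xbar      = 10.0; // Xbar request + response
        std::printf("round trip = %.1f ns\n",
                    l2HitMiss + l1HitMiss + tBurst + xbar); // 45.5
        return 0;
    }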

 I am trying to figure out why so many writes are processed per switch.
 I have also attached the gem5 statistics.

 Thanks,
 Prathap


 On Thu, Jul 16, 2015 at 6:06 AM, Andreas Hansson andreas.hans...@arm.com
 wrote:

  Hi Prathap,

  It sounds like something is going wrong in your write-switching
 algorithm. Have you verified that a read is actually showing up when you
 think it is?

  If needed, is there any chance you could post the patch on RB, or
 include the changes in a mail?

  Andreas

