Re: [gem5-users] MSHR Queue Full Handling

2015-07-20 Thread Davesh Shingari
Hi Andreas

Thanks a lot for the pointer. I have been experimenting with your
suggestion, and I have a few questions.

*Question 1:*
In configs/common/Caches.py, the configuration is done as follows:

class L1Cache(BaseCache):
    assoc = 2
    hit_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
    is_top_level = True

But when I print the values of the variables used in the isFull function
(in src/mem/cache/mshr_queue.hh), I see the following:

16338315506500: system.cpu0.icache: Packet: MasterId=cpu0.inst
16338315506500: global: Number of allocated 1
16338315506500: global: Number of numEntries 8
16338315506500: global: Number of numReserve 4
16338316046500: system.cpu0.dcache: Packet: MasterId=switch_cpus0.data
16338316046500: global: Number of allocated 1
16338316046500: global: Number of numEntries 9
16338316046500: global: Number of numReserve 4

Why are these values different from the configured ones? Am I looking in
the wrong place?
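For what it is worth, the mismatch looks consistent with the MSHR queue padding the configured mshrs with a reserve: with mshrs = 4 and numReserve 4 (as in the trace), numEntries 8 for the icache would follow. Why the dcache shows 9 I cannot say for certain. Below is a toy Python model of the occupancy check; the class and attribute names are mine, and the padding formula is an assumption — only the comparison itself appears in the gem5 source discussed in this thread.

```python
class ToyMSHRQueue:
    """Toy model of the MSHR queue occupancy bookkeeping.

    Assumption (not verified against the gem5 source): numEntries is the
    configured mshrs padded with a reserve, which would explain the
    icache trace above (mshrs=4, numReserve=4 -> numEntries=8).
    """
    def __init__(self, mshrs, reserve=4):
        self.num_entries = mshrs + reserve
        self.num_reserve = reserve
        self.allocated = 0

    def is_full(self):
        # The check quoted elsewhere in this thread:
        #     allocated >= numEntries - numReserve
        return self.allocated >= self.num_entries - self.num_reserve

q = ToyMSHRQueue(mshrs=4)
assert q.num_entries == 8   # matches "numEntries 8" for the icache
q.allocated = 4
assert q.is_full()          # full once the non-reserved entries are used
```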

*Question 2:*
As per your suggestion, I am trying to configure the MSHRs per core in the
configuration file itself. fs.py imports from CacheConfig.py:

for i in xrange(options.num_cpus):
    if options.caches:
        print i
        icache = icache_class(size=options.l1i_size,
                              assoc=options.l1i_assoc,
                              mshrs=5)
        dcache = dcache_class(size=options.l1d_size,
                              assoc=options.l1d_assoc)

So configuring mshrs here will override the default value. Is this the
correct place, or were you referring to somewhere else?
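For concreteness, here is a per-core variant of the loop above. This is only a sketch: mshrs_per_core is a hypothetical list I introduce for illustration, not an existing gem5 option; the other parameters are the ones CacheConfig.py already uses.

```python
# Hypothetical per-core MSHR counts, one entry per CPU.
mshrs_per_core = [10, 20, 4, 4]

for i in xrange(options.num_cpus):
    if options.caches:
        icache = icache_class(size=options.l1i_size,
                              assoc=options.l1i_assoc)
        # Pick this core's MSHR count when instantiating the dcache.
        dcache = dcache_class(size=options.l1d_size,
                              assoc=options.l1d_assoc,
                              mshrs=mshrs_per_core[i])
```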

On Thu, Jul 16, 2015 at 4:03 AM, Andreas Hansson andreas.hans...@arm.com
wrote:

  Hi all,

  The best way to customise your L1 instances is in the config script
 itself. If you use fs.py, I’d suggest to do it there.

  Andreas

   From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap
 Kolakkampadath kvprat...@gmail.com
 Reply-To: gem5 users mailing list gem5-users@gem5.org
 Date: Thursday, 16 July 2015 00:00
 To: gem5 users mailing list gem5-users@gem5.org
 Subject: Re: [gem5-users] MSHR Queue Full Handling

Hello Davesh,

  I think it should be possible by passing the desired L1 MSHR setting for
 each core while instantiating the dcache in CacheConfig.py.
  Also look at the BaseCache constructor to see how these parameters are
 set.


  Thanks,
  Prathap

 -- IMPORTANT NOTICE: The contents of this email and any attachments are
 confidential and may also be privileged. If you are not the intended
 recipient, please notify the sender immediately and do not disclose the
 contents to any other person, use it for any purpose, or store or copy the
 information in any medium. Thank you.

 ARM Limited, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ,
 Registered in England & Wales, Company No: 2557590
 ARM Holdings plc, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ,
 Registered in England & Wales, Company No: 2548782

 ___
 gem5-users mailing list
 gem5-users@gem5.org
 http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users




-- 
Have a great day!

Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University

davesh.shing...@asu.edu



Re: [gem5-users] MSHR Queue Full Handling

2015-07-15 Thread Davesh Shingari
Hello Andreas

Thanks a lot for the reply. Yes, it definitely helped.
I am posting what I found (it might save some time for someone looking in
the same direction).

In the cache_impl.hh file, in the function recvTimingReq:

   - You check whether it is a hit or a miss via bool satisfied =
     access(pkt, blk, lat, writebacks);
   - Then, if there is a miss and no matching MSHR (see the else {
     // no MSHR branch), there is a call to allocateMissBuffer, which
     internally calls allocateBufferInternal. In this function, isFull is
     called, which checks allocated >= numEntries - numReserve.

I have one question. Is there a configuration-only way (i.e. changing just
Caches.py) to have a different number of MSHRs per core, e.g. the L1 cache
of core 0 has 10 MSHRs while the L1 cache of core 1 has 20? If not, we
could do it in the isFull function with case statements.
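If the goal is different per-core counts without touching isFull in C++, one configuration-side option is to subclass the L1 cache once per core, overriding only mshrs. This is a sketch: the class names below are hypothetical, but the parameters are the same ones Caches.py already sets.

```python
# Hypothetical per-core L1 cache classes in the style of Caches.py.
class L1Cache_Core0(BaseCache):
    assoc = 2
    hit_latency = 2
    response_latency = 2
    mshrs = 10
    tgts_per_mshr = 20
    is_top_level = True

class L1Cache_Core1(L1Cache_Core0):
    mshrs = 20      # only the MSHR count differs for core 1
```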

On Tue, Jul 14, 2015 at 2:44 PM, Andreas Hansson andreas.hans...@arm.com
wrote:

  Hi Davesh,


1. The promoteDeferredTargets is a “trick” used in the classic memory
system where MSHR targets get put “on hold” if we already have a target
outstanding that will e.g. give us a block in modified or owned state. Once
the first chunk of targets is resolved, and all caches agree on the new
state, we then continue with the deferred targets (after promoting them).
Thus, the deferred targets are serialised with respect to the normal
targets, simplifying any intermittent states.
2. When the queue is full, we block the cpu-side port. This is quite
crude, as there could be hits that would have been fine. However, at the
moment we take the easy way out. Have a look at setBlocked and
 recvTimingReq.
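To illustrate point 2 with a toy model (ToyCache and every name in it are invented for this sketch; this is not the gem5 API): while the port is blocked, even requests that would have hit are rejected, which is the crudeness mentioned above.

```python
class ToyCache:
    """Toy sketch of blocking the cpu-side port when the MSHR queue fills."""
    def __init__(self, mshrs):
        self.mshrs = mshrs
        self.outstanding = 0
        self.blocked = False

    def recv_timing_req(self, is_miss):
        if self.blocked:
            return False               # port rejects everything, hits included
        if is_miss:
            self.outstanding += 1
            if self.outstanding == self.mshrs:
                self.blocked = True    # analogous to setBlocked
        return True                    # request accepted

    def deallocate(self):
        was_full = self.outstanding == self.mshrs
        self.outstanding -= 1
        if was_full:
            self.blocked = False       # analogous to clearBlocked

c = ToyCache(mshrs=2)
assert c.recv_timing_req(True) and c.recv_timing_req(True)
assert not c.recv_timing_req(False)    # a hit is rejected while blocked
c.deallocate()
assert c.recv_timing_req(False)        # unblocked after a deallocation
```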

 I hope that helps.

  Andreas

   From: gem5-users gem5-users-boun...@gem5.org on behalf of Davesh
 Shingari shingaridav...@gmail.com
 Reply-To: gem5 users mailing list gem5-users@gem5.org
 Date: Tuesday, 14 July 2015 18:59
 To: gem5 users mailing list gem5-users@gem5.org
 Subject: [gem5-users] MSHR Queue Full Handling

  Hi

  In *cache_impl.hh*, I can see that

  MSHRQueue *mq = mshr->queue;
 bool wasFull = mq->isFull();

  But wasFull is used only in the following code:

  if (mshr->promoteDeferredTargets()) {
      // avoid later read getting stale data while write miss is
      // outstanding.. see comment in timingAccess()
      if (blk) {
          blk->status &= ~BlkReadable;
      }
      mq = mshr->queue;
      mq->markPending(mshr);
      requestMemSideBus((RequestCause)mq->index, clockEdge() +
                        pkt->lastWordDelay);
  } else {
      mq->deallocate(mshr);
      if (wasFull && !mq->isFull()) {
          clearBlocked((BlockedCause)mq->index);
      }
  }

  I have 2 questions:

1. What does the promoteDeferredTargets function do?
2. What happens when the queue is full? For example, in Caches.py I can
see that the number of MSHRs for the L2 is 20. What happens when the
number of misses exceeds 20? Can someone point me to the handling code?



  --
  Have a great day!

  Thanks and Warm Regards
 Davesh Shingari
 Master's in Computer Engineering [EE]
 Arizona State University

  dshin...@asu.edu







