[gem5-users] reg router configuration

2019-03-25 Thread nevethetha ganesan
Hello,
 I wish to add additional parameters inside each router. Could you please
tell me in which file I should add them?
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] Creating Blocking caches (i.e., 1 target per MSHR)

2019-03-25 Thread Abhishek Singh
(There was a typo in the previous mail.)

Hello Nikos and Everyone,

In src/mem/cache/cache.cc, handleTimingReqMiss() only checks whether there
is an existing MSHR entry corresponding to the missing request, and then
calls BaseCache::handleTimingReqMiss(pkt, mshr, blk, forward_time,
request_time).

In BaseCache::handleTimingReqMiss(), if it is the first time the cache line
misses, we create an MSHR entry by calling allocateMissBuffer(), which is
defined in base.hh; this function creates the new MSHR entry and its first
target. So I have modified allocateMissBuffer() in base.hh to:

    MSHR *allocateMissBuffer(PacketPtr pkt, Tick time, bool sched_send = true)
    {
        MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize), blkSize,
                                        pkt, time, order++,
                                        allocOnFill(pkt->cmd));

        if (mshrQueue.isFull()) {
            setBlocked((BlockedCause)MSHRQueue_MSHRs);
        }

        // Added: block as soon as the first (and only) target is allocated.
        if (mshr->getNumTargets() == numTarget) {
            //cout << "Blocked: " << name() << endl;
            noTargetMSHR = mshr;
            setBlocked(Blocked_NoTargets);
        }

        if (sched_send) {
            // schedule the send
            schedMemSideSendEvent(time);
        }

        return mshr;
    }

What the highlighted part does is block the cache from receiving any
further requests from the LSQ until we clear the MSHR entry through
recvTimingResp(). Note that mshr->getNumTargets() will always be one here,
as this is the very first time the MSHR entry is created. But somehow I do
not think this is correct, as the simulation does not terminate (it does
terminate when the hello binary is tested).

Now, in baseline gem5 at the current commit, if we set tgts_per_mshr in
configs/common/Caches.py to 1 for the dcache and compare the stats against
tgts_per_mshr set to 2, we can clearly see that tgts_per_mshr set to 2
shows more cycles in the stat system.cpu.dcache.blocked::no_targets than
1 target per MSHR does. This is because we never blocked the requests
coming to the dcache when we had only one target per MSHR.

On Mon, Mar 25, 2019 at 5:43 PM Abhishek Singh <
abhishek.singh199...@gmail.com> wrote:



Re: [gem5-users] Creating Blocking caches (i.e., 1 target per MSHR)

2019-03-25 Thread Nikos Nikoleris
Abhishek,

In that case the code you should be looking at is in
src/mem/cache/cache.cc in the function handleTimingReqMiss()

mshr->allocateTarget(pkt, forward_time, order++,
  allocOnFill(pkt->cmd));
if (mshr->getNumTargets() == numTarget) {
 noTargetMSHR = mshr;
 setBlocked(Blocked_NoTargets);
}

I'm not sure why you don't see any stalls due to no_targets. You might
want to check how we measure the relevant stats and also make sure that
the simulation you run will trigger this.

Nikos


On 25/03/2019 20:19, Abhishek Singh wrote:

[gem5-users] Switching off AVX-512 for running SPEC 2017 in SE mode

2019-03-25 Thread Kleovoulos Kalaitzidis
Hello, 
I want to run the SPEC 2017 workloads in SE mode (X86), but when I do, many
of them run into an error about an unrecognised instruction. After some
searching I found this thread in the list:
https://gem5-users.gem5.narkive.com/jO22f7kV/spec2017-on-gem5-se-mode, where
exactly the same problem is described. The unrecognised instruction seems to
be an AVX-512 one. I actually dumped the assembly code of the binaries and I
can see some of these instructions, so as instructed I compiled the
benchmarks with "-mno-avx -mno-avx512f .." and many other flags to switch
off the AVX instructions, but the problem remains the same. Maybe the
question is not that gem5-relevant, but if somebody has successfully
disabled these instructions and run the SPEC 2017 benchmarks in SE (X86)
mode, their help would be greatly appreciated.

Thank you in advance, 

-- 
Kleovoulos Kalaitzidis 
Doctoral Student - PACAP Team 

Centre de recherche INRIA Rennes - Bretagne Atlantique 
Bâtiment 12E, Bureau E321, Campus de Beaulieu, 
35042 Rennes Cedex, France 

Re: [gem5-users] Creating Blocking caches (i.e., 1 target per MSHR)

2019-03-25 Thread Abhishek Singh
I want to have just 1 target per cache line in the MSHR queue, i.e., one
target per MSHR entry, but if I set the parameter tgts_per_mshr to 1, I get
zero blocked cycles, i.e., there is no blocking due to full targets.

If we look at the allocateMissBuffer call sites in base.cc and base.hh, the
first time we allocate an entry for a miss we create an MSHR entry and a
target for it without checking the tgts_per_mshr parameter. Only when a
second miss occurs to the same cache line do we create a target for it and
then check tgts_per_mshr to block future requests.

The quickest way to find out whether tgts_per_mshr is working when set to 1
is to check the number for system.cpu.dcache.blocked::no_targets in
m5out/stats.txt.
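That check is easy to script. Below is a sketch of pulling the counter out
of a stats file; it assumes the usual "name  value  # description" layout of
m5out/stats.txt, and the sample values are made up for illustration.

```python
# Sketch: pull a counter such as blocked::no_targets out of a gem5
# m5out/stats.txt. Assumes the usual "name  value  # description" layout;
# the sample values below are made up for illustration.

def read_stat(stats_text, stat_name):
    """Return the first value recorded for stat_name, or None if absent."""
    for line in stats_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == stat_name:
            return float(parts[1])
    return None

sample = """\
system.cpu.dcache.blocked::no_mshrs            12  # cycles blocked (no MSHRs)
system.cpu.dcache.blocked::no_targets         340  # cycles blocked (no targets)
"""

print(read_stat(sample, "system.cpu.dcache.blocked::no_targets"))  # prints 340.0
```

In a real run, read the text with `open('m5out/stats.txt').read()` instead
of the inline sample.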


Best regards,

Abhishek


On Mon, Mar 25, 2019 at 4:05 PM Nikos Nikoleris 
wrote:


Re: [gem5-users] Creating Blocking caches (i.e., 1 target per MSHR)

2019-03-25 Thread Nikos Nikoleris
Hi Abhishek,

A single MSHR can keep track of more than one request for a given cache
line. If you set tgts_per_mshr to 1, the MSHR will only be able to keep
track of a single request for any given cache line, but the cache can
still service requests to other cache lines by allocating more MSHRs.

If you want the cache to block on a single outstanding request, you will
need to limit both the number of MSHRs and tgts_per_mshr to 1.

I hope this helps.

Nikos
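
In configuration terms, the advice above amounts to limiting both
parameters, along the lines of the classic caches in
configs/common/Caches.py. A sketch (the size and latency values here are
illustrative, not from this thread; only the two MSHR-related parameters
matter):

```python
# Sketch of a blocking L1 data cache: with mshrs = 1 and tgts_per_mshr = 1
# the cache blocks on any single outstanding miss. Size/latency values are
# illustrative placeholders.
from m5.objects import Cache

class BlockingL1DCache(Cache):
    size = '64kB'
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 1           # a single MSHR: one outstanding miss at a time
    tgts_per_mshr = 1   # a single target per MSHR entry
```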

On 25/03/2019 19:03, Abhishek Singh wrote:
IMPORTANT NOTICE: The contents of this email and any attachments are 
confidential and may also be privileged. If you are not the intended recipient, 
please notify the sender immediately and do not disclose the contents to any 
other person, use it for any purpose, or store or copy the information in any 
medium. Thank you.

Re: [gem5-users] ValueError in Garnet_standalone

2019-03-25 Thread Rishabh Jain
Yesss! It worked :)

On Mon, Mar 25, 2019 at 10:17 AM Gambord, Ryan 
wrote:

> I believe I've found the commit that broke this. Try rebasing and removing
> this commit and see if it works for you. It did for me.
>
>
> https://gem5-review.googlesource.com/c/public/gem5/+/16708/4/configs/common/Options.py#b44
>
> Ryan Gambord
>
>
> On Sat, Mar 23, 2019 at 9:40 PM Rishabh Jain 
> wrote:
>
>>
>> Hi Jason,
>>
>> Even when I remove the '.' before Benchmarks at line 48 in
>> "configs/common/Options.py", I still get an error. This time it is an
>> ImportError:
>> """
>> Traceback (most recent call last):
>>   File "", line 1, in 
>>   File "build/NULL/python/m5/main.py", line 438, in main
>> exec(filecode, scope)
>>   File "configs/example/garnet_synth_traffic.py", line 40, in 
>> from common import Options
>>   File "/home/muts/Downloads/gem5/configs/common/Options.py", line 48, in
>> 
>> from Benchmarks import *
>> ImportError: No module named Benchmarks
>> """
>>
>> It's weird, as the file Benchmarks.py is indeed present in common. Can
>> you check from your side, or am I missing something?
>>
>>
>>
>>
>> On Thu, Mar 21, 2019 at 10:05 PM Jason Lowe-Power 
>> wrote:
>>
>>> I think the solution is to remove the "." before Benchmarks in line 48.
>>> However, it seems surprising that this hasn't been caught before. Are you
>>> using Python 2.7?
>>>
>>> Jason
>>>
>>> On Tue, Mar 19, 2019 at 10:33 AM Rishabh Jain 
>>> wrote:
>>>
 I forgot to add the network flag:
 $ ./build/NULL/gem5.debug configs/example/garnet_synth_traffic.py
 --network=garnet2.0 --topology=Mesh_XY --num-cpus=16 --num-dirs=16
 --mesh-rows=4
 But still the same ValueError persists.


 On Tue, Mar 19, 2019 at 10:50 PM Rishabh Jain 
 wrote:

> Hello everyone,
>
> I have built the Garnet standalone model using:
> $scons build/NULL/gem5.debug PROTOCOL=Garnet_standalone -j9
>
> While running command:
> $ ./build/NULL/gem5.debug configs/example/garnet_synth_traffic.py
> --topology=Mesh_XY --num-cpus=16 --num-dirs=16 --mesh-rows=4
>
> I get a ValueError: " ValueError: Attempted relative import in
> non-package"
>
> The console log is :
> **
> gem5 Simulator System.  http://gem5.org
> gem5 is copyrighted software; use the --copyright option for details.
>
> gem5 compiled Mar 19 2019 22:22:28
> gem5 started Mar 19 2019 22:40:18
> gem5 executing on redbull, pid 29669
> command line: ./build/NULL/gem5.debug
> configs/example/garnet_synth_traffic.py --topology=Mesh_XY --num-cpus=16
> --num-dirs=16 --mesh-rows=4
>
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "build/NULL/python/m5/main.py", line 438, in main
> exec(filecode, scope)
>   File "configs/example/garnet_synth_traffic.py", line 92, in 
> os.path.join(config_root, "common", "Options.py"), 'exec'))
>   File "/home/rishu/rishu/gem5/configs/common/Options.py", line 48, in
> 
> from .Benchmarks import *
> ValueError: Attempted relative import in non-package
> *
>
> But, when I use the mercurial repository of gem5 (I know its
> outdated), the above command works just fine.
> Can somebody help me in debugging the error?
>
> Best,
> Rishabh Jain
>
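
For context on the errors traded in this thread: gem5 compiles and exec()s
config files such as Options.py as top-level scripts, so they have no parent
package and relative imports cannot resolve. A minimal stand-alone
reproduction (independent of gem5; Python 3 surfaces this as an ImportError,
while Python 2 raised the ValueError seen above for the same situation):

```python
# Minimal reproduction: a relative import executed outside any package
# cannot resolve. gem5 exec()s config files the same way, which is why
# "from .Benchmarks import *" fails there (ValueError on Python 2,
# ImportError on Python 3).
code = compile("from .Benchmarks import *", "Options.py", "exec")
try:
    exec(code)
except ImportError as exc:
    print("failed as expected:", exc)
```

Removing the dot turns the line into an absolute import, which then only
works if the directory containing Benchmarks.py is on the module search
path.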

[gem5-users] Creating Blocking caches (i.e., 1 target per MSHR)

2019-03-25 Thread Abhishek Singh
Hello Everyone,

I am trying to simulate a D-cache with one target per MSHR. I tried
changing the parameter "tgts_per_mshr" defined in
"configs/common/Caches.py" to 1, but it does not work.

This is because when we allocate a target for an existing MSHR, we only
check the "tgts_per_mshr" parameter after allocating the second target, and
only then block the requests coming from the LSQ.

I tried copying the target-blocking code into the allocateMissBuffer
function defined and declared in "src/mem/cache/base.hh", as follows:


BEFORE:

    MSHR *allocateMissBuffer(PacketPtr pkt, Tick time, bool sched_send = true)
    {
        MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize), blkSize,
                                        pkt, time, order++,
                                        allocOnFill(pkt->cmd));

        if (mshrQueue.isFull()) {
            setBlocked((BlockedCause)MSHRQueue_MSHRs);
        }

        if (sched_send) {
            // schedule the send
            schedMemSideSendEvent(time);
        }

        return mshr;
    }



AFTER:

    MSHR *allocateMissBuffer(PacketPtr pkt, Tick time, bool sched_send = true)
    {
        MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize), blkSize,
                                        pkt, time, order++,
                                        allocOnFill(pkt->cmd));

        if (mshrQueue.isFull()) {
            setBlocked((BlockedCause)MSHRQueue_MSHRs);
        }

        // Added: block as soon as the first target is allocated.
        if (mshr->getNumTargets() == numTarget) {
            //cout << "Blocked: " << name() << endl;
            noTargetMSHR = mshr;
            setBlocked(Blocked_NoTargets);
        }

        if (sched_send) {
            // schedule the send
            schedMemSideSendEvent(time);
        }

        return mshr;
    }

I also changed "tgts_per_mshr" in "configs/common/Caches.py" to 1.

But the simulation goes into an infinite loop and does not end.

Does anyone know a technique to create blocking caches with 1 target per
MSHR entry?



Best regards,

Abhishek

[gem5-users] m5threads: look for address on a different bus

2019-03-25 Thread Rujuta

Hi all,

My configuration has 2 CPUs, one of which has a local memory:

    cpu0    cpu1
     |       |
     |       X --- Local Memory
     |       |
    _________________
   |      DRAM      |
    -----------------

In SE mode, I use m5threads to create thread1, which runs on cpu1, while
the main thread runs on cpu0.

I want to write to the local memory from thread1, but I get the error:

fatal: Unable to find destination for [0x2000 : 0x2003] on
system.membus0

How do I tell the simulation to look for that address on the local memory
bus?

The goal is to access the local memory from cpu1. It works if I assign
cpu1 its own application. How can I achieve the same with pthreads?


Kind Regards,

Rujuta Kulkarni
