Re: [gem5-users] How queued port is modelled in real platforms?

2015-07-27 Thread Andreas Hansson
Hi Prathap,

100 was chosen to be “sufficiently infinite”; the assert should only fire if 
something is wrong.

The caches have a limited number of MSHRs, the cores have limited LSQ depth 
etc. We could easily add an outstanding transaction limit to the crossbar 
class. In the end it is a detail/speed trade-off. If it does not matter, do not 
model it…
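The kind of limit described above can be sketched in a few lines. This is an illustrative model only (the class name and interface are invented for the example, not gem5 code): a port accepts new transactions while fewer than a fixed number are outstanding, and otherwise signals the sender to retry, much the way MSHRs or LSQ entries bound traffic in the real model.

```python
# Toy model of an outstanding-transaction limit (not gem5 code).
class LimitedPort:
    def __init__(self, max_outstanding):
        self.max_outstanding = max_outstanding
        self.outstanding = 0

    def try_send(self, pkt):
        # Returning False means "busy, retry later" -- backpressure,
        # instead of queueing without bound.
        if self.outstanding >= self.max_outstanding:
            return False
        self.outstanding += 1
        return True

    def recv_response(self):
        # Each response retires one outstanding transaction,
        # freeing a slot for the sender to retry into.
        assert self.outstanding > 0
        self.outstanding -= 1

port = LimitedPort(max_outstanding=4)
results = [port.try_send(i) for i in range(6)]
print(results)  # [True, True, True, True, False, False]
```

The same idea is what "add an outstanding transaction limit to the crossbar class" would amount to: refusing (and later retrying) requests rather than growing a queue without bound.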

Andreas

From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com
Reply-To: gem5 users mailing list gem5-users@gem5.org
Date: Monday, 27 July 2015 15:15
To: gem5 users mailing list gem5-users@gem5.org
Subject: Re: [gem5-users] How queued port is modelled in real platforms?

Hello Andreas,

Currently, the reasonable limit of this queue is set to 100. Is there a 
specific reason for choosing this as the maximum packet queue size?
Does any bus interface protocol specify such a limit on real platforms?

Thanks,
Prathap

On Mon, Jul 27, 2015 at 4:54 AM, Andreas Hansson andreas.hans...@arm.com wrote:
Hi Prathap,

The queued port is indeed infinite, and is a convenience construct. It should 
only be used in places where there is already an inherent limit to the number 
of outstanding requests. There is an assert in the queued port to ensure things 
do not grow uncontrollably.

Andreas

From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com
Reply-To: gem5 users mailing list gem5-users@gem5.org
Date: Sunday, 26 July 2015 18:34
To: gem5 users mailing list gem5-users@gem5.org
Subject: [gem5-users] How queued port is modelled in real platforms?

Hello Users,

gem5 implements a queued port to interface memory objects. In my understanding 
this queued port is of infinite size. Is this specific to the gem5 implementation? 
How are packets handled in real hardware if the request rate of one layer is 
faster than the service rate of the underlying layer?
It would be great if someone could help me understand this.

Thanks,
Prathap



-- IMPORTANT NOTICE: The contents of this email and any attachments are 
confidential and may also be privileged. If you are not the intended recipient, 
please notify the sender immediately and do not disclose the contents to any 
other person, use it for any purpose, or store or copy the information in any 
medium. Thank you.

ARM Limited, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, Registered 
in England & Wales, Company No: 2557590
ARM Holdings plc, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, 
Registered in England & Wales, Company No: 2548782

___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users



Re: [gem5-users] How queued port is modelled in real platforms?

2015-07-27 Thread Prathap Kolakkampadath
Hello Andreas,

I have modelled a system with large MSHRs, LSQ depth, etc. With this I could
see that the packet queue size grows beyond 100 and hits this assertion. After
disabling the assertion, the test runs to completion.

1) Is it safe to disable this assertion?

However, as I mentioned in an earlier email, I have modified the DRAM
controller switching algorithm to prioritize reads and never switch to
writes as long as there are reads in the read buffer. With this
modification, in one set of memory-intensive benchmarks with a high page-hit
rate, I see that the minimum number of writes per switch is ~15. I expected
that the write buffers (DRAM and cache) fill up, as a result of which the core
stalls and no more requests arrive at the DRAM controller; once the DRAM
controller drains the existing reads it switches to writes, and when a write
is serviced and the corresponding buffer entry is freed, the core can generate
a new load/store. But the number of writes per switch (15) that I see doesn't
justify the round-trip time.

Debugging this further, I observed that once the write queue/write
buffers are full and the DRAM controller services the queued reads,
these generate write-backs (due to evictions). Note that the DRAM controller's
write buffer is full at this time. These write-backs get queued in
the port (as deferred packets), and any further reads are queued behind
the write-backs.

2) Is this the desired behaviour, i.e. to address a write-after-read hazard?
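The queuing behaviour described above can be illustrated with a toy FIFO (an assumption about the queued port's drain order for the sake of the example, not gem5 code): once deferred write-backs sit at the head of the packet queue, reads enqueued afterwards cannot overtake them, which is exactly what would preserve ordering against a write-after-read hazard.

```python
from collections import deque

# Toy model: a queued port drains deferred packets strictly in FIFO
# order, so reads queued behind write-backs wait for them.
port_queue = deque()
for pkt in ["WB-A", "WB-B", "RD-C", "RD-D"]:
    port_queue.append(pkt)   # write-backs deferred first, reads after

service_order = []
while port_queue:
    service_order.append(port_queue.popleft())

print(service_order)  # ['WB-A', 'WB-B', 'RD-C', 'RD-D']
```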

Thanks,
Prathap




Re: [gem5-users] Handling write backs

2015-07-27 Thread Prathap Kolakkampadath
Hello Andreas,

Now I understand.

Thanks,
Prathap





Re: [gem5-users] How queued port is modelled in real platforms?

2015-07-27 Thread Prathap Kolakkampadath
Hello Andreas,

Currently, the reasonable limit of this queue is set to 100. Is there a
specific reason for choosing this as the maximum packet queue size?
Does any bus interface protocol specify such a limit on real platforms?

Thanks,
Prathap



Re: [gem5-users] Regarding benchmarks

2015-07-27 Thread Davesh Shingari
Hi Lokesh

For running benchmarks, you need to load the benchmark binaries onto the
disk image, and then you can run them through scripts. Try looking at
http://www.m5sim.org/Running_gem5#Full_System_Benchmarks. You can see how the
scripts work and how they invoke the benchmark binaries present on the disk.

Try looking at the following links for McPAT integration:
http://qa.gem5.org/4/there-been-complete-integration-mcpat-with-gem5-x86-how-use
https://www.mail-archive.com/gem5-users@gem5.org/msg08978.html
http://comments.gmane.org/gmane.comp.emulators.m5.users/16664


On Mon, Jul 27, 2015 at 11:00 AM, lokesh Sasikanth Kallam ypm...@gmail.com
wrote:

 Hello,

   I am a new user of the gem5 simulator. I have installed gem5 and
 now I have to run some benchmarks on it. Can anyone please help me with how
 to run the SPEC 2006 benchmarks for x86, where to start the process, and how
 to integrate McPAT with gem5?

 Thanks in advance.





-- 
Have a great day!

Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University

davesh.shing...@asu.edu

[gem5-users] FS Checkpointing Hangs (ARM ISA)

2015-07-27 Thread Dave Kindel
Hi all,

I've been trying to run some FS tests on ARM. I just tried to run gem5
without modifications, using a runscript and FS mode on a detailed CPU.
When it tried to take a checkpoint (whether with an m5 hook compiled into the
code or through an m5 command in the runscript) it hung. Ctrl-C did
nothing but restart the event loop, so I had to send a SIGKILL manually. My
command line is:

./build/ARM/gem5.debug configs/example/fs.py --machine-type=VExpress_EMM
-n 4 --script=/home/dkindel/runscripts/4c_ckpt_test.rcS --caches
--cpu-type=detailed

My runscript contains:

cd /parsec/install/bin.ckpts
/sbin/m5 dumpstats
/sbin/m5 resetstats
echo Before CKPT
/sbin/m5 checkpoint
echo Done :D
/sbin/m5 exit
/sbin/m5 exit


In the output, I see the Before CKPT displayed but nothing after.

This is in the dev repo. In the stable repo, I run with 1 core and get
an error that the skid buffer exceeded its maximum size after the Before CKPT
and before Done. I'm not sure if it's something on my machine or not.

I appreciate any help!

Thanks,
Dave Kindel

Re: [gem5-users] Why cache misses are decreasing when core frequency increase?

2015-07-27 Thread Nimish Girdhar
Hi Andreas,
Yes, I am running with the out-of-order core. I looked at the demand_mshr
stats and they pretty much follow the same trend...

[gem5-users] Data trace

2015-07-27 Thread ‪Niloofar Shakiba‬ ‪
Hi,
I'm working on a project in which I need to obtain a 512-bit data trace. I
tested some methods by changing flags, but it didn't work. I just want to know:
is it possible to get a 512-bit data trace with gem5? Thanks.


Re: [gem5-users] Why cache misses are decreasing when core frequency increase?

2015-07-27 Thread Andreas Hansson
Hi Nimish,

Double-check the stats. If I remember correctly, “demand_mshr_miss_rate” may 
be what you are looking for. The stat you list may be misleading due to several 
accesses to the same line (are you running with the out-of-order core?).
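The effect of several accesses to the same line can be illustrated with a toy model (the function, timings, and latency here are invented for the example, not gem5 stats): a request that arrives while the line is already being fetched by an MSHR counts as an MSHR hit rather than a fresh demand miss, so the same access pattern issued at a higher rate produces fewer recorded misses.

```python
# Toy model of MSHR coalescing (not gem5 code): requests to a line that
# is already outstanding count as MSHR hits, not new demand misses.
def count_misses(request_times, service_latency):
    misses, mshr_hits = 0, 0
    outstanding_until = -1.0
    for t in request_times:
        if t < outstanding_until:       # line already being fetched
            mshr_hits += 1
        else:                           # genuine new miss
            misses += 1
            outstanding_until = t + service_latency
    return misses, mshr_hits

# Same accesses to one line; memory latency fixed at 100 ns.
slow = count_misses([0, 60, 120, 180], 100)  # wider spacing ("2 GHz")
fast = count_misses([0, 30, 60, 90], 100)    # tighter spacing ("4 GHz")
print(slow, fast)  # (2, 2) (1, 3)
```

With the faster spacing, more requests coalesce onto the in-flight line, so demand misses fall even though the workload is identical, which is one way a higher core frequency can halve the miss count.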

Andreas

From: gem5-users gem5-users-boun...@gem5.org on behalf of Nimish Girdhar nimi...@tamu.edu
Reply-To: gem5 users mailing list gem5-users@gem5.org
Date: Friday, 17 July 2015 18:28
To: gem5 users mailing list gem5-users@gem5.org
Subject: Re: [gem5-users] Why cache misses are decreasing when core frequency 
increase?

Thanks for replying, Andreas. Yes, I am running the same workload with the same 
instructions.

Below are the stats that I got:

    Stat                                          2 GHz         4 GHz
    sim_seconds                                   3.464073      1.73349
    sim_insts                                     38015556363   37585527017
    system.cpu0.icache.demand_miss_rate::total    0.00057       0.000382
    system.cpu0.icache.demand_misses::total       2160668       1435056
    system.cpu0.dcache.demand_miss_rate::total    0.009694      0.00833
    system.cpu0.dcache.demand_misses::total       10080210      8412882


I am seeing a similar pattern in all cpu's cache stats.

Yes, I am doing a hackish DVFS where I am changing the clock speed behind the 
back of the OS. From my research project's point of view, I am just concerned 
with the simulation times, as I am doing different experiments and only the 
simulation time matters to me. But I still want to be able to reason about the 
cache behaviour; as long as I can do that, I should be okay. That's why I need 
some help reasoning about why I am seeing the above stats.

Any thoughts?

Thanks,


On Thu, Jul 16, 2015 at 3:06 PM, Andreas Hansson andreas.hans...@arm.com wrote:
Hi Nimish,

How do you determine cache misses (what stat are you looking at)? Are you 
running the same workload in the two scenarios (i.e. are the actual 
instructions executed the same)? Is it full system (and if so, are you changing 
the core frequency without the OS knowing about it)?

Can you shed some more light on your experiment? Overall I’d say it’s a bad 
idea to be changing clocks behind the OS’s back...

Andreas

From: gem5-users gem5-users-boun...@gem5.org on behalf of Nimish Girdhar nimi...@tamu.edu
Reply-To: gem5 users mailing list gem5-users@gem5.org
Date: Thursday, 16 July 2015 23:00
To: gem5 users mailing list gem5-users@gem5.org
Cc: Gaurav Sharma gaurav1...@gmail.com
Subject: Re: [gem5-users] Why cache misses are decreasing when core frequency 
increase?


Does anybody have any idea what might be happening here?
Any help will be appreciated.
Thanks,

On Jul 14, 2015 9:38 AM, Nimish Girdhar nimi...@tamu.edu wrote:
Hello,

I am trying to use DVFS for my project, but I want the frequency control in 
hardware, so I cannot use the DVFS support provided by gem5, as that is at the 
kernel level. For my project I added each core and its L1 caches to a different 
clock domain and hacked the code to change the frequency of the domains 
whenever I wanted.

To check that it is working I fired off two runs, one with the default frequency 
settings (2 GHz), and in the other run I doubled the frequency of each domain, so 
each core runs at 4 GHz.

Now looking at the stats, I see the simulation time dropping to almost half, which 
is expected. But I am not able to explain the cache stats: I am seeing the cache 
misses for all caches also decreasing by almost half. Can anybody explain how 
that is happening?

I am running an ARM full-system simulation with the classic memory model. All 
memory settings are default.

Thanks,
--
Warm regards
Nimish Girdhar
Department of Electrical and Computer Engineering
Texas A&M University




--
Warm regards
Nimish Girdhar
Department of Electrical and Computer Engineering
Texas A&M University


Re: [gem5-users] Handling write backs

2015-07-27 Thread Andreas Hansson
Hi Prathap,

When you write with a granularity smaller than a cache line (to your L1 D 
cache), the cache will read the line in exclusive state, and then write the 
specified part. If you write a whole line, then there is no need to first read. 
The latter behaviour is supported for whole-line write operations only.
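A minimal sketch of that decision (illustrative only; the function name and the 64-byte line size are assumptions for the example, not gem5's actual code): only a write that covers the whole cache line can skip the initial read; any sub-line write must first fetch the line in exclusive state.

```python
LINE_SIZE = 64  # bytes; a typical cache-line size, assumed for the example

# Toy model of the policy described above: a sub-line write needs a
# read-for-ownership fetch first; a whole-line write does not.
def needs_fetch(write_size, write_offset=0):
    return not (write_offset == 0 and write_size == LINE_SIZE)

print(needs_fetch(8))    # True: partial write, line read in exclusive state first
print(needs_fetch(64))   # False: whole-line write, no read needed
```

This is why a stream of sub-line write misses shows up at the DRAM controller as roughly one read per write: each miss first fetches the line, then the eventual write-back of the dirty line produces the write.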

Andreas

From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com
Reply-To: gem5 users mailing list gem5-users@gem5.org
Date: Tuesday, 21 July 2015 23:14
To: gem5 users mailing list gem5-users@gem5.org
Subject: Re: [gem5-users] Handling write backs

Hello Users,

I figured out that gem5 implements a fetch-on-write-miss policy.
On a write miss, allocateMissBuffer() is called to allocate an MSHR, which 
sends the timing request to bring in the cache line.
Once the response is ready, handleFill() is called on the response path; it 
is responsible for inserting the block into the cache. While inserting, if the 
victim block being replaced is dirty, a write-back packet is generated and 
copied to the write buffers.
After that, satisfyCpuSideRequest() is called to write the data to the newly 
allocated block and mark it dirty.

Thanks,
Prathap






On Tue, Jul 21, 2015 at 11:21 AM, Prathap Kolakkampadath kvprat...@gmail.com wrote:
Hello Users,

I am using the classic memory system. What is the write-miss policy implemented 
in gem5?
Looking at the code, it appears that gem5 implements a no-fetch-on-write-miss 
policy: access() inserts a block into the cache when the request is a writeback 
and it misses in the cache.
However, when I run a test with a bunch of write misses, I see an equal number 
of reads and writes to DRAM memory. This could happen if the policy is
fetch-on-write-miss. So far I couldn't figure this out. It would be great if 
someone could offer some pointers to understand this further.

Thanks,
Prathap

On Mon, Jul 20, 2015 at 2:02 PM, Prathap Kolakkampadath kvprat...@gmail.com wrote:
Hello Users,

I am running a test which generates write misses to the LLC. I am looking at the 
cache implementation code. What I understood is that writes are treated as write-
backs; on a miss, write-back commands allocate a new block in the cache, write 
the data into it, and mark the block dirty. When the dirty blocks are replaced, 
they are written into the write buffers.

I have the following questions:
1) When I run the test which generates write misses, I see the same number of 
reads from memory as writes. Does this mean that write-backs also fetch the 
cache line from main memory?

2) When are the blocks in the write buffers written to memory? Is it when the 
write buffers are full?

It would be great if someone can help me understand this.


Thanks,
Prathap





Re: [gem5-users] How queued port is modelled in real platforms?

2015-07-27 Thread Andreas Hansson
Hi Prathap,

The queued port is indeed infinite, and is a convenience construct. It should 
only be used in places where there is already an inherent limit to the number 
of outstanding requests. There is an assert in the queued port to ensure things 
do not grow uncontrollably.

Andreas

From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com
Reply-To: gem5 users mailing list gem5-users@gem5.org
Date: Sunday, 26 July 2015 18:34
To: gem5 users mailing list gem5-users@gem5.org
Subject: [gem5-users] How queued port is modelled in real platforms?

Hello Users,

gem5 implements a queued port to interface memory objects. In my understanding 
this queued port is of infinite size. Is this specific to the gem5 implementation? 
How are packets handled in real hardware if the request rate of one layer is 
faster than the service rate of the underlying layer?
It would be great if someone could help me understand this.

Thanks,
Prathap


