Re: [gem5-users] Printing stats in ROI

2019-10-23 Thread Prathap Kolakkampadath
Why don't you use the existing m5 pseudo-instructions around the ROI of the benchmark? Note that you need to compile your benchmark with the m5 library. If you are looking for more data, you may also add stats in the respective mem/cache files and rebuild gem5. Regards, Prathap On Wed, Oct 23, 2019 at 4:41 PM Victor

Re: [gem5-users] Fwd: Issue related to requests having no contextID

2017-10-10 Thread Prathap Kolakkampadath
Read requests to memory addresses that are in the write queue and not yet written to memory have to be serviced from the write queue. I am not sure if this answers your question, but it seems like this is what you are observing. Thanks, Prathap On Oct 10, 2017 2:17 AM, "Prakhar Javre"

Re: [gem5-users] ramulator

2017-10-07 Thread Prathap Kolakkampadath
Read this https://github.com/CMU-SAFARI/ramulator/blob/master/README.md Prathap On Oct 7, 2017 1:32 AM, "crown" wrote: > Hi > How to integrate ramulator with gem5? > > > > yours sincerely > > > >

Re: [gem5-users] Data_Cache

2017-09-08 Thread Prathap Kolakkampadath
I'm actually using SE mode, so the only process running is this small > program. > > On Fri, Sep 8, 2017 at 7:10 PM, Prathap Kolakkampadath < > kvprat...@gmail.com> wrote: > >> Could you provide more details about your system configuration? How are >> you making sure

Re: [gem5-users] Data_Cache

2017-09-08 Thread Prathap Kolakkampadath
Could you provide more details about your system configuration? How are you making sure that no other process or the kernel is accessing the memory? Thanks, Prathap On Sep 8, 2017 7:54 AM, "Jackie Chan" wrote: Hey guys! I'm running a small program on gem5 to test the data cache.

Re: [gem5-users] gem5 and McPAT (II)

2016-03-18 Thread Prathap Kolakkampadath
Hi Marcos, Please take a look at the paper below. "Micro-architectural simulation of embedded core heterogeneity with gem5 and McPAT" http://damien.courousse.fr/pdf/2015-Endo-HiPEAC-RAPIDO.pdf Hope this helps. Thanks, Prathap On Tue, Mar 15, 2016 at 6:55 PM, Andreas Hansson

[gem5-users] How to define L2 as Outer Cacheable?

2016-01-06 Thread Prathap Kolakkampadath
Hello Users, I have defined a memory type for Normal Memory allocation using the NMRR register. This memory type has the Inner Cacheable property "Non-Cacheable" and the Outer Cacheable property "WriteBack-WriteAllocate". The memory access to the memory region allocated using this memory type is

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-12 Thread Prathap Kolakkampadath
updating it and relying on the other > constraints, but conceptually we would need to track the start and end, not > just the end. Agreed? > > Andreas > > From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Prathap > Kolakkampadath <kvprat...@gmail.com> > Reply-To: gem5

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-11 Thread Prathap Kolakkampadath
Hansson <andreas.hans...@arm.com> > wrote: > >> Hi Prathap, >> >> Could you elaborate on why you think this line is causing problems. It >> sounds like you are suggesting this line is too restrictive? >> >> It simply enforces a minimum col-to-col timing, there

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-11 Thread Prathap Kolakkampadath
greater than the CAS-CAS delay. >> I did a fix and ran dram_sweep.py. There was absolutely no difference in the performance, which was expected. >> Presently I am not able to anticipate any other complexity. > > Andreas > > From: gem5-users <gem5-users-boun...@gem5.or

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-10 Thread Prathap Kolakkampadath
util/dram_sweep_plot.py for a > graphical “test bench” for the DRAM controller. As you will see, it never > exceeds the theoretical max. This script relies on the > configs/dram/sweep.py for the actual generation of data. > > Andreas > > From: gem5-users <gem5-users-boun...@gem5.org

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-10 Thread Prathap Kolakkampadath
(add a max with tCCD/tCCD_L here) ranks[j]->banks[i].colAllowedAt = std::max(cmd_at + cmd_dly,ranks[j]->banks[i].colAllowedAt) Thanks, Prathap On Tue, Nov 10, 2015 at 12:13 PM, Prathap Kolakkampadath < kvprat...@gmail.com> wrote: > Hi Andreas, > > As you said all

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-09 Thread Prathap Kolakkampadath
Moreover, in real devices the command > bus is typically designed to _not_ be a bottleneck. Admittedly this choice > could be reassessed if needed. > > Andreas > > From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Prathap > Kolakkampadath <kvprat...@gmail.com> > Reply-To

[gem5-users] Modelling command bus contention in DRAM controller

2015-11-09 Thread Prathap Kolakkampadath
Hello Users, After closely looking at doDRAMAccess() in the DRAM controller implementation in gem5, I suspect that the current implementation may not be taking into account the command bus contention that could happen if the DRAM timing constraints take particular values. For example, in the below

Re: [gem5-users] Modelling command bus contention in DRAM controller

2015-11-09 Thread Prathap Kolakkampadath
Moreover, in real devices the command > bus is typically designed to _not_ be a bottleneck. Admittedly this choice > could be reassessed if needed. > > Andreas > > From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Prathap > Kolakkampadath <kvprat...@gmail.com>

Re: [gem5-users] ARM cortex A-15 configuration

2015-11-01 Thread Prathap Kolakkampadath
some information on [1] for Cortex A7 and A15. See [2] for >> Cortex A8 and A9. >> >> [1] http://damien.courousse.fr/pdf/2015-Endo-HiPEAC-RAPIDO.pdf >> [2] http://damien.courousse.fr/pdf/Endo2014-gem5-SAMOS.pdf >> >> >> On 19/10/2015 17:17, Prathap Kolakkampada

[gem5-users] ARM cortex A-15 configuration

2015-10-19 Thread Prathap Kolakkampadath
Hello Users, What is the exact configuration for Cortex A15? The configuration file "configs/common/O3_ARM_v7a.py" doesn't seem to replicate Cortex A15 correctly. For example, based on the document below, Cortex A15 should have an ROB of size 128, which is 3 times more than the ROB size (40)

Re: [gem5-users] Sources of In-determinism in Full System Simulators

2015-08-15 Thread Prathap Kolakkampadath
regressions with UBSan to ensure there is no undefined behaviour in the simulator. I know that for X86 there are quite a few warnings from UBSan, so that could be a reason if you’re using x86. Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat

[gem5-users] Sources of In-determinism in Full System Simulators

2015-08-13 Thread Prathap Kolakkampadath
Hello Users, I am running a benchmark in gem5 full-system mode. A checkpoint is created in atomic mode and then the simulation switches to detailed mode before starting the benchmark. On repeated runs of the benchmark from the same checkpoint, the number of memory requests arriving at the DRAM banks differs by up to 5%

Re: [gem5-users] How queued port is modelled in real platforms?

2015-07-27 Thread Prathap Kolakkampadath
to the crossbar class. In the end it is a detail/speed trade-off. If it does not matter, do not model it… Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Monday, 27 July

Re: [gem5-users] Handling write backs

2015-07-27 Thread Prathap Kolakkampadath
the specified part. If you write a whole line, then there is no need to first read. The latter behaviour is supported for whole-line write operations only. Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users

Re: [gem5-users] How queued port is modelled in real platforms?

2015-07-27 Thread Prathap Kolakkampadath
uncontrollably. Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Sunday, 26 July 2015 18:34 To: gem5 users mailing list gem5-users@gem5.org Subject: [gem5-users] How queued

[gem5-users] How queued port is modelled in real platforms?

2015-07-26 Thread Prathap Kolakkampadath
Hello Users, gem5 implements a queued port to interface memory objects. In my understanding, this queued port is of infinite size. Is this specific to the gem5 implementation? How are packets handled in real hardware if the request rate of a layer is faster than the service rate of the underlying layer? It

Re: [gem5-users] Handling write backs

2015-07-21 Thread Prathap Kolakkampadath
, 2015 at 2:02 PM, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hello Users, I am running a test which generates write misses to the LLC. I am looking at the cache implementation code. What I understood is that writes are treated as write-backs; on a miss, write-back commands allocate a new block

Re: [gem5-users] Dynamic allocation of L1 MSHRs

2015-07-21 Thread Prathap Kolakkampadath
Hello Davesh, I did this by manipulating the isFull function as you have rightly pointed out. Thanks for the reply. Regards, Prathap On Tue, Jul 21, 2015 at 2:20 PM, Davesh Shingari shingaridav...@gmail.com wrote: Hi I think you should look at the isFull function which checks whether the

Re: [gem5-users] Handling write backs

2015-07-21 Thread Prathap Kolakkampadath
On Tue, Jul 21, 2015 at 11:21 AM, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hello Users, I am using the classic memory system. What is the write-miss policy implemented in gem5? Looking at the code, it looks like gem5 implements a *no-fetch-on-write-miss* policy; the access() inserts

[gem5-users] Handling write backs

2015-07-20 Thread Prathap Kolakkampadath
Hello Users, I am running a test which generates write misses to the LLC. I am looking at the cache implementation code. What I understood is that writes are treated as write-backs; on a miss, write-back commands allocate a new block in the cache, write the data into it, and mark the block as dirty. When the

Re: [gem5-users] Tracking DRAM requests from a process

2015-07-20 Thread Prathap Kolakkampadath
: Polydoros Petrakis polpetras at gmail.com writes: Maybe you can check the physical memory range allocated for each process and track requests depending on the access address. (Check which range it belongs to) On 31 March 2015 at 00:30, Prathap Kolakkampadath kvprathap at gmail.com wrote

Re: [gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

2015-07-16 Thread Prathap Kolakkampadath
. Do you think this hypothesis is correct? Thanks, Prathap On Thu, Jul 16, 2015 at 11:44 AM, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hello Andreas, Below are the changes: @@ -1295,7 +1295,8 @@ // we have so many writes that we have to transition

Re: [gem5-users] MSHR Queue Full Handling

2015-07-15 Thread Prathap Kolakkampadath
Hello Davesh, I think it should be possible by passing the desired L1 MSHR setting for each core while instantiating the dcache in CacheConfig.py. Also look at the BaseCache constructor to see how these parameters are set. Thanks, Prathap
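A minimal sketch of that suggestion, assuming a CacheConfig.py-style script in which system.cpu is already populated; the cache class and latency parameter names vary across gem5 versions, and the per-core MSHR budgets below are purely illustrative:

  from m5.objects import Cache

  class L1DCache(Cache):
      size = '32kB'
      assoc = 2
      tag_latency = 2
      data_latency = 2
      response_latency = 2
      mshrs = 10
      tgts_per_mshr = 8

  class L1ICache(L1DCache):
      pass

  per_core_mshrs = [10, 6, 6, 6]   # assumed per-core MSHR budgets

  # 'system' is assumed to exist in the surrounding configuration script
  for i, cpu in enumerate(system.cpu):
      icache = L1ICache()
      dcache = L1DCache(mshrs=per_core_mshrs[i])   # per-core override
      cpu.addPrivateSplitL1Caches(icache, dcache)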

[gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

2015-07-15 Thread Prathap Kolakkampadath
Hello Users, I have experimented by modifying the DRAM controller write-draining algorithm in such a way that the DRAM controller always processes reads and switches to writes only when the read queue is empty; the controller switches from writes back to reads immediately when a read arrives in the read queue.

[gem5-users] Question on retry requests due to write queue full.

2015-07-14 Thread Prathap Kolakkampadath
Hello Users, I am using the classic memory system with the following DRAM controller parameters: write_buffer_size = 64, write_high_thresh_perc = 85, write_low_thresh_perc = 50, min_writes_per_switch = 18. According to the write-draining algorithm, the bus has to turn around to writes when the writeQueue.size()
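For reference, a sketch of setting those parameters from a config script on a classic-memory-system DRAM controller; the controller class, port names, and address range are assumptions and differ across gem5 versions:

  from m5.objects import DDR3_1600_x64, AddrRange

  # 'system' and 'system.membus' are assumed to exist in the surrounding script
  system.mem_ctrl = DDR3_1600_x64(range=AddrRange('512MB'))
  system.mem_ctrl.write_buffer_size = 64        # write queue entries
  system.mem_ctrl.write_high_thresh_perc = 85   # start draining writes at 85% full
  system.mem_ctrl.write_low_thresh_perc = 50    # drain down to 50% before switching back
  system.mem_ctrl.min_writes_per_switch = 18    # minimum writes per bus turnaround
  system.mem_ctrl.port = system.membus.master   # .mem_side_ports in newer gem5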

Re: [gem5-users] Question on retry requests due to write queue full.

2015-07-14 Thread Prathap Kolakkampadath
I think this could happen if the benchmark is write intensive: while the DRAM controller is processing writes, many more writes (cache evictions) may arrive at a rate faster than the rate at which the controller can process them. Thanks, Prathap On Tue, Jul 14, 2015 at 11:27 AM, Prathap Kolakkampadath

Re: [gem5-users] Question on retry requests due to write queue full.

2015-07-14 Thread Prathap Kolakkampadath
are probably arriving faster than the controller can actually send them to the DRAM. What is it you’re running? Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Tuesday

Re: [gem5-users] Suspecting bubbles in the DRAM controller command bus

2015-07-12 Thread Prathap Kolakkampadath
-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Friday, 10 July 2015 18:41 To: gem5 users mailing list gem5-users@gem5.org Subject: Re: [gem5-users] Suspecting bubbles in the DRAM controller command bus

Re: [gem5-users] Suspecting bubbles in the DRAM controller command bus

2015-07-10 Thread Prathap Kolakkampadath
-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Friday, 10 July 2015 17:11 To: gem5 users mailing list gem5-users@gem5.org Subject: Re: [gem5-users] Suspecting bubbles in the DRAM controller

Re: [gem5-users] Suspecting bubbles in the DRAM controller command bus

2015-07-10 Thread Prathap Kolakkampadath
prepare R10 without the need to precharge bank 0. Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Thursday, 9 July 2015 18:26 To: gem5 users mailing list gem5-users@gem5

[gem5-users] Suspecting bubbles in the DRAM controller command bus

2015-07-09 Thread Prathap Kolakkampadath
Hello Users, I suspect the DRAM controller code is adding unwanted bubbles on the command bus. Consider there are 10 row-hit read requests, R0 to R9, in the queue, all targeting Bank0, and a row-miss request, R10, to Bank1 of the same rank, numbered in arrival order. According to FR-FCFS in

[gem5-users] Help to understand memory trace

2015-07-01 Thread Prathap Kolakkampadath
Hello Users, I am analyzing the memory access pattern of a benchmark, for which I have connected the communication monitor between the CPU and the dcache and obtained a trace. A snippet of the trace looks like the following: w,2174471252,4,66,850031503453000 w,2174471256,4,66,850031503453250
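A small helper for slicing such a trace, assuming the comma-separated fields are command (r/w), address, size in bytes, request flags (see src/mem/request.hh), and the tick at which the packet was seen (1 tick = 1 ps); the file name is a placeholder:

  import csv

  def parse_trace(path):
      with open(path) as f:
          for cmd, addr, size, flags, tick in csv.reader(f):
              yield {'cmd': cmd,                    # 'r' or 'w'
                     'addr': int(addr),
                     'size': int(size),             # bytes
                     'flags': int(flags),
                     'time_ns': int(tick) / 1000.0} # ticks are picoseconds

  # example: count writes and the total bytes written
  writes = [r for r in parse_trace('monitor.trc') if r['cmd'] == 'w']
  print(len(writes), sum(r['size'] for r in writes))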

Re: [gem5-users] Help to understand memory trace

2015-07-01 Thread Prathap Kolakkampadath
/mem/packet.hh and src/mem/request.hh Andreas From: gem5-users gem5-users-boun...@gem5.org on behalf of Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Wednesday, 1 July 2015 11:53 To: gem5 users mailing list gem5-users@gem5.org

[gem5-users] How to model a die-stacked DRAM?

2015-06-18 Thread Prathap Kolakkampadath
Hello Users, Has anyone tried to model a die-stacked DRAM using gem5's classic memory system? I read a couple of papers in which they model die-stacked DRAM using DRAMSim2. How difficult would it be to model, and are there any pointers on where to start? Thanks, Prathap Kumar Valsan
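One possible starting point, not a validated model: approximate the stacked DRAM in the classic memory system as a second controller with tighter timings over its own address range. All class names, timings, sizes, and port names below are illustrative assumptions:

  from m5.objects import DDR3_1600_x64, AddrRange

  class StackedDRAM(DDR3_1600_x64):
      # assumed faster timings for the on-package stack
      tRCD = '10ns'
      tRP = '10ns'
      tCL = '10ns'
      tBURST = '2.5ns'

  # 'system' and 'system.membus' are assumed to exist in the surrounding script
  system.mem_ranges = [AddrRange(0, size='256MB'),         # stacked region
                       AddrRange(0x10000000, size='1GB')]  # off-package DDR3
  system.stacked_ctrl = StackedDRAM(range=system.mem_ranges[0])
  system.offchip_ctrl = DDR3_1600_x64(range=system.mem_ranges[1])
  for ctrl in (system.stacked_ctrl, system.offchip_ctrl):
      ctrl.port = system.membus.master   # .mem_side_ports in newer gem5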

Re: [gem5-users] L2 cache partitioning

2015-06-14 Thread Prathap Kolakkampadath
the appropriate functionality to look at e.g. masterId and decide on a way. I’ll try and get those patches posted in the next few days. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Monday, 8 June 2015 17:29 To: gem5 users

[gem5-users] L2 cache partitioning

2015-06-08 Thread Prathap Kolakkampadath
Dear Users, I am using an ARM full-system configuration, where L2 is an 8-way set-associative shared last-level cache. I am trying to partition the L2 cache by *ways* among four cores, so that each core gets two ways. Is there hardware support (a configuration register) available to do this? If not, can

[gem5-users] Dynamic allocation of L1 MSHRs

2015-05-06 Thread Prathap Kolakkampadath
Hello Users, I am simulating an ARM detailed (O3) quad-core CPU with private L1 caches and a shared L2 cache. I am trying to regulate the number of outstanding requests a core can generate. I know that by statically changing the number of L1 MSHRs (passed as parameters from O3v7a.py), I can

Re: [gem5-users] Dynamic allocation of L1 MSHRs

2015-05-06 Thread Prathap Kolakkampadath
Hello Users, I understood that through CacheConfig.py I can connect L1 caches with different MSHR counts to each core. However, I am not sure how to dynamically change the number of L1 MSHRs allocated to each core. Can someone shed some light on this? Thanks, Prathap

[gem5-users] Query regarding blocking cache slave port

2015-05-04 Thread Prathap Kolakkampadath
Hello All, I am simulating an ARM O3 multi-core system with private L1 caches and a shared L2 cache. I am investigating MSHR contention in the L2 cache. If the cache has no free MSHRs, it marks the access path of the cache as blocked and also sets the blocked flag in the slave interface. This

Re: [gem5-users] Query regarding blocking cache slave port

2015-05-04 Thread Prathap Kolakkampadath
in recvTimingReq to not just see if the layer is busy, but also check if the port asking is within budget. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Monday, 4 May 2015 22:56 To: gem5 users mailing list gem5-users@gem5.org

Re: [gem5-users] Query regarding blocking cache slave port

2015-05-04 Thread Prathap Kolakkampadath
. It would be a great contribution. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Reply-To: gem5 users mailing list gem5-users@gem5.org Date: Monday, 4 May 2015 20:18 To: gem5 users mailing list gem5-users@gem5.org Subject: [gem5-users] Query regarding blocking cache slave port

Re: [gem5-users] bytesWritten (8 * number of 64-bit stores to unique addresses)

2015-04-21 Thread Prathap Kolakkampadath
Hello Patrick, Can you check the number of last-level cache misses as reported by stats.txt? Prathap On Apr 21, 2015 5:47 PM, Patrick plafr...@gmail.com wrote: I looked back at this, and I'm still not sure it's clear to me what is going on. I decreased the size of the write queue to 2, and
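A quick way to pull that number out of stats.txt, assuming the usual system.l2 naming; the exact stat names vary between gem5 versions:

  import re

  def l2_overall_misses(path='m5out/stats.txt'):
      pat = re.compile(r'^system\.l2\.overall_misses::total\s+(\S+)')
      with open(path) as f:
          for line in f:
              m = pat.match(line)
              if m:
                  return float(m.group(1))
      return None   # stat not found

  print(l2_overall_misses())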

[gem5-users] Question on maximum number of outstanding DRAM memory requests that can be generated by a core.

2015-01-18 Thread Prathap Kolakkampadath via gem5-users
Hello Users, Is the maximum number of outstanding DRAM memory requests that can be generated by a core at a time limited by the number of MSHRs in its private cache? For example, in a 4-core system configuration, each core has a private L1 cache with 6 MSHRs. The system's last-level cache

Re: [gem5-users] DRAMCTRL: Seeing an unusual behaviour with FR-FCFS scheduler

2014-11-14 Thread Prathap Kolakkampadath via gem5-users
Kolakkampadath via gem5-users gem5-users@gem5.org Reply-To: Prathap Kolakkampadath kvprat...@gmail.com, gem5 users mailing list gem5-users@gem5.org Date: Friday, 14 November 2014 00:11 To: gem5 users mailing list gem5-users@gem5.org Subject: [gem5-users] DRAMCTRL:Seeing an unusual behaviuor with FR-FCFS

Re: [gem5-users] DRAMCTRL: Seeing an unusual behaviour with FR-FCFS scheduler

2014-11-14 Thread Prathap Kolakkampadath via gem5-users
would. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Friday, November 14, 2014 at 10:37 PM To: Andreas Hansson andreas.hans...@arm.com Cc: gem5 users mailing list gem5-users@gem5.org Subject: Re: [gem5-users] DRAMCTRL:Seeing an unusual behaviuor with FR-FCFS scheduler

[gem5-users] DRAMCTRL: Seeing an unusual behaviour with FR-FCFS scheduler

2014-11-13 Thread Prathap Kolakkampadath via gem5-users
Hi Users, For the following scenario: Read0 Read1 Read2 Read3 Read4 Read5 Read6 Read7 Read8 Read9 Read10 Read11 There are 12 reads in the read queue, numbered in the order of arrival. Read0 to Read3 access the same row of Bank1, Read4 accesses Bank0, Read5 to Read8 access the same row of Bank2, and

Re: [gem5-users] DRAM memory access latency

2014-11-10 Thread Prathap Kolakkampadath via gem5-users
and should hopefully have this on the review board shortly. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Thursday, November 6, 2014 at 5:47 PM To: Andreas Hansson andreas.hans...@arm.com Cc: gem5 users mailing list gem5-users@gem5.org Subject: Re: [gem5-users] DRAM memory

Re: [gem5-users] DRAM memory access latency

2014-11-06 Thread Prathap Kolakkampadath via gem5-users
latency is two parameters that are by default also adding a few 10’s of nanoseconds. Let me know if you need more help breaking out the various components. Andreas From: Prathap Kolakkampadath via gem5-users gem5-users@gem5.org Reply-To: Prathap Kolakkampadath kvprat...@gmail.com, gem5 users

Re: [gem5-users] DRAM memory access latency

2014-11-06 Thread Prathap Kolakkampadath via gem5-users
: Prathap Kolakkampadath kvprat...@gmail.com Date: Thursday, November 6, 2014 at 5:47 PM To: Andreas Hansson andreas.hans...@arm.com Cc: gem5 users mailing list gem5-users@gem5.org Subject: Re: [gem5-users] DRAM memory access latency Hello Andreas, Thanks for your reply. Ok. I got

[gem5-users] DRAM memory access latency

2014-11-04 Thread Prathap Kolakkampadath via gem5-users
Hello Users, I am measuring the DRAM worst-case memory access latency (tRP + tRCD + tCL + tBURST) using a latency benchmark on arm_detailed (1GHz) with a 1MB shared L2 cache and LPDDR3 x32 DRAM. According to the DRAM timing parameters, tRP = '15ns', tRCD = '15ns', tCL = '15ns', tBURST = '5ns'. Latency measured
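For reference, the straight sum of those parameters gives tRP + tRCD + tCL + tBURST = 15 + 15 + 15 + 5 = 50 ns for a closed-row access, before any queueing, crossbar, or controller front-end/back-end latency is added on top.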

Re: [gem5-users] DRAM memory access latency

2014-11-04 Thread Prathap Kolakkampadath via gem5-users
, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hi Toa, Amin, Thanks for your reply. To rule out inter-bank interference and queueing delay, I have partitioned the banks so that the latency benchmark has exclusive access to a bank. Also, the latency benchmark is a pointer-chasing benchmark

Re: [gem5-users] Questions on DRAM Controller model

2014-10-16 Thread Prathap Kolakkampadath via gem5-users
flags. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Tuesday, October 14, 2014 at 9:21 PM To: Andreas Hansson andreas.hans...@arm.com Cc: gem5 users mailing list gem5-users@gem5.org Subject: Re: [gem5-users] Questions on DRAM Controller model Hello Andreas

Re: [gem5-users] Questions on DRAM Controller model

2014-10-15 Thread Prathap Kolakkampadath via gem5-users
? There are plenty debug flags to help in drilling down on this issue. Have a look in src/cpu/o3/Sconscript for the O3 related debug flags and src/mem/cache/Sconscript for the cache flags. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Tuesday, October 14, 2014 at 9:21 PM To: Andreas

Re: [gem5-users] Questions on DRAM Controller model

2014-10-14 Thread Prathap Kolakkampadath via gem5-users
not understand why you think it would ever fill up. For “debugging” make sure that the config.ini actually captures what you think you are simulating. Also, you have a lot of DRAM-related stats in the stats.txt output. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date

[gem5-users] Unit of avg_miss_latency

2014-10-14 Thread Prathap Kolakkampadath via gem5-users
Hi Users, Below is the avg miss latency for l2 captured from stats.txt. What is the unit of this? Does this mean 230ns? system.l2.ReadReq_avg_miss_latency::cpu0.data 230466.136072 # average ReadReq miss latency Thanks, Prathap

Re: [gem5-users] Unit of avg_miss_latency

2014-10-14 Thread Prathap Kolakkampadath via gem5-users
Thanks Amin On Oct 14, 2014 8:27 PM, Amin Farmahini amin...@gmail.com wrote: Picosecond. Each tick is a picosecond in gem5. Amin On Tue, Oct 14, 2014 at 7:53 PM, Prathap Kolakkampadath via gem5-users gem5-users@gem5.org wrote: Hi Users, Below is the avg miss latency for l2 captured
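Applying that here: 230466.136072 ticks * 1 ps per tick = about 230466 ps, i.e. roughly 230.5 ns.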

Re: [gem5-users] Questions on DRAM Controller model

2014-10-13 Thread Prathap Kolakkampadath via gem5-users
to correlate with a real platform, also see Anthony’s ISPASS paper from last year for some sensible steps in simplifying the problem and dividing it into manageable chunks. Good luck. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Monday, 13 October 2014 00:29 To: Andreas

Re: [gem5-users] Questions on DRAM Controller model

2014-10-13 Thread Prathap Kolakkampadath via gem5-users
mode is for fast-forwarding only. Once you actually want to get some representative performance numbers you have to run in timing mode with either the O3 or Minor CPU model. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Monday, 13 October 2014 10:19 To: Andreas Hansson

Re: [gem5-users] Questions on DRAM Controller model

2014-10-12 Thread Prathap Kolakkampadath via gem5-users
timings etc. It is also worth checking if the controller hardware treats writes the same way the model does (early responses, minimise switching). Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Tuesday, 9 September 2014 18:56 To: Andreas Hansson andreas.hans...@arm.com Cc

Re: [gem5-users] Questions on DRAM Controller model

2014-10-12 Thread Prathap Kolakkampadath via gem5-users
observing? Thanks, Prathap On Sun, Oct 12, 2014 at 4:56 PM, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hello Andreas, Even after configuring the model like the actual hardware, I am still not seeing enough interference on the read request under consideration. I am using the classic

Re: [gem5-users] Tracking DRAM read/write requests

2014-10-04 Thread Prathap Kolakkampadath via gem5-users
it is read from DRAM and the respective MSHR is cleared? Regards, Prathap On Fri, Oct 3, 2014 at 3:58 PM, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hi Users, I am using an O3 4-CPU ARMv7 system with DDR3_1600_x64. L1 I/D cache size = 32kB and L2 cache size = 1MB. L1 MSHRs = 10 and L2 MSHRs

[gem5-users] Tracking DRAM read/write requests

2014-10-03 Thread Prathap Kolakkampadath via gem5-users
Hi Users, I am using an O3 4-CPU ARMv7 system with DDR3_1600_x64. L1 I/D cache size = 32kB and L2 cache size = 1MB. L1 MSHRs = 10 and L2 MSHRs = 30. According to my understanding, this will enable each core to generate 10 outstanding memory requests. I am running a bandwidth test on all CPUs, which is
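Assuming the MSHRs are the only limiter, those numbers bound the requests in flight towards memory at min(4 cores * 10 L1 MSHRs, 30 L2 MSHRs) = 30 outstanding misses.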

[gem5-users] Query regarding DRAM controller's FR-FCFS scheduler implementation.

2014-10-01 Thread Prathap Kolakkampadath via gem5-users
Hi Users, I am going through the FR-FCFS implementation of the gem5 DRAM controller. When queue.size() is greater than 1 and memSchedPolicy == Enums::frfcfs, the ChooseNext function calls reorderQueue. The reorderQueue function first searches the queue for row hits, and if there is a row hit

Re: [gem5-users] Query regarding DRAM controller's FR-FCFS scheduler implementation.

2014-10-01 Thread Prathap Kolakkampadath via gem5-users
On Wed, Oct 1, 2014 at 1:59 PM, Prathap Kolakkampadath kvprat...@gmail.com wrote: Hi Users, I am going through the FR-FCFS implementation of the gem5 DRAM controller. When the queue.size() is greater than 1 and memSchedPolicy == Enums::frfcfs, the ChooseNext function calls reorderQueue

Re: [gem5-users] Query regarding DRAM controller's FR-FCFS scheduler implementation.

2014-10-01 Thread Prathap Kolakkampadath via gem5-users
hits. Thanks, Amin On Wed, Oct 1, 2014 at 1:59 PM, Prathap Kolakkampadath via gem5-users gem5-users@gem5.org wrote: Hi Users, I am going through the FR-FCFS implementation of the gem5 DRAM controller. When the queue.size() is greater than 1 and memSchedPolicy == Enums::frfcfs

[gem5-users] Switching CPU type from a checkpoint fails when using memory type dramsim2

2014-09-09 Thread Prathap Kolakkampadath via gem5-users
Hello Everybody, I have created a checkpoint with cpu type 'atomic' and mem type 'dramsim2'. While switching to cpu type 'detailed' from this checkpoint, the simulation fails with the error below. Switch at curTick count:1 info: Entering event queue @ 3534903961500. Starting simulation... writing vis

Re: [gem5-users] Questions on DRAM Controller model

2014-09-09 Thread Prathap Kolakkampadath via gem5-users
, and then be fast. Both of these goals are delivered upon by the model. I hope that explains it. If there is anything in the results you do not agree with, please do say so. Thanks, Andreas From: Prathap Kolakkampadath via gem5-users gem5-users@gem5.org Reply-To: Prathap Kolakkampadath kvprat

[gem5-users] Questions on DRAM Controller model

2014-09-08 Thread Prathap Kolakkampadath via gem5-users
Hello Everybody, I am using DDR3_1600_x64. I am trying to understand the memory controller design and have a few doubts about it. 1) Does the memory controller have a separate bank request buffer (read and write buffers) for each bank, or just a global queue? 2) Is there a scheduler per bank which

Re: [gem5-users] Switching from Atomic CPU to Detailed CPU after Linux booted up

2014-09-03 Thread Prathap Kolakkampadath via gem5-users
) Restore from the checkpoint with the detailed CPU (specify the desired cpu model and also -r1 to restore from the checkpoint) On Tue, Sep 2, 2014 at 3:20 PM, Prathap Kolakkampadath via gem5-users gem5-users@gem5.org wrote: Hi Users, I am trying to run some benchmarks on ARM detailed cpu

[gem5-users] Switching from Atomic CPU to Detailed CPU after Linux booted up

2014-09-02 Thread Prathap Kolakkampadath via gem5-users
Hi Users, I am trying to run some benchmarks on the ARM detailed CPU. However, the simulation takes a very long time for Linux to boot up and gets stuck at freeing init memory, not mounting the filesystem. With the atomic CPU, the kernel boots up to the console quite quickly. I would like to know if I

Re: [gem5-users] How to add shared nonblocking L3 cache in gem5?

2014-08-28 Thread Prathap Kolakkampadath via gem5-users
and give it suitable parameters for an L3. This should be fairly straightforward and also easy to instantiate in the Python scripts (e.g. fs.py). Andreas From: Prathap Kolakkampadath via gem5-users gem5-users@gem5.org Reply-To: Prathap Kolakkampadath kvprat...@gmail.com, gem5 users
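A rough sketch of that suggestion in a CacheConfig.py/fs.py-style script, hooking the L3 between a new crossbar and the memory bus; the parameter values are illustrative, and the cache class and port names differ across gem5 versions:

  from m5.objects import Cache, L2XBar

  class L3Cache(Cache):
      size = '8MB'
      assoc = 16
      tag_latency = 20
      data_latency = 20
      response_latency = 20
      mshrs = 32
      tgts_per_mshr = 12
      write_buffers = 16

  # 'system' with an existing L2 and membus is assumed; the L2's mem side
  # should now be pointed at tol3bus instead of membus.
  system.tol3bus = L2XBar()
  system.l3 = L3Cache()
  system.l3.cpu_side = system.tol3bus.master   # .mem_side_ports in newer gem5
  system.l3.mem_side = system.membus.slave     # .cpu_side_ports in newer gem5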

Re: [gem5-users] How to add shared nonblocking L3 cache in gem5?

2014-08-28 Thread Prathap Kolakkampadath via gem5-users
(or use split L3’s as well). If you’ve got pydot installed gem5 generates a PDF/SVG showing the system layout to visually ensure you’ve accomplished what you intended. Andreas From: Prathap Kolakkampadath kvprat...@gmail.com Date: Thursday, 28 August 2014 17:47 To: Andreas Hansson

[gem5-users] How to add shared nonblocking L3 cache in gem5?

2014-08-26 Thread Prathap Kolakkampadath via gem5-users
Hi Users, I am new to gem5 and I want to add a non-blocking shared last-level cache (L3). I can see L3 cache options in Options.py with default values set. However, there is no entry for L3 in Caches.py and CacheConfig.py. So would extending Caches.py and CacheConfig.py be enough to create an L3

Re: [gem5-users] Integrate DRAMSim2 with gem5

2014-08-25 Thread Prathap Kolakkampadath via gem5-users
/Publications Andreas From: Debiprasanna Sahoo via gem5-users gem5-users@gem5.org Reply-To: Debiprasanna Sahoo debiprasanna.sa...@gmail.com, gem5 users mailing list gem5-users@gem5.org Date: Monday, August 25, 2014 at 5:24 AM To: Prathap Kolakkampadath kvprat...@gmail.com, gem5 users mailing list

[gem5-users] Integrate DRAMSim2 with gem5

2014-08-24 Thread Prathap Kolakkampadath via gem5-users
Hi Users, Has anyone successfully integrated DRAMSim2 with gem5? If so, please point me to the patch and the version of gem5 used. Thanks, Prathap