Re: [gem5-users] Dynamic allocation of L1 MSHRs

2015-07-21 Thread Prathap Kolakkampadath
Which CPU type are you using?
For arm_detailed, look at O3_ARM_v7a.py.
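To illustrate why config.ini shows different values than Caches.py, here is a plain-Python sketch (no gem5 imports; the class bodies are illustrative of gem5's O3_ARM_v7a.py, not a verbatim copy): the CPU-model script defines its own cache classes whose attributes override the generic L1Cache defaults, and the simulator instantiates those, so their values are what config.ini records.

```python
# Plain-Python sketch (no gem5 imports) of why config.ini shows mshrs=6
# rather than the mshrs=4 default: the CPU-model script (O3_ARM_v7a.py
# for arm_detailed) defines its own cache classes, and their attributes
# override the generic L1Cache defaults from Caches.py. Class bodies
# here are illustrative, not a verbatim copy of gem5's code.
class L1Cache:                      # generic defaults, as in Caches.py
    assoc = 2
    mshrs = 4
    tgts_per_mshr = 20

class O3_ARM_v7a_DCache(L1Cache):   # CPU-model override
    mshrs = 6
    tgts_per_mshr = 8

# The simulator instantiates the override class, so config.ini records 6/8:
print(O3_ARM_v7a_DCache.mshrs, O3_ARM_v7a_DCache.tgts_per_mshr)  # 6 8
```

So to change the MSHR count for arm_detailed, edit the CPU-model cache classes rather than the generic ones in Caches.py.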

On Tue, Jul 21, 2015 at 5:57 PM, Davesh Shingari 
wrote:

> Hi Prathap
>
> I have one doubt though.
> Even if we statically change the number of MSHRs in Caches.py (for all
> cores) or in CacheConfig.py (for individual cores), how do we confirm the
> updated MSHR value? When I look at config.ini, I see the following:
>
> [system.cpu0.dcache]
> demand_mshr_reserve=1
> mshrs=6
> tgts_per_mshr=8
>
> [system.cpu0.icache]
> demand_mshr_reserve=1
> mshrs=2
> tgts_per_mshr=8
>
> But in Caches.py, configuration is:
>
> class L1Cache(BaseCache):
>     assoc = 2
>     hit_latency = 2
>     response_latency = 2
>     mshrs = 4
>     tgts_per_mshr = 20
>     is_top_level = True
>
>
> So from where does it get those values?
>
> On Tue, Jul 21, 2015 at 1:46 PM, Prathap Kolakkampadath <
> kvprat...@gmail.com> wrote:
>
>> Hello Davesh,
>>
>> I did this by manipulating the isFull function as you have rightly
>> pointed out.
>> Thanks for the reply.
>>
>> Regards,
>> Prathap
>>
>> On Tue, Jul 21, 2015 at 2:20 PM, Davesh Shingari <
>> shingaridav...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> I think you should look at the isFull function, which checks whether the
>>> MSHR queue is full. There you can check whether the request is a miss and
>>> size the MSHR queue per core dynamically.
>>>
>>> ___
>>> gem5-users mailing list
>>> gem5-users@gem5.org
>>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>
>>
>>
>>
>
>
>
> --
> Have a great day!
>
> Thanks and Warm Regards
> Davesh Shingari
> Master's in Computer Engineering [EE]
> Arizona State University
>
> davesh.shing...@asu.edu
>
>

Re: [gem5-users] Dynamic allocation of L1 MSHRs

2015-07-21 Thread Davesh Shingari
Hi Prathap

I have one doubt though.
Even if we statically change the number of MSHRs in Caches.py (for all
cores) or in CacheConfig.py (for individual cores), how do we confirm the
updated MSHR value? When I look at config.ini, I see the following:

[system.cpu0.dcache]
demand_mshr_reserve=1
mshrs=6
tgts_per_mshr=8

[system.cpu0.icache]
demand_mshr_reserve=1
mshrs=2
tgts_per_mshr=8

But in Caches.py, configuration is:

class L1Cache(BaseCache):
    assoc = 2
    hit_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
    is_top_level = True


So from where does it get those values?

On Tue, Jul 21, 2015 at 1:46 PM, Prathap Kolakkampadath  wrote:

> Hello Davesh,
>
> I did this by manipulating the isFull function as you have rightly pointed
> out.
> Thanks for the reply.
>
> Regards,
> Prathap
>
> On Tue, Jul 21, 2015 at 2:20 PM, Davesh Shingari wrote:
>
>> Hi
>>
>> I think you should look at the isFull function, which checks whether the
>> MSHR queue is full. There you can check whether the request is a miss and
>> size the MSHR queue per core dynamically.
>>
>
>
>
>



-- 
Have a great day!

Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University

davesh.shing...@asu.edu

Re: [gem5-users] Handling write backs

2015-07-21 Thread Prathap Kolakkampadath
Hello Users,

I figured out that gem5 implements a fetch-on-write-miss policy.
On a write miss, allocateMissBuffer() is called to allocate an MSHR, which
sends the timing request to fetch the cache line.
Once the response is ready, handleFill() is called on the response path; it
is responsible for inserting the block into the cache. While inserting, if
the victim block being replaced is dirty, a write-back packet is generated
and copied into the write buffers.
After that, satisfyCpuSideRequest() is called to write the data into the
newly filled block and mark it dirty.
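The sequence above can be sketched as a toy model in plain Python (a direct-mapped cache; the helper names only mirror the gem5 functions mentioned and are not gem5's actual API):

```python
# Toy model of a write miss under fetch-on-write-miss. The step comments
# name the gem5 functions described above; the code itself is only an
# illustration, not gem5's implementation.
class Block:
    def __init__(self, addr, data, dirty=False):
        self.addr, self.data, self.dirty = addr, data, dirty

class ToyCache:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.sets = {}                      # set index -> Block

    def insert(self, addr, data):
        """Insert the fetched line; return the evicted victim, if any."""
        idx = addr % self.num_sets
        victim = self.sets.get(idx)
        self.sets[idx] = Block(addr, data)
        return victim

def handle_write_miss(cache, addr, data, memory, write_buffer):
    # allocateMissBuffer(): the miss triggers a fetch of the whole line
    # from memory -- this is the extra DRAM read seen in the stats.
    line = memory[addr]
    # handleFill(): insert the block; a dirty victim becomes a
    # write-back packet queued in the write buffers.
    victim = cache.insert(addr, line)
    if victim is not None and victim.dirty:
        write_buffer.append((victim.addr, victim.data))
    # satisfyCpuSideRequest(): merge the store data and mark the block dirty.
    blk = cache.sets[addr % cache.num_sets]
    blk.data = data
    blk.dirty = True
```

This also shows why a write-miss-heavy test produces one DRAM read per miss: the line is fetched before the store data is merged into it.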

Thanks,
Prathap
On Tue, Jul 21, 2015 at 11:21 AM, Prathap Kolakkampadath <
kvprat...@gmail.com> wrote:

> Hello Users,
>
> I am using the classic memory system. What is the write miss policy
> implemented in gem5?
> Looking at the code, it seems gem5 implements a *no-fetch-on-write-miss*
> policy: access() inserts a block into the cache when the request is a
> writeback and it misses the cache.
> However, when I run a test with a bunch of write misses, I see an equal
> number of reads and writes to DRAM. This could happen if the policy is
> *fetch-on-write-miss*. So far I couldn't figure this out. It would be
> great if someone could share some pointers to understand this further.
>
> Thanks,
> Prathap
>
> On Mon, Jul 20, 2015 at 2:02 PM, Prathap Kolakkampadath <
> kvprat...@gmail.com> wrote:
>
>> Hello Users,
>>
>> I am running a test which generates write misses to the LLC. I am looking
>> at the cache implementation code. What I understood is that writes are
>> treated as write-backs; on a miss, write-back commands allocate a new
>> block in the cache, write the data into it, and mark the block as dirty.
>> When the dirty blocks are replaced, they are written into the write
>> buffers.
>>
>> I have the following questions on this:
>> 1) When I run the test which generates write misses, I see the same number
>> of reads from memory as writes. Does this mean write-backs also fetch the
>> cache line from main memory?
>>
>> 2) When will the blocks in the write buffers be written to memory? Are
>> they written when the write buffers are full?
>>
>> It would be great if someone could help me understand this.
>>
>>
>> Thanks,
>> Prathap
>>
>>
>

Re: [gem5-users] Dynamic allocation of L1 MSHRs

2015-07-21 Thread Prathap Kolakkampadath
Hello Davesh,

I did this by manipulating the isFull function as you have rightly pointed
out.
Thanks for the reply.

Regards,
Prathap

On Tue, Jul 21, 2015 at 2:20 PM, Davesh Shingari 
wrote:

> Hi
>
> I think you should look at the isFull function, which checks whether the
> MSHR queue is full. There you can check whether the request is a miss and
> size the MSHR queue per core dynamically.
>

Re: [gem5-users] Dynamic allocation of L1 MSHRs

2015-07-21 Thread Davesh Shingari
Hi

I think you should look at the isFull function, which checks whether the
MSHR queue is full. There you can check whether the request is a miss and
size the MSHR queue per core dynamically.
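As a concrete illustration of that idea, here is a small plain-Python sketch (hypothetical names, not gem5's C++ MSHRQueue) that partitions a shared MSHR pool with adjustable per-core limits inside the full check:

```python
# Plain-Python sketch (hypothetical names, not gem5's C++ MSHRQueue):
# each core gets its own MSHR budget, and the queue reports "full" to a
# core once that core's in-flight misses reach its share. The per-core
# limits can be changed at run time to repartition the MSHRs.
class ToyMSHRQueue:
    def __init__(self, num_entries, per_core_limit):
        self.num_entries = num_entries            # total MSHRs in the pool
        self.per_core_limit = per_core_limit      # e.g. {0: 4, 1: 2}
        self.in_flight = {c: 0 for c in per_core_limit}

    def is_full(self, core_id):
        # Global capacity check plus the per-core cap.
        total = sum(self.in_flight.values())
        return (total >= self.num_entries or
                self.in_flight[core_id] >= self.per_core_limit[core_id])

    def allocate(self, core_id):
        # Mirrors allocating an MSHR for a miss from this core.
        assert not self.is_full(core_id)
        self.in_flight[core_id] += 1

    def deallocate(self, core_id):
        self.in_flight[core_id] -= 1
```

Raising or lowering an entry in per_core_limit between allocations is the "dynamic" part: the next is_full check immediately reflects the new share.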


Re: [gem5-users] Handling write backs

2015-07-21 Thread Prathap Kolakkampadath
Hello Users,

I am using the classic memory system. What is the write miss policy
implemented in gem5?
Looking at the code, it seems gem5 implements a *no-fetch-on-write-miss*
policy: access() inserts a block into the cache when the request is a
writeback and it misses the cache.
However, when I run a test with a bunch of write misses, I see an equal
number of reads and writes to DRAM. This could happen if the policy is
*fetch-on-write-miss*. So far I couldn't figure this out. It would be
great if someone could share some pointers to understand this further.

Thanks,
Prathap

On Mon, Jul 20, 2015 at 2:02 PM, Prathap Kolakkampadath  wrote:

> Hello Users,
>
> I am running a test which generates write misses to the LLC. I am looking
> at the cache implementation code. What I understood is that writes are
> treated as write-backs; on a miss, write-back commands allocate a new
> block in the cache, write the data into it, and mark the block as dirty.
> When the dirty blocks are replaced, they are written into the write
> buffers.
>
> I have the following questions on this:
> 1) When I run the test which generates write misses, I see the same number
> of reads from memory as writes. Does this mean write-backs also fetch the
> cache line from main memory?
>
> 2) When will the blocks in the write buffers be written to memory? Are
> they written when the write buffers are full?
>
> It would be great if someone could help me understand this.
>
>
> Thanks,
> Prathap
>
>