Re: [gem5-users] Dynamic allocation of L1 MSHRs
Hi,

I think you should look at the isFull() function, which checks whether the MSHR queue is full. On a miss request you can check which core issued it and enforce a per-core quota there, effectively allocating the MSHR queue capacity per core dynamically.

___ gem5-users mailing list gem5-users@gem5.org http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
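To make the suggestion concrete, here is a minimal, self-contained Python model of the idea. This is NOT gem5 source (the real check is the C++ MSHRQueue::isFull() in gem5's cache code); the class and method names below are illustrative assumptions showing how a per-core, runtime-adjustable MSHR quota could behave:

```python
# Illustrative model of per-core, dynamically adjustable MSHR limits.
# Not gem5 code; all names here are assumptions for the sketch.

class PerCoreMSHRQueue:
    def __init__(self, total_entries, num_cores):
        self.total_entries = total_entries
        # Start with an even split of the queue across cores.
        self.limit = {c: total_entries // num_cores for c in range(num_cores)}
        self.allocated = {c: 0 for c in range(num_cores)}

    def is_full(self, core_id):
        # Per-core variant of the isFull() check: a core blocks once it
        # exhausts its own quota, even if the queue has free entries.
        return self.allocated[core_id] >= self.limit[core_id]

    def allocate(self, core_id):
        if self.is_full(core_id):
            return False  # the cache would block this core's miss
        self.allocated[core_id] += 1
        return True

    def deallocate(self, core_id):
        self.allocated[core_id] -= 1

    def set_limit(self, core_id, new_limit):
        # The "dynamic" part: reassign a core's MSHR quota at runtime
        # (in real gem5 this could be triggered from simulated software,
        # e.g. via an m5 pseudo instruction).
        self.limit[core_id] = new_limit
```

The point of the sketch is that the dynamic behavior lives entirely in the is_full() comparison: nothing about the physical queue changes, only the per-core threshold the miss path checks against.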
Re: [gem5-users] Dynamic allocation of L1 MSHRs
Hello Davesh,

I did this by manipulating the isFull() function, as you rightly pointed out. Thanks for the reply.

Regards,
Prathap
Re: [gem5-users] Dynamic allocation of L1 MSHRs
Hi Prathap,

I have one doubt, though. Even if we statically change the number of MSHRs in Caches.py (for all cores) or in CacheConfig.py (for individual cores), how do we confirm the updated MSHR value? When I look at config.ini, I see the following:

[system.cpu0.dcache]
demand_mshr_reserve=1
mshrs=6
tgts_per_mshr=8

[system.cpu0.icache]
demand_mshr_reserve=1
mshrs=2
tgts_per_mshr=8

But in Caches.py, the configuration is:

class L1Cache(BaseCache):
    assoc = 2
    hit_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
    is_top_level = True

So where does it get those values from?

Thanks and Warm Regards,
Davesh Shingari
Master's in Computer Engineering [EE], Arizona State University
[gem5-users] Dynamic allocation of L1 MSHRs
Hello Users,

I am simulating a detailed (O3) quad-core ARM CPU with private L1 caches and a shared L2 cache. I am trying to regulate the number of outstanding requests each core can generate. I know that by statically changing the number of L1 MSHRs (passed as parameters from O3v7a.py), I can restrict the number of outstanding requests of a core.

I would like each core to have a private cache with a different number of L1 MSHRs (e.g. core0 with 1 MSHR, core2 with 3 MSHRs, etc.). How can I make this assignment through the configuration file? I would also like to change this allocation dynamically at runtime. Can I use an m5 (special) instruction to do this?

Can anyone shed some light on this?

Thanks,
Prathap
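For the static, per-core part of the question, a fragment along these lines inside the per-CPU loop of CacheConfig.py should work. This is a sketch, not gem5 code copied verbatim: mshrs_per_core is a made-up list, and the option names and addPrivateSplitL1Caches() call follow the conventions of gem5's classic configuration scripts.

```python
# Sketch: assigning a different L1 MSHR count to each core inside
# CacheConfig.py's per-CPU loop. mshrs_per_core is hypothetical; the
# other names follow gem5's classic config scripts.
mshrs_per_core = [1, 3, 2, 4]  # core0 -> 1 MSHR, core1 -> 3, ...

for i in range(options.num_cpus):
    icache = L1Cache(size=options.l1i_size,
                     assoc=options.l1i_assoc)
    dcache = L1Cache(size=options.l1d_size,
                     assoc=options.l1d_assoc,
                     mshrs=mshrs_per_core[i])  # per-core override
    system.cpu[i].addPrivateSplitL1Caches(icache, dcache)
```

Because mshrs is just a SimObject parameter, overriding it per instance here takes precedence over the class default in Caches.py, and the value chosen for each core should show up under [system.cpuN.dcache] in config.ini.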
Re: [gem5-users] Dynamic allocation of L1 MSHRs
Hello Users,

I understood that, through CacheConfig.py, I can connect L1 caches with a different number of MSHRs to each core. However, I am not sure how to dynamically change the number of L1 MSHRs allocated to each core. Can someone shed some light on this?

Thanks,
Prathap