Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-20 Thread Radhika Jagtap

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8425
---



src/mem/cache/tags/base.cc (line 59)


More than 80 characters on this line.



src/mem/cache/tags/base_set_assoc.hh (line 220)


Bad whitespace around here.



src/mem/cache/tags/fa_lru.cc (lines 214 - 223)


Indentation is off.


- Radhika Jagtap


On June 16, 2016, 6:55 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 6:55 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
>   src/mem/cache/tags/base.cc 80e79ae636ca 
>   src/mem/cache/tags/Tags.py 80e79ae636ca 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
>   src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
>   src/mem/cache/tags/base.hh 80e79ae636ca 
>   configs/common/Caches.py 80e79ae636ca 
>   src/mem/cache/Cache.py 80e79ae636ca 
>   src/mem/cache/base.hh 80e79ae636ca 
>   src/mem/cache/base.cc 80e79ae636ca 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-20 Thread Radhika Jagtap


> On June 17, 2016, 7:57 a.m., Pierre-Yves Péneau wrote:
> > I don't like the variable names; I think they are confusing, especially in 
> > the Python part, which is the user-facing part. "lookup_latency" does not 
> > clearly refer to the tag lookup action, and "ram_latency" is also not very 
> > clear. Maybe something like "tag_latency" and "line_latency" would be 
> > better? I think the two parts of a cache are well identified this way.
> 
> Sophiane SENNI wrote:
> Hi Pierre-Yves,
> 
> I agree with you that the variable names in the Python part should not be 
> confusing for users. I reused the names from a previous discussion with 
> Andreas H.
> We need feedback from other users to see what the best naming would be. In a 
> cache, there are tag arrays and data arrays, so maybe "tag_latency" and 
> "data_line_latency" could be a solution.
> Any feedback from other gem5 users would be useful.
> 
> Sophiane

Thanks for bringing this up. I vote for 'tag_latency' (or 'tag_lookup_latency') 
and 'data_latency'.

If I understand correctly, the patch has an impact on timing/stats only if 
sequential access is set to True, and in that case it only affects the hit 
latency. The timing on the miss path and the allocation of MSHR resources 
(MSHR entry, MSHR target, write buffer entry, asking for mem-side bus access) 
still use the forwardLatency value. The forwardLatency used to be 
'hit_latency' (at one point, not so far in the past, everything was 
'hit_latency' anyway!). But this change makes a distinction between tag and 
data access, so it is logical to make the forward latency equal to 
tag_latency. If you also had this analysis in mind, could you please add a 
comment for forwardLatency somewhere?
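To make that concrete, here is a minimal sketch of the semantics I have in 
mind; it is my reading of the discussion, not the patch itself, and the 
parameter name is just a placeholder:

    # Sketch only: an assumption about the intended semantics, not gem5 code.
    def forward_latency(tag_lookup_cycles):
        # The miss/forwarding path only consults the tags: a miss is known
        # after the tag lookup, before (or regardless of) any data-array
        # access, so the latency used to allocate MSHR/write-buffer entries
        # and to request the mem-side bus follows the tag lookup latency
        # rather than the full hit latency.
        return tag_lookup_cycles

    # Example: with a 2-cycle tag lookup, the request is forwarded after 2
    # cycles, independent of the data-array latency.
    assert forward_latency(2) == 2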


- Radhika


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
---


On June 16, 2016, 6:55 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 6:55 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
>   src/mem/cache/tags/base.cc 80e79ae636ca 
>   src/mem/cache/tags/Tags.py 80e79ae636ca 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
>   src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
>   src/mem/cache/tags/base.hh 80e79ae636ca 
>   configs/common/Caches.py 80e79ae636ca 
>   src/mem/cache/Cache.py 80e79ae636ca 
>   src/mem/cache/base.hh 80e79ae636ca 
>   src/mem/cache/base.cc 80e79ae636ca 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-17 Thread Sophiane SENNI


> On June 17, 2016, 7:57 a.m., Pierre-Yves Péneau wrote:
> > I don't like the variable names; I think they are confusing, especially in 
> > the Python part, which is the user-facing part. "lookup_latency" does not 
> > clearly refer to the tag lookup action, and "ram_latency" is also not very 
> > clear. Maybe something like "tag_latency" and "line_latency" would be 
> > better? I think the two parts of a cache are well identified this way.

Hi Pierre-Yves,

I agree with you that the variable names in the Python part should not be 
confusing for users. I reused the names from a previous discussion with 
Andreas H.
We need feedback from other users to see what the best naming would be. In a 
cache, there are tag arrays and data arrays, so maybe "tag_latency" and 
"data_line_latency" could be a solution.
Any feedback from other gem5 users would be useful.

Sophiane


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
---


On June 16, 2016, 6:55 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 6:55 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
>   src/mem/cache/tags/base.cc 80e79ae636ca 
>   src/mem/cache/tags/Tags.py 80e79ae636ca 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
>   src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
>   src/mem/cache/tags/base.hh 80e79ae636ca 
>   configs/common/Caches.py 80e79ae636ca 
>   src/mem/cache/Cache.py 80e79ae636ca 
>   src/mem/cache/base.hh 80e79ae636ca 
>   src/mem/cache/base.cc 80e79ae636ca 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-17 Thread Pierre-Yves Péneau

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
---


I don't like the variable names; I think they are confusing, especially in the 
Python part, which is the user-facing part. "lookup_latency" does not clearly 
refer to the tag lookup action, and "ram_latency" is also not very clear. 
Maybe something like "tag_latency" and "line_latency" would be better? I think 
the two parts of a cache are well identified this way.
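For illustration, a user-side cache configuration with names like these might 
read roughly as follows. This is only a sketch: "tag_latency" and 
"line_latency" are the names suggested above, not parameters that exist in the 
tree at this point, and the other values are arbitrary.

    # Hypothetical configs/common/Caches.py-style class using the suggested
    # names; the two latency parameters do not exist in gem5 yet.
    from m5.objects import Cache

    class L1Cache(Cache):
        size = '32kB'
        assoc = 2
        tag_latency = 2            # tag array lookup, in cycles
        line_latency = 2           # data (cache line) array access, in cycles
        sequential_access = False  # probe tags and data in parallel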

- Pierre-Yves Péneau


On June 16, 2016, 8:55 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 8:55 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
>   src/mem/cache/tags/base.cc 80e79ae636ca 
>   src/mem/cache/tags/Tags.py 80e79ae636ca 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
>   src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
>   src/mem/cache/tags/base.hh 80e79ae636ca 
>   configs/common/Caches.py 80e79ae636ca 
>   src/mem/cache/Cache.py 80e79ae636ca 
>   src/mem/cache/base.hh 80e79ae636ca 
>   src/mem/cache/base.cc 80e79ae636ca 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 6:55 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 11536:1a3a96d435ed
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to 
"False"), the tags and the data RAM are accessed in parallel. Therefore, the 
hit latency is the maximum of the tag lookup latency and the RAM access 
latency. On the other hand, if the cache access mode is sequential (the 
"sequential_access" parameter set to "True"), the tags and the data RAM are 
accessed sequentially. Therefore, the hit latency is the sum of the tag lookup 
latency and the RAM access latency.
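For illustration, the combination rule described above boils down to the 
following sketch (it uses the "lookup_latency"/"ram_latency" naming discussed 
in this review, but it is an illustration, not the patch code):

    # Sketch of the hit-latency rule described above; not gem5 code.
    def hit_latency(lookup_latency, ram_latency, sequential_access):
        if sequential_access:
            # Tags are checked first, then the data RAM is accessed.
            return lookup_latency + ram_latency
        # Tags and data RAM are probed in parallel; the slower one dominates.
        return max(lookup_latency, ram_latency)

    # Example: a 2-cycle tag lookup and a 3-cycle RAM access.
    assert hit_latency(2, 3, sequential_access=False) == 3
    assert hit_latency(2, 3, sequential_access=True) == 5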


Diffs (updated)
-

  src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
  src/mem/cache/tags/base.cc 80e79ae636ca 
  src/mem/cache/tags/Tags.py 80e79ae636ca 
  src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
  src/mem/cache/tags/base.hh 80e79ae636ca 
  configs/common/Caches.py 80e79ae636ca 
  src/mem/cache/Cache.py 80e79ae636ca 
  src/mem/cache/base.hh 80e79ae636ca 
  src/mem/cache/base.cc 80e79ae636ca 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> If I use the hg postreview extension (with the command "hg postreview -o -u 
> -e 3502"), the patch still does not apply cleanly.
> 
> Jason Lowe-Power wrote:
> Make sure you're applying your patch on top of the most recent version of 
> gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.
> 
> For instance, I believe BaseCache.py was renamed Cache.py in the last few 
> months (I don't remember exactly when).
> 
> Sophiane SENNI wrote:
> For the currently posted patch, I used the command "hg diff -g" and then 
> posted the patch manually through the ReviewBoard GUI. But this method does 
> not work properly either. As you noticed, some of the patch doesn't apply 
> cleanly.

You are right, the patch was applied on top of gem5-stable. I will apply it on 
top of the most recent version.
Thanks.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


On June 16, 2016, 3:27 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:27 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc UNKNOWN 
>   src/mem/cache/tags/fa_lru.hh UNKNOWN 
>   src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
>   src/mem/cache/tags/base.cc UNKNOWN 
>   src/mem/cache/tags/base.hh UNKNOWN 
>   src/mem/cache/tags/Tags.py UNKNOWN 
>   src/mem/cache/base.hh UNKNOWN 
>   src/mem/cache/base.cc UNKNOWN 
>   src/mem/cache/BaseCache.py UNKNOWN 
>   configs/common/Caches.py UNKNOWN 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> If I use the hg postreview extension (with the command "hg postreview -o -u 
> -e 3502"), the patch still does not apply cleanly.
> 
> Jason Lowe-Power wrote:
> Make sure you're applying your patch on top of the most recent version of 
> gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.
> 
> For instance, I believe BaseCache.py was renamed Cache.py in the last few 
> months (I don't remember exactly when).

For the currently posted patch, I used the command "hg diff -g" and then 
posted the patch manually through the ReviewBoard GUI. But this method does 
not work properly either. As you noticed, some of the patch doesn't apply 
cleanly.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


On June 16, 2016, 3:27 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:27 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc UNKNOWN 
>   src/mem/cache/tags/fa_lru.hh UNKNOWN 
>   src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
>   src/mem/cache/tags/base.cc UNKNOWN 
>   src/mem/cache/tags/base.hh UNKNOWN 
>   src/mem/cache/tags/Tags.py UNKNOWN 
>   src/mem/cache/base.hh UNKNOWN 
>   src/mem/cache/base.cc UNKNOWN 
>   src/mem/cache/BaseCache.py UNKNOWN 
>   configs/common/Caches.py UNKNOWN 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Jason Lowe-Power


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> If I use the hg postreview extension (with the command "hg postreview -o -u 
> -e 3502"), the patch still does not apply cleanly.

Make sure you're applying your patch on top of the most recent version of gem5 
(NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.

For instance, I believe BaseCache.py was renamed Cache.py in the last few 
months (I don't remember exactly when).


- Jason


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


On June 16, 2016, 3:27 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:27 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc UNKNOWN 
>   src/mem/cache/tags/fa_lru.hh UNKNOWN 
>   src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
>   src/mem/cache/tags/base.cc UNKNOWN 
>   src/mem/cache/tags/base.hh UNKNOWN 
>   src/mem/cache/tags/Tags.py UNKNOWN 
>   src/mem/cache/base.hh UNKNOWN 
>   src/mem/cache/base.cc UNKNOWN 
>   src/mem/cache/BaseCache.py UNKNOWN 
>   configs/common/Caches.py UNKNOWN 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 3:27 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 10875:dd94e2606640
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to 
"False"), the tags and the data RAM are accessed in parallel. Therefore, the 
hit latency is the maximum of the tag lookup latency and the RAM access 
latency. On the other hand, if the cache access mode is sequential (the 
"sequential_access" parameter set to "True"), the tags and the data RAM are 
accessed sequentially. Therefore, the hit latency is the sum of the tag lookup 
latency and the RAM access latency.


Diffs (updated)
-

  src/mem/cache/tags/fa_lru.cc UNKNOWN 
  src/mem/cache/tags/fa_lru.hh UNKNOWN 
  src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
  src/mem/cache/tags/base.cc UNKNOWN 
  src/mem/cache/tags/base.hh UNKNOWN 
  src/mem/cache/tags/Tags.py UNKNOWN 
  src/mem/cache/base.hh UNKNOWN 
  src/mem/cache/base.cc UNKNOWN 
  src/mem/cache/BaseCache.py UNKNOWN 
  configs/common/Caches.py UNKNOWN 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!

Hi Jason,

If I use the hg postreview extension (with the command "hg postreview -o -u -e 
3502"), the patch still does not apply cleanly.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


On June 16, 2016, 3:15 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:15 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/BaseCache.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/Tags.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/base_set_assoc.hh 
> 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/fa_lru.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/fa_lru.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 3:15 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 10875:dd94e2606640
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to 
"False"), the tags and the data RAM are accessed in parallel. Therefore, the 
hit latency is the maximum of the tag lookup latency and the RAM access 
latency. On the other hand, if the cache access mode is sequential (the 
"sequential_access" parameter set to "True"), the tags and the data RAM are 
accessed sequentially. Therefore, the hit latency is the sum of the tag lookup 
latency and the RAM access latency.


Diffs (updated)
-

  configs/common/Caches.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/BaseCache.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/Tags.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 3:14 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 10875:dd94e2606640
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to 
"False"), the tags and the data RAM are accessed in parallel. Therefore, the 
hit latency is the maximum of the tag lookup latency and the RAM access 
latency. On the other hand, if the cache access mode is sequential (the 
"sequential_access" parameter set to "True"), the tags and the data RAM are 
accessed sequentially. Therefore, the hit latency is the sum of the tag lookup 
latency and the RAM access latency.


Diffs (updated)
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Jason Lowe-Power

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


Hi Sophiane,

Thanks for the contribution. It looks like some of the patch doesn't apply 
cleanly in reviewboard. Did you use the hg postreview extension? It may also 
help to use the "-o" option on the extension.

Cheers!

- Jason Lowe-Power


On June 16, 2016, 1:37 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 1:37 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:b498767cb7d8
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel (the "sequential_access" parameter set 
> to "False"), the tags and the data RAM are accessed in parallel. Therefore, 
> the hit latency is the maximum of the tag lookup latency and the RAM access 
> latency. On the other hand, if the cache access mode is sequential (the 
> "sequential_access" parameter set to "True"), the tags and the data RAM are 
> accessed sequentially. Therefore, the hit latency is the sum of the tag 
> lookup latency and the RAM access latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py UNKNOWN 
>   src/mem/cache/BaseCache.py UNKNOWN 
>   src/mem/cache/base.hh UNKNOWN 
>   src/mem/cache/base.cc UNKNOWN 
>   src/mem/cache/tags/Tags.py UNKNOWN 
>   src/mem/cache/tags/base.hh UNKNOWN 
>   src/mem/cache/tags/base.cc UNKNOWN 
>   src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
>   src/mem/cache/tags/fa_lru.hh UNKNOWN 
>   src/mem/cache/tags/fa_lru.cc UNKNOWN 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 1:37 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 10875:b498767cb7d8
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to 
"False"), the tags and the data RAM are accessed in parallel. Therefore, the 
hit latency is the maximum of the tag lookup latency and the RAM access 
latency. On the other hand, if the cache access mode is sequential (the 
"sequential_access" parameter set to "True"), the tags and the data RAM are 
accessed sequentially. Therefore, the hit latency is the sum of the tag lookup 
latency and the RAM access latency.


Diffs (updated)
-

  configs/common/Caches.py UNKNOWN 
  src/mem/cache/BaseCache.py UNKNOWN 
  src/mem/cache/base.hh UNKNOWN 
  src/mem/cache/base.cc UNKNOWN 
  src/mem/cache/tags/Tags.py UNKNOWN 
  src/mem/cache/tags/base.hh UNKNOWN 
  src/mem/cache/tags/base.cc UNKNOWN 
  src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
  src/mem/cache/tags/fa_lru.hh UNKNOWN 
  src/mem/cache/tags/fa_lru.cc UNKNOWN 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-15 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 15, 2016, 2:43 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 10875:b498767cb7d8
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel (the "sequential_access" parameter set to 
"False"), the tags and the data RAM are accessed in parallel. Therefore, the 
hit latency is the maximum of the tag lookup latency and the RAM access 
latency. On the other hand, if the cache access mode is sequential (the 
"sequential_access" parameter set to "True"), the tags and the data RAM are 
accessed sequentially. Therefore, the hit latency is the sum of the tag lookup 
latency and the RAM access latency.


Diffs
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-15 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 15, 2016, 2:34 p.m.)


Review request for Default.


Summary (updated)
-

cache: Split the hit latency into tag lookup latency and RAM access latency


Repository: gem5


Description (updated)
---

Changeset 10875:b498767cb7d8
---
cache: Split the hit latency into tag lookup latency and RAM access latency


Diffs (updated)
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev