Thanks Andreas.

Yes, I can have a look.

What do you mean by "It is doing an ok job at a very specific timing
behaviour"? Could you explain, please, maybe through an example? That
would help me understand what is (and is not) taken into account when
accessing the cache.

Regarding sequential access, as a first change I think the cache access
latency needs to be split into a tag lookup latency and a data RAM
access latency.
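To illustrate what I have in mind, here is a small sketch of how the two split latencies could combine in each access mode. The function and parameter names are hypothetical, not the current gem5 parameters:

```python
# Hypothetical sketch (not the gem5 API): in sequential mode the tag
# lookup must complete before the single data RAM is read, so the two
# latencies add; in parallel mode both arrays are accessed at once, so
# the hit latency is bounded by the slower of the two.

def hit_latency(tag_latency, data_latency, sequential):
    """Return the cache hit latency in cycles for the given access mode."""
    if sequential:
        # Tag lookup first, then one data RAM access on a hit.
        return tag_latency + data_latency
    # Parallel: tag and data arrays are accessed simultaneously.
    return max(tag_latency, data_latency)

print(hit_latency(2, 3, sequential=True))   # 5 cycles
print(hit_latency(2, 3, sequential=False))  # 3 cycles
```

This also shows why sequential access is slower on a hit but touches fewer data arrays (saving energy), which is the trade-off mentioned below.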

Cordialement / Best Regards

SENNI Sophiane
Ph.D.
CNRS / LIRMM (www.lirmm.fr)

On 03/06/2016 at 09:36, Andreas Hansson wrote:
> Hi Sophiane,
>
> I think the bottom-line is that the classic cache needs a bit of a
> timing revamp. It is doing an ok job at a very specific timing
> behaviour, but it doesn’t quite respond as you want to the various
> parameters.
>
> It would be great if you could re-do the timing annotation, and also
> expose the more detailed C++ parameters in the python wrapper. For the
> various flows through the cache we’d then have to make sure that the
> right delay components are added or maxed together (tag lookup, RAM
> access, pipeline latency). Do you think you could have a look and post
> a patch?
>
> Thanks,
>
> Andreas
>
> From: senni sophiane <[email protected]>
> Date: Thursday, 2 June 2016 at 14:37
> To: gem5 users mailing list <[email protected]>, Andreas Hansson
> <[email protected]>
> Subject: Re: Parallel and Sequential access for cache
>
> Hi,
>
> Maybe my question was not clear. A simpler question: does gem5
> consider the same cache access latency (tag access + data block
> access) for both parallel and sequential modes?
>
> If not, where is this difference taken into account? Does anyone have
> part of the answer?
>
> Thanks
>
> Cordialement / Best Regards
>
> SENNI Sophiane
> Ph.D.
> CNRS / LIRMM (www.lirmm.fr)
> On 31/05/2016 at 15:01, senni sophiane wrote:
>>
>> Hi Andreas,
>>
>> I have a question regarding the cache access mode. I saw that the
>> cache can be accessed either in parallel (tag and data arrays
>> accessed in parallel) or sequentially (tags accessed in parallel,
>> and only one data array (or block) accessed on a hit).
>>
>> By reading the "src/mem/cache/tags/base_set_assoc.hh" file, I noticed
>> that the number of data blocks accessed differs depending on the
>> cache access mode. However, I did not see where the difference in
>> latency is taken into account. It seems that for both modes the
>> cache access latency corresponds to the "hit_latency" parameter,
>> doesn't it?
>>
>> If so, I am not sure what the "hit_latency" parameter represents.
>> Does it correspond to the tag lookup latency only? Or does it
>> represent the complete cache access (tag lookup latency plus the
>> latency to read out the data block)?
>>
>> As far as I know, sequential access is slower than parallel access.
>>
>>
>> Thanks for your help.
>>
>> -- 
>> Cordialement / Best Regards
>>
>> SENNI Sophiane
>

_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
