Hi Nilay,

At one point in time, the combination of several letters at the beginning of 
the action name corresponded to the shorthand name for the action.  The 
shorthand name is the letter or letter combination that appears in the HTML 
tables.  SLICC may have once enforced that the combination of letters matched 
the HTML shorthand name, but I don't believe it does right now.  Therefore the 
letters are just a guide to match actions with their associated shorthand 
name.  And yes, you can use any combination.
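
For reference, a SLICC action declaration looks roughly like this (the action 
name and description below are made up purely for illustration):

  action(a_issueRequest, "a", desc="Issue a request to the directory") {
    // body: enqueue messages, update the cache entry, etc.
  }

The second argument ("a") is the shorthand that shows up in the generated HTML 
tables, and the "a_" prefix on the action name is only a convention that 
mirrors it.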

Brad


-----Original Message-----
From: Nilay Vaish [mailto:ni...@cs.wisc.edu] 
Sent: Tuesday, January 04, 2011 12:03 PM
To: Beckmann, Brad
Cc: Default
Subject: RE: Review Request: Changing how CacheMemory interfaces with SLICC

Brad

Is there a reason why each action name follows the pattern <combination of 
several letters>_<action performed by the action>? The letters used are not 
abbreviations of the action performed. Can we use any combination?

Thanks
Nilay

On Tue, 4 Jan 2011, Beckmann, Brad wrote:

> Hi Nilay,
>
> My responses are below:
>
>> The main thing I would like to see improved is not having to 
>> differentiate between “entry” and “entry_ptr” in the .sm 
>> files.  Am I correct that the only functions in the .sm files that 
>> are passed an “entry_ptr” are “is_valid_ptr”, 
>> “getCacheEntry”, and “set_cache_entry”?  If so, it seems that 
>> all three functions are generated with unique python code, either in 
>> an AST file or StateMachine.py.  Therefore, could we just pass these 
>> functions “entry” and rely on the underlying python code to 
>> generate the correct references?  This would make things more 
>> readable (“is_valid_ptr()” becomes “is_valid”), and it 
>> doesn’t require the slicc programmer to understand which functions take an 
>> entry pointer versus the entry itself.
>> If we can’t make such a change, I worry about how much extra 
>> complexity this change pushes on the slicc programmer.
>
> There are functions that are passed a cache entry or a transaction buffer 
> entry as an argument. Currently, I assume that these arguments are passed 
> using pointers.
>
> [BB] So does that mean that the cache entry is always passed in as a pointer? 
>  If so, can one just use "cache_entry" for all function calls and remove any 
> use of "cache_entry_ptr" in the .sm files?  That is essentially what I would 
> like to see.
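>
> As a rough sketch (the action, message buffer, and field names here are 
> hypothetical), the kind of .sm usage I have in mind is:
>
>   action(s_storeData, "s", desc="Write incoming data into the cache entry") {
>     peek(responseNetwork_in, ResponseMsg) {
>       assert(is_valid(cache_entry));
>       cache_entry.DataBlk := in_msg.DataBlk;
>     }
>   }
>
> The slicc programmer only ever writes "cache_entry" and "is_valid", and the 
> python code underneath generates whatever pointer handling is needed.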
>
>>
>> Another suggestion to make things more readable: please replace 
>> the name L1IcacheMemory_entry with L1I_entry.  Do the same for 
>> L1D_entry and L2_entry.  That will shorten many of your lines.
>
> The names of the cache entry variables are currently tied to the names of 
> the cache memory variables belonging to the machine. If the name of the cache 
> memory variable is A, then the corresponding cache entry variable is named 
> A_entry.
>
> [BB] Ah, I see.  Ok then let's just keep them the way they are for now.  We 
> can deal with shortening the names later.
>
>> So am I correct that hammer’s simultaneous usage of valid L1 and L2 
>> cache entries in certain transitions is the only reason that within 
>> all actions, the getCacheEntry calls take multiple cache entries?  If 
>> so, I think it would be fairly trivial to use a tbe entry as an 
>> intermediary between the L1 and L2 for those particular hammer 
>> transitions.  That way only one cache entry is valid at any 
>> particular time, and we can simply use the variable cache_entry in 
>> the actions.  That should clean things up a lot.
>
> Oops! Should have thought of that before doing all those changes. But can we 
> assume that we would always have only one valid cache entry pointer at any 
> given time? If that's true, I would probably revert to the previous version of 
> the patch. This should also resolve the naming issue.
>
> [BB] I wouldn't have expected you to realize that.  It is one of those things 
> that isn't completely obvious without spending a lot of time developing 
> protocols.  Yes, I think it is easiest for you to just revert to the previous 
> version of the patch and modify the hammer protocol to use a tbe entry 
> as an intermediary.  We've always had an unofficial rule that a controller 
> can only manage multiple caches if those caches are exclusive with respect to 
> each other.  For the most part, that rule has been followed by all the 
> protocols I'm familiar with.  I think your change just makes that an official 
> policy.
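>
> As a rough sketch of the intermediary idea (the action name and fields below 
> are hypothetical, not taken from the actual hammer protocol, and it assumes 
> the TBE is visible inside the action as "tbe"):
>
>   action(ct_copyCacheToTBE, "ct", desc="Stash the cache entry's data in the TBE") {
>     assert(is_valid(cache_entry));
>     // copy the data out of the (e.g. L2) entry before deallocating it
>     tbe.DataBlk := cache_entry.DataBlk;
>   }
>
> The L2-to-L1 transition would run this action against the L2 entry, 
> deallocate the L2 block, allocate the L1 block, and then copy the data back 
> out of the TBE, so only one cache entry is ever valid at a time.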
>
>> By the way, once you check in these patches, the MESI_CMP_directory 
>> protocol will be deprecated, correct?  If so, make sure you include a 
>> patch that removes it from the regression tester.
>
> I have a patch for the protocol, but I need to discuss it. Do you 
> think it is possible that a protocol is not actually deadlocked, but the 
> random tester reports a deadlock because the underlying memory system is 
> taking too 
> much time? The patch works for 1, 2, and 4 processors for 10,000,000 
> loads. I have tested these processor configurations with 40 different 
> seed values. But for 8 processors, the random tester outputs something 
> like this --
>
> panic: Possible Deadlock detected. Aborting!
> version: 6 request.paddr: 12779 m_writeRequestTable: 15 current time: 
> 369500011 issue_time: 368993771 difference: 506240 @ cycle 369500011 
> [wakeup:build/ALPHA_SE_MESI_CMP_directory/mem/ruby/system/Sequencer.cc, line 123]
>
> [BB] Yes, the current version of MESI_CMP_directory is broken in many places. 
>  Arka just told me that he recently fixed many of those problems.  I suggest 
> getting his fixes and working from there.
>
> Brad
>
_______________________________________________
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev
