Greetings,
After a lot of debugging, here is a fairly small patch that fixes the
errors in the shared L2 cache. I believe it is working correctly, as I've
run several non-trivial workloads to completion. The simulations have been
non-deterministic, though, so I can't say with complete certainty that
everything is fixed, but I'm reasonably confident it is.
Most of the issues stemmed from assumptions about which cache level is the
last one before main memory, so perhaps a new flag along the lines of
isLowestBeforeMainMemory_ would be useful (a rough sketch is below).
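Roughly what I have in mind -- this is only an untested sketch, not part of
the attached patch; the flag name and where it gets set are placeholders:

    // Sketch only: a CacheController member set once during memory-hierarchy
    // setup, whenever this controller's lower interconnect is wired directly
    // to the MemoryController.
    bool isLowestBeforeMainMemory_;

    // complete_request() could then drop the ENABLE_L3_CACHE #ifdef:
    if(isLowestBeforeMainMemory_) {
        /* Memory sends no MESI state, so just mark the line exclusive */
        queueEntry->line->state = MESI_EXCLUSIVE;
    } else {
        /* A lower cache sent its line state in message.arg */
        assert(message.arg);
        queueEntry->line->state = *((MESICacheLineState*) message.arg);
    }
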
Let me know if anything looks off in it.
-Paul
On Mon, Aug 23, 2010 at 6:13 PM, avadh patel <[email protected]> wrote:
> Hi Paul,
>
> Did you figure out the issue why the cache queue is filled up and not
> emptied?
>
> Thanks,
> Avadh
>
> On Mon, Aug 16, 2010 at 11:43 PM, DRAM Ninjas <[email protected]> wrote:
>
>> So after trying to figure out what is going on, I have a theory as to why
>> this is failing.
>>
>> It all comes down to this case:
>>
>> 793 * Message contains a valid argument that has
>> 794 * the state of the line from lower cache
>> 795 */
>> 796 queueEntry->line->state = *((MESICacheLineState*)
>> message.arg);
>>
>> In the shared L2 case the "lower cache" is main memory, which doesn't send
>> a message.arg, since the line should simply be marked exclusive.
>>
>> I guess a short-term fix would be to just check whether message.arg == NULL
>> and, if so, set the line to MESI_EXCLUSIVE. Really, though, the check should
>> be more along the lines of the #ifdef below (a combined sketch follows it):
>>
>> #ifdef ENABLE_L3_CACHE
>> if (type_ == L3_CACHE)
>> #else
>> if (type_ == L2_CACHE)
>> #endif
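>>
>> Putting the NULL check and the level check together, a rough sketch of
>> that short-term fix in complete_request() might look like this (untested):
>>
>> #ifdef ENABLE_L3_CACHE
>>     if(type_ == L3_CACHE || message.arg == NULL) {
>> #else
>>     if(type_ == L2_CACHE || message.arg == NULL) {
>> #endif
>>         /* memory is below us: no state in the message, mark it exclusive */
>>         queueEntry->line->state = MESI_EXCLUSIVE;
>>     } else {
>>         /* lower cache sent its line state in message.arg */
>>         queueEntry->line->state = *((MESICacheLineState*) message.arg);
>>     }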
>>
>>
>> Now, while this fixes the seg fault, it also stalls out the pipeline with
>> thousands of messages like:
>>
>> Adding event:Event< Signal:Bus_broadcast Clock:16882 arg:0x1e75410>
>> Executing event: Event< Signal:Bus_broadcast Clock:16882 arg:0x1e75410>
>> Bus cant do addr broadcast, pending queue full
>>
>> I will try to look at this tomorrow -- maybe finishing the broadcast isn't
>> clearing out the pendingQueue entry for some reason...?
>>
>> -Paul
>>
>> On Thu, Aug 12, 2010 at 10:32 PM, DRAM Ninjas <[email protected]> wrote:
>>
>>> The cache questions continue.
>>>
>>> I just tried to compile the current master branch and set up a shared L2
>>> cache configuration with the following simconfig file:
>>>
>>> -stats run0.stats
>>>
>>> -logfile run0.log
>>> -corefreq 2000000000
>>> -cache-config-type shared_L2
>>> -cores-per-L2 2
>>>
>>>
>>> built using 'scons c=2'
>>>
>>> I get this backtrace from GDB almost immediately:
>>>
>>> #0 0x00000000005cdbe1 in
>>> Memory::MESICache::CacheController::complete_request (this=0x260a210,
>>> message=..., queueEntry=0x260a288) at
>>> ptlsim/build/cache/mesiCache.cpp:797
>>> #1 0x00000000005d3035 in
>>> Memory::MESICache::CacheController::handle_lower_interconnect (
>>> this=0x260a210, message=...) at ptlsim/build/cache/mesiCache.cpp:349
>>> #2 0x00000000005d325d in
>>> Memory::MESICache::CacheController::handle_interconnect_cb (
>>> this=0x260a210, arg=0x256f6b0) at
>>> ptlsim/build/cache/mesiCache.cpp:835
>>> #3 0x00000000005d5178 in Memory::P2PInterconnect::controller_request_cb
>>> (this=0x1e9ba80,
>>> arg=<value optimized out>) at ptlsim/build/cache/p2p.cpp:82
>>> #4 0x00000000005c25a8 in Memory::MemoryController::wait_interconnect_cb
>>> (this=0x1ea5010,
>>> arg=0x1ea5088) at ptlsim/build/cache/memoryController.cpp:238
>>> #5 0x00000000005c29db in Memory::MemoryController::access_completed_cb
>>> (this=0x1ea5010,
>>> arg=<value optimized out>) at
>>> ptlsim/build/cache/memoryController.cpp:206
>>> #6 0x00000000005c3b1c in Memory::Event::execute (this=0x2543090)
>>> at ptlsim/cache/memoryHierarchy.h:109
>>> #7 Memory::MemoryHierarchy::clock (this=0x2543090) at
>>> ptlsim/build/cache/memoryHierarchy.cpp:380
>>> #8 0x00000000005ec37d in OutOfOrderModel::OutOfOrderMachine::run
>>> (this=0x162c2c0, config=...)
>>> at ptlsim/build/core/ooocore.cpp:2034
>>> #9 0x000000000063b476 in ptl_simulate () at
>>> ptlsim/build/sim/ptlsim.cpp:1092
>>> #10 0x00000000005a7e56 in sim_cpu_exec () at qemu/cpu-exec.c:302
>>> #11 0x0000000000419e73 in main_loop (argc=0, argv=<value optimized out>,
>>> envp=<value optimized out>)
>>> at qemu/vl.c:4246
>>> #12 main (argc=0, argv=<value optimized out>, envp=<value optimized out>)
>>> at qemu/vl.c:6234
>>>
>>>
>>> I'm going to build a debug version to see if I can track this down, but I
>>> just wanted to check whether anyone has seen this before ...
>>>
>>> Thanks,
>>> Paul
>>>
>>>
>>
>
diff --git a/ptlsim/cache/mesiBus.h b/ptlsim/cache/mesiBus.h
index bdcc6b7..f575286 100644
--- a/ptlsim/cache/mesiBus.h
+++ b/ptlsim/cache/mesiBus.h
@@ -158,7 +158,7 @@ class BusInterconnect : public Interconnect
void print(ostream& os) const {
os << "--Bus-Interconnect: ", get_name(), endl;
foreach(i, controllers.count()) {
- os << "Controller Queue: ", endl;
+ os << "Controller Queue ("<<controllers[i]->controller->get_name()<<"): ", endl;
os << controllers[i]->queue;
}
os << "Pending Request: ", pendingRequests_, endl;
diff --git a/ptlsim/cache/mesiCache.cpp b/ptlsim/cache/mesiCache.cpp
index 65d3081..fe35944 100644
--- a/ptlsim/cache/mesiCache.cpp
+++ b/ptlsim/cache/mesiCache.cpp
@@ -311,7 +311,7 @@ bool CacheController::handle_upper_interconnect(Message &message)
/* Check dependency and access the cache */
CacheQueueEntry* dependsOn = find_dependency(message.request);
- if(dependsOn) {
+ if(dependsOn && !dependsOn->annuled) {
/* Found an dependency */
memdebug("dependent entry: ", *dependsOn, endl);
dependsOn->depends = queueEntry->idx;
@@ -603,9 +603,19 @@ void CacheController::handle_local_hit(CacheQueueEntry *queueEntry)
break;
case MESI_EXCLUSIVE:
if(type == MEMORY_OP_WRITE) {
- if(isLowestPrivate_) {
- queueEntry->line->state = MESI_MODIFIED;
- queueEntry->sendTo = queueEntry->sender;
+
+ // PR: if the memory is right below us, we don't want to treat
+ // this as a miss since that causes deadlocks
+ // TODO: this silly block is used multiple times, but maybe an
+ // isLowestBeforeMainMem_ member would be useful to avoid
+ // this kludge
+#ifdef ENABLE_L3_CACHE
+ if(isLowestPrivate_ || type_ == L3_CACHE) {
+#else
+ if(isLowestPrivate_ || type_ == L2_CACHE) {
+#endif
+ queueEntry->line->state = MESI_MODIFIED;
+ queueEntry->sendTo = queueEntry->sender;
memoryHierarchy_->add_event(&waitInterconnect_,
0, queueEntry);
newState = MESI_MODIFIED;
@@ -782,7 +792,13 @@ void CacheController::complete_request(Message &message,
if(message.request->get_type() == MEMORY_OP_EVICT) {
queueEntry->line->state = MESI_INVALID;
} else {
+ //PR: if memory is right below us, there is no MESI state in this
+ // message, and the line should just be exclusive
+#ifdef ENABLE_L3_CACHE
if(type_ == L3_CACHE) {
+#else
+ if(type_ == L2_CACHE) {
+#endif
/*
* Message is from main memory simply set the
* line state as MESI_EXCLUSIVE
@@ -793,6 +809,7 @@ void CacheController::complete_request(Message &message,
* Message contains a valid argument that has
* the state of the line from lower cache
*/
+ assert(message.arg);
queueEntry->line->state = *((MESICacheLineState*)
message.arg);
}
@@ -830,6 +847,26 @@ bool CacheController::handle_interconnect_cb(void *arg)
}
if(sender == upperInterconnect_ || sender == upperInterconnect2_) {
+
+ // PR: If the message hasData, it is headed upward but is being rebroadcast
+ // to us because of the way the Bus interconnect works. In this case we
+ // shouldn't call handle(); just make sure the pending entry is gone.
+ // TODO: I believe this entry should already be gone, but I'm re-checking
+ // just to be safe; this check might be extraneous.
+
+ if (msg->hasData) {
+
+ memdebug("Ignoring rebroadcast of message to "<<get_name()<<": "<<*msg<<endl);
+ CacheQueueEntry *queueEntry = NULL;
+ foreach_list_mutable(pendingRequests_.list(), queueEntry, entry,
+ prevEntry) {
+ if(msg->request == queueEntry->request) {
+ clear_entry_cb(queueEntry);
+ }
+ }
+ return true;
+ }
+
handle_upper_interconnect(*msg);
} else {
handle_lower_interconnect(*msg);
_______________________________________________
http://www.marss86.org
Marss86-Devel mailing list
[email protected]
https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel