Note that all the caches in the system use the same code, so if you only
configure some of them as direct mapped, only those instances will be direct
mapped and the others won't be.  You need some way to decide which instances
you want to enable your modifications on and which ones you don't; the
elegant way to do that is to add a boolean parameter to the BaseCache
object.
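
For example, something along these lines might work (a rough, untested
sketch; the 'enable_dyn_excl'/'enableDynExcl' names are made up for
illustration):

***************************************************************************
// Hypothetical sketch, not existing M5 code -- all names are made up.
//
// 1) Declare a per-instance parameter in the cache's Python SimObject
//    description (Python, shown as a comment here):
//        enable_dyn_excl = Param.Bool(False, "use dynamic exclusion")
//
// 2) Give BaseCache a matching member and fill it in from the params
//    struct in the constructor:
//        bool enableDynExcl;                    // member declaration
//        enableDynExcl(p->enable_dyn_excl)      // constructor initializer
//
// 3) Guard the modified replacement code with the flag, so only the
//    instances configured with enable_dyn_excl=True change behavior and
//    every other cache (including the I/O cache) is left alone:
if (enableDynExcl) {
    // dynamic-exclusion path
} else {
    // original behavior
}
***************************************************************************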

You can't get rid of the I/O cache; it's necessary to make I/O requests work
with the coherence protocol.

Steve

On Mon, Dec 6, 2010 at 6:19 PM, Navid Farazmand <[email protected]> wrote:

> Thanks Steve.
> I have another question for you.
> I have implemented two algorithms (including the one we are talking about)
> which are based on the assumption of a direct mapped cache; I coded with
> that assumption in mind. For example, because I modified the existing code,
> I may use both sets[i].blks[0] and sets[i].blks[assoc-1] to access the
> block in a direct mapped cache.
> Neither algorithm works, even though they are pretty simple. I finally
> realized that the associativity in my simulations is set to 2, not 1.
> This might be the cause of the problems (or at least part of them).
>
> I have modified the LRU TagStore. I use '--l1i_assoc=1' and '--l1d_assoc=1',
> but these switches don't work. Does anybody know the reason and a possible
> solution?
> Is there any way to get rid of the ioCache?
>
> Regards,
> Navid.
>
>
> On Mon, Dec 6, 2010 at 6:04 PM, Steve Reinhardt <[email protected]> wrote:
>
>> Off the top of my head I don't see any reason why returning NULL from
>> allocateBlock as you described in your first email wouldn't work.  I'm not
>> sure how much that path gets exercised normally though, so if you're using
>> it heavily it's possible you're inducing some previously unknown bug there.
>>
>> As far as the panic goes, it probably means you're breaking coherence
>> somewhere and bad memory values are forcing the kernel to panic.
>> Unfortunately that's a tough one to debug; your best bet is to enable
>> tracing with '--trace-flag Cache' and see what's really happening in the
>> cycles leading up to the panic.
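>>
>> For example, something along these lines (assuming the standard m5 trace
>> options; adjust the binary, config script, and options to your setup):
>>
>>     ./build/ALPHA_FS/m5.opt --trace-flags=Cache --trace-start=7221000000 \
>>         configs/example/fs.py <your usual options>
>>
>> Starting the trace shortly before the failing cycle (7221328000 in your
>> output) keeps the trace to a manageable size.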
>>
>> Steve
>>
>> On Mon, Dec 6, 2010 at 1:40 PM, Navid Farazmand <[email protected]> wrote:
>>
>>> I'm getting the following error over and over, always at exactly the same
>>> cycle.
>>> I even modified the code: instead of returning NULL from
>>> findVictim/allocateBlock, I added a dummy block to the TagStore class.
>>> When I want to prevent address B from replacing A, I return the dummy
>>> block to handleFill for B. In the 'insertBlock' function, if I see the
>>> dummy block I skip most operations (except, for example, setting
>>> dummyBlk->tag, since it will be checked later).
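>>>
>>> A simplified sketch of that check (not the exact code; dummyBlk is the
>>> block I described adding to the TagStore):
>>>
>>> // inside insertBlock(addr, blk, id):
>>> if (blk == dummyBlk) {
>>>     // this fill is being thrown away: only record the tag so the later
>>>     // check against the dummy block still matches, and skip the rest
>>>     dummyBlk->tag = extractTag(addr);
>>>     return;
>>> }
>>> // ...the original insertBlock body runs for real blocks...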
>>>
>>>
>>> ********************************************************************************************
>>> panic: M5 panic instruction called at pc=0xfffffc00000138a0.
>>>  @ cycle 7221328000
>>> [execute:build/ALPHA_FS/arch/alpha/atomic_simple_cpu_exec.cc, line 11282]
>>> Memory Usage: 736760 KBytes
>>> For more information see: http://www.m5sim.org/panic/116b925c
>>> Program aborted at cycle 7221328000
>>>
>>> ********************************************************************************************
>>>
>>> I really need your suggestion (due to my deadline) and I appreciate your
>>> help.
>>>
>>> Regards,
>>> Navid.
>>>
>>> On Mon, Dec 6, 2010 at 12:21 PM, Navid Farazmand <[email protected]> wrote:
>>>
>>>> Hi,
>>>> I am trying to implement some cache replacement/management policies in M5.
>>>> I am having a problem with one of them, which is quite simple and
>>>> involves only a small modification.
>>>>
>>>> It's called the dynamic exclusion policy. Based on some history of the
>>>> accesses, I want to prevent certain accesses from replacing the
>>>> corresponding block.
>>>> I only need a direct mapped cache. So, let's say, normally after a miss
>>>> on B, B is fetched from the next level cache (or memory) and then
>>>> replaces A. Assume that I know I don't want B to replace A.
>>>> After a miss is satisfied from memory, the data is written into the
>>>> cache (replacing a block if needed) via 'handleFill', which is called here:
>>>>
>>>> ***************************************************************************
>>>> } else if (bus_pkt->isRead() ||
>>>>            bus_pkt->cmd == MemCmd::UpgradeResp) {
>>>>     // we're updating cache state to allow us to
>>>>     // satisfy the upstream request from the cache
>>>>     blk = handleFill(bus_pkt, blk, writebacks);
>>>>     satisfyCpuSideRequest(pkt, blk);
>>>> }
>>>>
>>>> ***************************************************************************
>>>>
>>>> And here is the relevant part of handleFill:
>>>>
>>>>
>>>>
>>>> ***************************************************************************
>>>>     if (blk == NULL) {
>>>>         // better have read new data...
>>>>         assert(pkt->hasData());
>>>>         // need to do a replacement
>>>>         blk = allocateBlock(addr, writebacks);
>>>>         if (blk == NULL) {
>>>>             // No replaceable block... just use temporary storage to
>>>>             // complete the current request and then get rid of it
>>>>             assert(!tempBlock->isValid());
>>>>             blk = tempBlock;
>>>>             tempBlock->set = tags->extractSet(addr);
>>>>             tempBlock->tag = tags->extractTag(addr);
>>>>             DPRINTF(Cache, "using temp block for %x\n", addr);
>>>>         } else {
>>>>             int id = pkt->req->hasContextId() ? pkt->req->contextId() : -1;
>>>>             tags->insertBlock(pkt->getAddr(), blk, id);
>>>>         }
>>>>
>>>>
>>>> ***************************************************************************
>>>> Since it was a miss, the first blk == NULL check is true. I thought that
>>>> whenever I want to prevent B from replacing A, I could simply return a
>>>> NULL pointer from allocateBlock.
>>>> Is this going to work? For example, assume that findVictim always returns
>>>> NULL, meaning that no replacement is carried out (cache lines/blocks are
>>>> filled only when they are empty/invalid).
>>>> Shouldn't all requests then be satisfied from memory and the simulation
>>>> run correctly? I modified findVictim to always return NULL to test this.
>>>> Neither that, nor selectively returning NULL from findVictim (and hence
>>>> from allocateBlock), works. I expected that the requests would be
>>>> satisfied using tempBlock in these cases.
>>>> If my expectation is incorrect, does anybody know how I can prevent a
>>>> replacement based on the address of the new request and the current
>>>> resident of the block?
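>>>>
>>>> (For reference, the selective version I mean looks roughly like this --
>>>> a simplified sketch, where shouldExclude() stands in for the
>>>> history-based decision:)
>>>>
>>>> ***************************************************************************
>>>> BlkType*
>>>> LRU::findVictim(Addr addr, PacketList &writebacks)
>>>> {
>>>>     unsigned set = extractSet(addr);
>>>>     BlkType *blk = sets[set].blks[assoc - 1];  // usual LRU victim
>>>>
>>>>     // dynamic exclusion: refuse to evict the current resident for this
>>>>     // incoming address; handleFill should then fall back to tempBlock
>>>>     if (blk->isValid() && shouldExclude(addr, blk))
>>>>         return NULL;
>>>>
>>>>     return blk;
>>>> }
>>>> ***************************************************************************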
>>>>
>>>> Thank you,
>>>> Navid.
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
_______________________________________________
m5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
