Dear Eliot,

Sorry for the late reply.

As far as I can tell, fence.t is such an early proposal that it only exists in
mailing list discussions and the research paper introducing it
here<https://carrv.github.io/2020/papers/CARRV2020_paper_10_Wistoff.pdf>.
Despite the name, it is more of a "temporal state fence": shared
microarchitectural state is supposed to be flushed when it executes, the idea
being that this prevents timing attacks across the fence.

As for x86 - I have seen the clflush implementation, but as far as I can tell
instructions like wbinvd are not implemented, so I was considering extending
the packet protocol rather than issuing a large number of individual line
flushes.
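
To make that concrete, the rough shape I have in mind is below. The
FullFlushReq command and the fullFlushLatency parameter are just my own names
(nothing like them exists yet as far as I know), and the early return is only
meant to illustrate the idea, so please shout if this clashes with how
BaseCache::access is supposed to be used:

// Sketch only: assumes a new MemCmd::FullFlushReq has been added to
// src/mem/packet.{hh,cc} and a fullFlushLatency parameter to the cache.
bool
BaseCache::access(PacketPtr pkt, CacheBlk *&blk, Cycles &lat,
                  PacketList &writebacks)
{
    if (pkt->cmd == MemCmd::FullFlushReq) {
        // Functionally flush: write back every dirty line, then
        // invalidate the whole cache, reusing the helpers that I
        // believe are already used when checkpointing.
        memWriteback();
        memInvalidate();

        // Charge a coarse latency for the whole-cache flush.
        lat = fullFlushLatency;

        // Treat the flush as satisfied here so it is not handled as an
        // ordinary miss further down.
        return true;
    }

    // ... existing handling for ordinary requests ...
}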

Also - RISC-V in gem5 doesn't seem to have anything like x86's micro-op
engine; instead it uses Load/Store templates which are provided with specific
memory request flags. I was wondering whether I could use this existing
structure to implement a full cache flush (like wbinvd), but I'm obviously
unfamiliar with how everything is set up in gem5.
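
For reference, the per-line machinery I think I'd otherwise be building on is
sketched below. My understanding is that a request carrying the CLEAN and
INVALIDATE flags (as the Arm cache-maintenance instructions use) ends up as a
CleanInvalidReq packet that the classic caches already know how to handle; the
helper function and its arguments here are purely illustrative, so corrections
are welcome:

#include <memory>

#include "mem/packet.hh"
#include "mem/request.hh"

// Sketch only: builds a single per-line "clean and invalidate" request,
// i.e. the line-by-line alternative to a whole-cache flush command.
PacketPtr
makeLineFlushPacket(Addr line_addr, unsigned block_size, RequestorID id)
{
    // CLEAN | INVALIDATE asks for the line to be written back and then
    // dropped; DST_POC pushes the effect out to the point of coherence.
    auto req = std::make_shared<Request>(
        line_addr, block_size,
        Request::CLEAN | Request::INVALIDATE | Request::DST_POC, id);

    // createWrite() derives the command from the request flags, which
    // for this combination should be MemCmd::CleanInvalidReq.
    return Packet::createWrite(req);
}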

Thanks,

Ethan
________________________________
From: Eliot Moss <m...@cs.umass.edu>
Sent: 28 February 2022 22:46
To: gem5 users mailing list <gem5-users@gem5.org>
Cc: Ethan Bannister <qs18...@bristol.ac.uk>
Subject: Re: [gem5-users] Modelling cache flushing on gem5 (RISC-V)

On 2/28/2022 5:26 PM, Ethan Bannister via gem5-users wrote:
 > Hi all,
 >
 > I'm currently undertaking a research project where I am implementing 
 > fence.t, a proposed fence
 > instruction for RISC-V allowing ISA access to clearing microarchitectural 
 > state, and performing
 > relatively coarse assessments of performance impact. As a result, I'm trying 
 > to implement this
 > functionality in gem5.
 >
 > It would be greatly appreciated if someone more well-versed in gem5's memory 
 > model could double
 > check some of my implementation ideas below, so I don't get caught by any 
 > gotchas.
 >
 > From what I can tell, starting with the classic cache, the most sensible
 > way to add this feature
 > is to extend the packet protocol to memory so it includes a new command, 
 > much like FlushReq, but
 > instead, for example, FullFlushReq. Then modify BaseCache::access to handle 
 > this new packet,
 > functionally handling the flush with BaseCache::memWriteback and then 
 > BaseCache::memInvalidate,
 > perhaps with some simulated latency added for the act of 'flushing' the 
 > cache. Since the instruction
 > would need to act like a memory fence (or at the very least, have no memory 
 > requests reordered past
 > it), the IsWriteBarrier and IsReadBarrier flags would be included in the ISA 
 > declaration of the
 > instruction.
 >
 > I may also need to extend Ruby to include a full cache flush instruction - 
 > I've seen other threads
 > on this list with respect to that, but if there are any recent changes or 
 > pertinent information then
 > it'd be greatly appreciated if you could let me know.
 >
 > Also - if there are any resources around on gem5's memory modelling that I 
 > might've missed, other
 > than those in the documentation, please let me know as more stuff to aid 
 > understanding is definitely
 > appreciated.

Dear Ethan - I was able to find fence.tso mentioned online, but not fence.t.

Anyway, from what I am familiar with in gem5 (and I added some custom cache
flushing behavior to an x86 model in the last 6 months), the cache hierarchy
itself is coherent.  Therefore fences need only to control the interaction
between the given cpu (hart in RISC-V terminology, I guess) and the L1 caches.
That functionality was already available in the x86 model, and since we're
talking about the micro-op engine, my guess is that it's there for RISC-V as
well.  A full fence would merely prevent issuing any ld/st ops until any ones
in progress are finished.  Again, AFAICT, it's a cpu thing, not a cache thing.

Best wishes - Eliot Moss
_______________________________________________
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org
