Reinette,

On Mon, 13 Nov 2017, Reinette Chatre wrote:
thanks for that interesting work. Before I start looking into the details
in the next days, let me ask a few general questions first.

> Cache Allocation Technology (CAT), part of Intel(R) Resource Director
> Technology (Intel(R) RDT), enables a user to specify the amount of cache
> space into which an application can fill. Cache pseudo-locking builds on
> the fact that a CPU can still read and write data pre-allocated outside
> its current allocated area on cache hit. With cache pseudo-locking data
> can be preloaded into a reserved portion of cache that no application can
> fill, and from that point on will only serve cache hits. The cache
> pseudo-locked memory is made accessible to user space where an application
> can map it into its virtual address space and thus have a region of
> memory with reduced average read latency.

Did you compare that against the good old cache coloring mechanism,
e.g. palloc?

> The cache pseudo-locking approach relies on generation-specific behavior
> of processors. It may provide benefits on certain processor generations,
> but is not guaranteed to be supported in the future.

Hmm, are you saying that the CAT mechanism might change radically in
future generations, so that access to cached data in an allocated area
which does not belong to the current executing context won't work anymore?

> It is not a guarantee that data will remain in the cache. It is not a
> guarantee that data will remain in certain levels or certain regions of
> the cache. Rather, cache pseudo-locking increases the probability that
> data will remain in a certain level of the cache via carefully
> configuring the CAT feature and carefully controlling application
> behavior.

Which kind of applications are you targeting with that?

Are there real-world use cases which actually can benefit from this, and
what are those applications supposed to do once the feature breaks with
future generations of processors?

Thanks,

	tglx