On Mon, 14 Aug 2017, Shivappa Vikas wrote:
On Mon, 14 Aug 2017, Thomas Gleixner wrote:
On Wed, 9 Aug 2017, Vikas Shivappa wrote:
@@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
 					   GFP_KERNEL);
 		if (!d->rmid_busy_llc)
 			return -ENOMEM;
+		INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
+		if (has_busy_rmid(r, d))
+			cqm_setup_limbo_handler(d);
This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
would a bit be set here?
If we logically offline all CPUs in a package and bring it back, the worker
needs to be scheduled on the package if there were busy RMIDs on that
package. Otherwise those RMIDs never get freed, as their rmid->busy stays at 1.
I needed to scan the limbo list and set the bits for all limbo RMIDs after
the alloc and before doing the 'has_busy_rmid' check. Will fix; a sketch of
that step is below.
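Something along these lines, assuming the rmid_limbo_lru list and
struct rmid_entry from the existing CQM code (the helper name and its
placement are illustrative, not the final patch):

	/*
	 * Illustrative sketch: after d->rmid_busy_llc is allocated,
	 * walk the RMIDs still in limbo and mark them in the new
	 * domain's bitmap so the has_busy_rmid() check can see them.
	 */
	static void domain_repopulate_busy_llc(struct rdt_domain *d)
	{
		struct rmid_entry *entry;

		list_for_each_entry(entry, &rmid_limbo_lru, list)
			set_bit(entry->rmid, d->rmid_busy_llc);
	}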
Tony pointed out that there is no guarantee that a domain will come back up
once it is down, so the above issue of rmid->busy staying at > 0 can still happen.
So I will delete this:

	if (has_busy_rmid(r, d))
		cqm_setup_limbo_handler(d);
and add this when a domain is powered down:

	for each rmid set in d->rmid_busy_llc
		entry = __rmid_entry(rmid)
		if (!--entry->busy)
			free_rmid(rmid)
We have no way to know whether the L3 was actually flushed (or the package
was powered off), so this may lead to incorrect counts in rare scenarios,
but we can document that limitation. A rough sketch of the teardown path is
below.
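As a minimal sketch, assuming the per-domain rmid_busy_llc bitmap plus the
__rmid_entry() helper and rmid_free_lru list from the monitoring code (the
function name and call site are hypothetical):

	/*
	 * Hypothetical teardown helper: when the last CPU of a domain
	 * goes offline, forcefully drop the busy reference each limbo
	 * RMID holds on this domain. If that was the last domain
	 * keeping the RMID busy, return it to the free list. We cannot
	 * tell whether the L3 was really flushed, so counts may be
	 * slightly off if the domain comes back, but the RMID can no
	 * longer leak.
	 */
	static void domain_free_busy_rmids(struct rdt_resource *r,
					   struct rdt_domain *d)
	{
		struct rmid_entry *entry;
		unsigned int rmid;

		for_each_set_bit(rmid, d->rmid_busy_llc, r->num_rmid) {
			entry = __rmid_entry(rmid);
			clear_bit(rmid, d->rmid_busy_llc);
			if (!--entry->busy)
				list_add_tail(&entry->list, &rmid_free_lru);
		}
	}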
Thanks,
vikas