Hi Boris,
Thank you very much for the code review.
I will fix these issues in the v2 patch.
Please find more comments inline.
Thank you.
On 4/15/2019 19:34, Borislav Petkov wrote:
On Wed, Apr 10, 2019 at 03:55:28AM +0800, Xiaochen Shen wrote:
Currently when a new resource group is created, the allocation values
of MBA resource are not initialized and remain meaningless data.
For example:
mkdir /sys/fs/resctrl/p1
cat /sys/fs/resctrl/p1/schemata
MB:0=100;1=100
echo "MB:0=10;1=20" > /sys/fs/resctrl/p1/schemata
cat /sys/fs/resctrl/p1/schemata
MB:0= 10;1= 20
rmdir /sys/fs/resctrl/p1
mkdir /sys/fs/resctrl/p2
cat /sys/fs/resctrl/p2/schemata
MB:0= 10;1= 20
When the new group is created, it is reasonable to initialize MBA
resource with default values.
Initialize MBA resource and cache resources in separate functions.
Please format your commit message by indenting the examples:
OK. Thank you.
x86/resctrl: Initialize a new resource group with default MBA values
Currently, when a new resource group is created, the allocation values
of the MBA resource are not initialized and remain meaningless data.
For example:
  mkdir /sys/fs/resctrl/p1
  cat /sys/fs/resctrl/p1/schemata
  MB:0=100;1=100
  echo "MB:0=10;1=20" > /sys/fs/resctrl/p1/schemata
  cat /sys/fs/resctrl/p1/schemata
  MB:0= 10;1= 20
  rmdir /sys/fs/resctrl/p1
  mkdir /sys/fs/resctrl/p2
  cat /sys/fs/resctrl/p2/schemata
  MB:0= 10;1= 20
Therefore, when the new group is created, it is reasonable to initialize
the MBA resource with default values.
Initialize the MBA resource and cache resources in separate functions.
Thx.
Signed-off-by: Xiaochen Shen <xiaochen.s...@intel.com>
Reviewed-by: Fenghua Yu <fenghua...@intel.com>
Reviewed-by: Reinette Chatre <reinette.cha...@intel.com>
---
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 4 +-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 139 ++++++++++++++++--------------
2 files changed, 75 insertions(+), 68 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 2dbd990..576bb6a 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -342,10 +342,10 @@ int update_domains(struct rdt_resource *r, int closid)
if (cpumask_empty(cpu_mask) || mba_sc)
goto done;
cpu = get_cpu();
- /* Update CBM on this cpu if it's in cpu_mask. */
+ /* Update resource control msr on this cpu if it's in cpu_mask. */
s/cpu/CPU/g
OK.
if (cpumask_test_cpu(cpu, cpu_mask))
rdt_ctrl_update(&msr_param);
- /* Update CBM on other cpus. */
+ /* Update resource control msr on other cpus. */
Ditto.
OK.
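i.e., in v2 both comments will read (only the word "cpu" changes; the
cpu_mask identifier stays as-is):

	/* Update resource control msr on this CPU if it's in cpu_mask. */
	/* Update resource control msr on other CPUs. */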
smp_call_function_many(cpu_mask, rdt_ctrl_update, &msr_param, 1);
put_cpu();
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 08e0333..9f12a02 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2516,8 +2516,8 @@ static void cbm_ensure_valid(u32 *_val, struct rdt_resource *r)
bitmap_clear(val, zero_bit, cbm_len - zero_bit);
}
-/**
- * rdtgroup_init_alloc - Initialize the new RDT group's allocations
+/*
+ * Initialize cache resources with default values.
*
* A new RDT group is being created on an allocation capable (CAT)
* supporting system. Set this group up to start off with all usable
@@ -2526,85 +2526,92 @@ static void cbm_ensure_valid(u32 *_val, struct rdt_resource *r)
* All-zero CBM is invalid. If there are no more shareable bits available
* on any domain then the entire allocation will fail.
*/
-static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
+static int rdtgroup_init_cat(struct rdt_resource *r, u32 closid)
{
struct rdt_resource *r_cdp = NULL;
struct rdt_domain *d_cdp = NULL;
u32 used_b = 0, unused_b = 0;
- u32 closid = rdtgrp->closid;
- struct rdt_resource *r;
unsigned long tmp_cbm;
enum rdtgrp_mode mode;
struct rdt_domain *d;
u32 peer_ctl, *ctrl;
- int i, ret;
+ int i;
- for_each_alloc_enabled_rdt_resource(r) {
+ list_for_each_entry(d, &r->domains, list) {
+ rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp);
+ d->have_new_ctrl = false;
+ d->new_ctrl = r->cache.shareable_bits;
+ used_b = r->cache.shareable_bits;
+ ctrl = d->ctrl_val;
+ for (i = 0; i < closids_supported(); i++, ctrl++) {
+ if (closid_allocated(i) && i != closid) {
+ mode = rdtgroup_mode_by_closid(i);
+ if (mode == RDT_MODE_PSEUDO_LOCKSETUP)
+ break;
+ /*
+ * If CDP is active include peer
+ * domain's usage to ensure there
+ * is no overlap with an exclusive
+ * group.
+ */
+ if (d_cdp)
+ peer_ctl = d_cdp->ctrl_val[i];
+ else
+ peer_ctl = 0;
+ used_b |= *ctrl | peer_ctl;
+ if (mode == RDT_MODE_SHAREABLE)
+ d->new_ctrl |= *ctrl | peer_ctl;
+ }
+ }
+ if (d->plr && d->plr->cbm > 0)
+ used_b |= d->plr->cbm;
+ unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1);
+ unused_b &= BIT_MASK(r->cache.cbm_len) - 1;
+ d->new_ctrl |= unused_b;
+ cbm_ensure_valid(&d->new_ctrl, r);
/*
- * Only initialize default allocations for CBM cache
- * resources
+ * Assign the u32 CBM to an unsigned long to ensure
+ * that bitmap_weight() does not access out-of-bound
+ * memory.
*/
So all this code working on a rdt_domain *d pointer could be carved out
into a separate function called something like:
__init_one_rdt_domain(d, ...)
and this will make the code more readable and save us at least 2
indentation levels.
Please do that in a preparatory patch.
Good suggestion. I will do it in the v2 patch.
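Just to confirm I understood the split correctly, here is a rough sketch of
what I plan for the preparatory patch (the exact helper name and signature
are still open; the per-domain body is the code from the hunk above):

static int __init_one_rdt_domain(struct rdt_domain *d, struct rdt_resource *r,
				 u32 closid)
{
	/*
	 * Per-domain CBM setup exactly as in the hunk above:
	 * rdt_cdp_peer_get(), walk closids_supported() to collect the
	 * used/unused bits, then cbm_ensure_valid() and the
	 * bitmap_weight() check on the resulting CBM.
	 */
	return 0;
}

static int rdtgroup_init_cat(struct rdt_resource *r, u32 closid)
{
	struct rdt_domain *d;
	int ret;

	list_for_each_entry(d, &r->domains, list) {
		ret = __init_one_rdt_domain(d, r, closid);
		if (ret < 0)
			return ret;
	}

	return 0;
}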
- if (r->rid == RDT_RESOURCE_MBA)
- continue;
Then, after having done that, it would be very obvious when you do this
above because you won't be calling that __init_one_rdt_domain() function
for an MBA anyway.
Thx.
In this patch, the MBA resource and cache resources are initialized in the
separate functions rdtgroup_init_cat() and rdtgroup_init_mba(). If
__init_one_rdt_domain() is only called from rdtgroup_init_cat(), how about
using the function name "__init_one_rdt_domain_cat()"?
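For reference, the MBA side I have in mind simply resets every domain to the
default bandwidth value, roughly like this (just a sketch; I am assuming the
software controller path keeps using MBA_MAX_MBPS as elsewhere in resctrl):

static void rdtgroup_init_mba(struct rdt_resource *r)
{
	struct rdt_domain *d;

	list_for_each_entry(d, &r->domains, list) {
		d->new_ctrl = is_mba_sc(r) ? MBA_MAX_MBPS : r->default_ctrl;
		d->have_new_ctrl = true;
	}
}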
Thank you.
Best regards,
Xiaochen