Hi Nitesh,
On Wed, Jun 10, 2020 at 12:12:23PM -0400, Nitesh Narayan Lal wrote:
> This patch set originated from one of the patches posted earlier as
> part of the "Task_isolation" mode [1] patch series by Alex Belits.
> There are only a couple of changes that I am proposing in this patch
> set compared to what Alex posted earlier.
>
>
> Context
> ===
> At a broad level, all three patches included in this patch set are
> meant to make the respective driver/library respect isolated CPUs by
> not pinning any job to them. Failing to do so can hurt latency in RT
> use-cases.
>
>
> Patches
> ===
> * Patch1:
> The first patch makes cpumask_local_spread() aware of isolated
> CPUs. It ensures that the CPUs returned by this API include only
> housekeeping CPUs; a rough sketch of the idea follows.
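>
> A simplified sketch of the idea (the actual patch also intersects
> with the online mask and handles allocation details):
>
> unsigned int cpumask_local_spread(unsigned int i, int node)
> {
>         const struct cpumask *hk_mask;
>         int cpu;
>
>         /* Only housekeeping CPUs are valid candidates. */
>         hk_mask = housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ);
>
>         /* Wrap: we always want a cpu. */
>         i %= cpumask_weight(hk_mask);
>
>         if (node == NUMA_NO_NODE) {
>                 for_each_cpu(cpu, hk_mask)
>                         if (i-- == 0)
>                                 return cpu;
>         } else {
>                 /* Node-local housekeeping CPUs first. */
>                 for_each_cpu_and(cpu, cpumask_of_node(node), hk_mask)
>                         if (i-- == 0)
>                                 return cpu;
>
>                 for_each_cpu(cpu, hk_mask) {
>                         /* Skip node-local CPUs, handled above. */
>                         if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
>                                 continue;
>                         if (i-- == 0)
>                                 return cpu;
>                 }
>         }
>         BUG();
> }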
>
> * Patch2:
> This patch ensures that a probe function called via work_on_cpu()
> does not run on an isolated CPU; a sketch of the relevant hunk
> follows.
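>
> Roughly, the node-remote branch of pci_call_probe() becomes
> something like the following sketch (node, error, and ddi already
> exist in that function; locking and CPU-hotplug protection are
> omitted here):
>
>         /* Pick an online, node-local housekeeping CPU for the probe. */
>         cpumask_var_t wq_domain_mask;
>         int cpu;
>
>         if (!zalloc_cpumask_var(&wq_domain_mask, GFP_KERNEL))
>                 return -ENOMEM;
>         cpumask_and(wq_domain_mask,
>                     housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ),
>                     cpumask_of_node(node));
>         cpu = cpumask_any_and(wq_domain_mask, cpu_online_mask);
>         free_cpumask_var(wq_domain_mask);
>
>         if (cpu < nr_cpu_ids)
>                 error = work_on_cpu(cpu, local_pci_probe, &ddi);
>         else
>                 error = local_pci_probe(&ddi);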
>
> * Patch3:
> This patch makes store_rps_map() aware of isolated CPUs so that
> RPS does not queue any jobs on an isolated CPU; the core of the
> change is sketched below.
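>
> The core of the change in store_rps_map() is along these lines
> (a sketch; mask is the cpumask parsed from the user's input):
>
>         /* Reject maps that contain only isolated CPUs. */
>         if (!cpumask_empty(mask)) {
>                 cpumask_and(mask, mask,
>                             housekeeping_cpumask(HK_FLAG_DOMAIN |
>                                                  HK_FLAG_WQ));
>                 if (cpumask_empty(mask)) {
>                         free_cpumask_var(mask);
>                         return -EINVAL;
>                 }
>         }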
>
>
> Changes
> ===
> To fix the above-mentioned issues, Alex used housekeeping_cpumask().
> The only changes that I am proposing here are:
> - Removing the dependency on CONFIG_TASK_ISOLATION that Alex proposed,
> since it is safe to rely on housekeeping_cpumask() even when there are
> no isolated CPUs, in which case we simply fall back to using all
> available CPUs in any of the above scenarios (see the sketch after
> this list).
> - Using both HK_FLAG_DOMAIN and HK_FLAG_WQ in all three patches,
> because we want the above fixes not only with isolcpus but also with
> something like systemd's CPU affinity.
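>
> For reference, housekeeping_cpumask() already degrades gracefully:
> when isolation is not configured, or the requested flags are not
> set, it simply returns cpu_possible_mask, which is why the
> CONFIG_TASK_ISOLATION dependency can be dropped. Roughly (from
> kernel/sched/isolation.c):
>
> const struct cpumask *housekeeping_cpumask(enum hk_flags flags)
> {
>         if (static_branch_unlikely(&housekeeping_overridden))
>                 if (housekeeping_flags & flags)
>                         return housekeeping_mask;
>         /* No isolation configured: fall back to all possible CPUs. */
>         return cpu_possible_mask;
> }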
>
>
> Testing
> ===
> * Patch 1:
> The fix for cpumask_local_spread() was tested by creating VFs,
> loading the iavf module, and adding a tracepoint to confirm that only
> housekeeping CPUs are picked when an appropriate isolation profile is
> set up, and that all remaining CPUs are used when no CPU isolation is
> required/configured.
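>
> (The tracepoint was only a local debug aid; a hypothetical one-liner
> such as the following in cpumask_local_spread(), just before each
> return, is enough to log the CPU that gets picked:)
>
>         /* Hypothetical debug aid, not part of the series: */
>         trace_printk("cpumask_local_spread: i=%u node=%d -> cpu=%d\n",
>                      i, node, cpu);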
>
> * Patch 2:
> To test the PCI fix, I hot-plugged a virtio-net-pci device from the
> QEMU console and forced its addition to a specific NUMA node to
> trigger the code path that includes the proposed fix, then verified
> via the tracepoint that only housekeeping CPUs are included. I
> understand that this may not be the best way to test it, so I am open
> to suggestions for testing this fix in a better way if required.
>
> * Patch 3:
> To test the fix in store_rps_map(), I tried configuring an isolated
> CPU by writing its mask to /sys/class/net/en*/queues/rx*/rps_cpus,
> which failed with 'write error: Invalid argument'. Writing a
> non-isolated CPU to rps_cpus succeeded without any error.
>
> [1]
> https://patchwork.ozlabs.org/project/netdev/patch/51102eebe62336c6a4e584c7a503553b9f90e01c.ca...@marvell.com/
>
> Alex Belits (3):
> lib: restricting cpumask_local_spread to only housekeeping CPUs
> PCI: prevent work_on_cpu's probe to execute on isolated CPUs
> net: restrict queuing of receive packets to housekeeping CPUs
>
>  drivers/pci/pci-driver.c |  5 -
>  lib/cpumask.c            | 43 +++-
>  net/core/net-sysfs.c     | 10 +-
>  3 files changed, 38 insertions(+), 20 deletions(-)
>
> --
>
Looks good to me.

The flags mechanism is not well organized: this series uses HK_FLAG_WQ
to infer that nohz_full is set, whereas HK_FLAG_WQ should indicate
that non-affined workqueue threads must not run on certain CPUs. But
that is a problem of the flags themselves (which, apparently, Frederic
wants to fix by exposing a limited number of options to users), and
not of this patch set.