> On 02/04/2020 13:10, John Garry wrote:
>> On 18/03/2020 20:53, Will Deacon wrote:
>> As for arm_smmu_cmdq_issue_cmdlist(), I do note that during the testing
>> our batch size is 1, so we're not seeing the real benefit of the
>> batching. I can't help but think that we could improve this code to try
>> to combine CMD_SYNCs for small batches.
>>
>> Anyway, let me know your thoughts or any questions. I'll have a look if
>> I get a chance for other possible bottlenecks.
>
> Did you ever get any more information on this? I don't have any SMMUv3
> hardware any more, so I can't really dig into this myself.



Hi Will,

JFYI, I added some debug to arm_smmu_cmdq_issue_cmdlist() to get some idea of what is going on, since perf annotate did not tell me much.
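
For reference, the debug was roughly along these lines - a simplified sketch only, where the wrapper split and the counter names are my own invention here; the real hack also kept separate running totals for owner vs non-owner, since the results below distinguish them:

    /* rough running totals, read back later; racy by design */
    static u64 issue_ns, issue_calls;

    static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
                                           u64 *cmds, int n, bool sync)
    {
        u64 start = ktime_get_ns();
        int ret;

        /* __arm_smmu_cmdq_issue_cmdlist() is the original function, renamed */
        ret = __arm_smmu_cmdq_issue_cmdlist(smmu, cmds, n, sync);

        issue_ns += ktime_get_ns() - start;
        issue_calls++;

        return ret;
    }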

I tested NVMe performance with and without Marc's patchset to spread LPIs for managed interrupts.

Average duration of arm_smmu_cmdq_issue_cmdlist(), mainline [all results are approximations]:
owner: 6ms
non-owner: 4ms

mainline + LPI spreading patchset:
owner: 25ms
non-owner: 22ms

For this workload, a list would be a tlbi + cmd_sync.
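
That is, one call into arm_smmu_cmdq_issue_cmdlist() as built up by the batch helpers on the DMA unmap path, roughly like this (from memory of the driver code, so the details may be a little off):

    struct arm_smmu_cmdq_batch cmds = {};
    struct arm_smmu_cmdq_ent cmd = {
        .opcode = CMDQ_OP_TLBI_NH_VA,
        /* .tlbi.asid, .tlbi.addr etc. filled in for the range */
    };

    arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);   /* the tlbi */
    arm_smmu_cmdq_batch_submit(smmu, &cmds);      /* issues the list + cmd_sync */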

Please note that the LPI spreading patchset still gives a circa 25% NVMe throughput increase. What happens there is that many more cpus get involved, which creates more inter-cpu contention; but the performance increase comes from alleviating the pressure on those previously overloaded cpus.

I also notice that, with the LPI spreading patchset, on average a cpu is an "owner" in arm_smmu_cmdq_issue_cmdlist() 1 time in 8, as opposed to 1 in 3 for mainline. This means that we're just creating longer chains of lists to be published.

But I found that, for a non-owner, the average msi cmd_sync polling time is 12ms with the LPI spreading patchset. As such, it seems to really be taking approx (12 * 2 / (8 - 1) =) ~3ms to consume a single list. This seems consistent with my finding that an owner also polls consumption for ~3ms. Without the LPI spreading patchset, polling time is approx 2ms and 3ms for owner and non-owner, respectively.
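
Spelling out that back-of-envelope model (crude, and assuming a non-owner joins a chain at a random position, so it waits for roughly half of the 7 lists that are not its own):

    average chain length           ~ 8 lists (owner 1 time in 8)
    lists a non-owner waits on     ~ (8 - 1) / 2 = 3.5
    time to consume a single list  ~ 12ms / 3.5 = (12 * 2) / (8 - 1) ~ 3.4ms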

As an experiment, I tried hacking the code to use a spinlock again for protecting the command queue, instead of the current lockless scheme - and I always saw a performance drop there, which is to be expected. But maybe we can avoid a spinlock yet still serialise production+consumption, to alleviate the long polling periods.
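
The hack was shaped like this - illustrative only; the lock and both helpers are stand-ins rather than the real driver plumbing:

    static DEFINE_SPINLOCK(cmdq_slock);

    static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
                                           u64 *cmds, int n, bool sync)
    {
        unsigned long flags;
        u32 prod;
        int ret;

        spin_lock_irqsave(&cmdq_slock, flags);

        /* sole producer: copy the commands in and publish prod */
        prod = cmdq_write_cmds(&smmu->cmdq, cmds, n, sync);

        /* consumption is serialised too: poll until cons catches up */
        ret = cmdq_poll_until_consumed(&smmu->cmdq, prod);

        spin_unlock_irqrestore(&cmdq_slock, flags);
        return ret;
    }

With this, a cpu only ever polls for its own list, but the lock cacheline just bounces between cpus instead - hence the drop.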

Let me know your thoughts.

Cheers,
John
