Re: [PATCH v9 4/8] iommu: make map and unmap take a page count, similar to flush

2020-09-17 Thread Wei Liu
On Tue, Sep 15, 2020 at 09:29:32AM +0100, Paul Durrant wrote:
> From: Paul Durrant 
> 
> At the moment iommu_map() and iommu_unmap() take a page order rather than a
> count, whereas iommu_iotlb_flush() takes a page count rather than an order.
> This patch makes them consistent with each other, opting for a page count
> since CPU page orders are not necessarily the same as those of an IOMMU.
> 
> NOTE: The 'page_count' parameter is also made an unsigned long in all the
>   aforementioned functions.
> 
> Signed-off-by: Paul Durrant 
> Reviewed-by: Jan Beulich 
> Reviewed-by: Julien Grall 

Reviewed-by: Wei Liu 



RE: [PATCH v9 4/8] iommu: make map and unmap take a page count, similar to flush

2020-09-16 Thread Tian, Kevin
> From: Paul Durrant 
> Sent: Tuesday, September 15, 2020 4:30 PM
> 
> From: Paul Durrant 
> 
> At the moment iommu_map() and iommu_unmap() take a page order rather than
> a count, whereas iommu_iotlb_flush() takes a page count rather than an
> order. This patch makes them consistent with each other, opting for a page
> count since CPU page orders are not necessarily the same as those of an
> IOMMU.
> 
> NOTE: The 'page_count' parameter is also made an unsigned long in all the
>   aforementioned functions.
> 
> Signed-off-by: Paul Durrant 
> Reviewed-by: Jan Beulich 
> Reviewed-by: Julien Grall 

Reviewed-by: Kevin Tian 



[PATCH v9 4/8] iommu: make map and unmap take a page count, similar to flush

2020-09-15 Thread Paul Durrant
From: Paul Durrant 

At the moment iommu_map() and iommu_unmap() take a page order rather than a
count, whereas iommu_iotlb_flush() takes a page count rather than an order.
This patch makes them consistent with each other, opting for a page count since
CPU page orders are not necessarily the same as those of an IOMMU.

NOTE: The 'page_count' parameter is also made an unsigned long in all the
  aforementioned functions.

Signed-off-by: Paul Durrant 
Reviewed-by: Jan Beulich 
Reviewed-by: Julien Grall 
---
Cc: Andrew Cooper 
Cc: Wei Liu 
Cc: "Roger Pau Monné" 
Cc: George Dunlap 
Cc: Ian Jackson 
Cc: Stefano Stabellini 
Cc: Jun Nakajima 
Cc: Kevin Tian 
Cc: Bertrand Marquis 
Cc: Oleksandr Tyshchenko 

v8:
 - Fix IPMMU-VMSA code too

v7:
 - Fix ARM build

v6:
 - Fix missing conversion to unsigned long in AMD code
 - Fixed unconverted call to iommu_legacy_unmap() in x86/mm.c
 - s/1ul/1 in the grant table code

v5:
 - Re-worked to just use a page count, rather than both a count and an order

v2:
 - New in v2
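
For orientation, a minimal standalone sketch (not Xen code) of what the
order -> count change buys. PAGE_ORDER_4K == 0 is an assumption here,
matching 4K CPU pages, and both helper names are invented for
illustration:

#include <stdio.h>

#define PAGE_ORDER_4K 0u /* assumed: a 4K page is order 0 */

/* Old style: extent passed as a power-of-two page order. */
static void unmap_by_order(unsigned long dfn, unsigned int order)
{
    printf("unmap dfn %#lx: %lu pages\n", dfn, 1ul << order);
}

/* New style: extent passed as an explicit page count (unsigned long),
 * which need not be a power of two and is not tied to CPU page sizes. */
static void unmap_by_count(unsigned long dfn, unsigned long page_count)
{
    printf("unmap dfn %#lx: %lu pages\n", dfn, page_count);
}

int main(void)
{
    unmap_by_order(0x1000, PAGE_ORDER_4K);        /* exactly one page */
    unmap_by_count(0x1000, 1ul << PAGE_ORDER_4K); /* the same extent */
    unmap_by_count(0x1000, 3);                    /* not expressible as an order */
    return 0;
}

The last call is the point: an arbitrary count can describe extents that no
single power-of-two order can, which is why the callers below convert with
1ul << order rather than the other way around.
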
---
 xen/arch/x86/mm.c                        |  8 --
 xen/arch/x86/mm/p2m-ept.c                |  6 ++--
 xen/arch/x86/mm/p2m-pt.c                 |  4 +--
 xen/arch/x86/mm/p2m.c                    |  5 ++--
 xen/arch/x86/x86_64/mm.c                 |  4 +--
 xen/common/grant_table.c                 |  6 ++--
 xen/drivers/passthrough/amd/iommu.h      |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c  |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/iommu.c          | 35 +++-
 xen/drivers/passthrough/vtd/iommu.c      |  4 +--
 xen/drivers/passthrough/x86/iommu.c      |  2 +-
 xen/include/xen/iommu.h                  | 12
 14 files changed, 48 insertions(+), 48 deletions(-)
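
The xen/include/xen/iommu.h hunk is not visible in the excerpt below; as a
hedged sketch of the kind of prototype change involved (parameter names and
exact signatures are assumptions, not copied from the patch):

struct domain;                               /* opaque Xen type */
typedef struct { unsigned long dfn; } dfn_t; /* stand-in for Xen's dfn_t */

/* Before (sketch): extent as a power-of-two page order. */
int iommu_unmap_old(struct domain *d, dfn_t dfn, unsigned int page_order,
                    unsigned int *flush_flags);

/* After (sketch): extent as an unsigned long page count, matching what
 * iommu_iotlb_flush() already takes. */
int iommu_unmap_new(struct domain *d, dfn_t dfn, unsigned long page_count,
                    unsigned int *flush_flags);

The widening to unsigned long matches the NOTE above: 'page_count' becomes
unsigned long in all of the affected functions.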

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 42a6dc9ba4..9447dabc50 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2441,7 +2441,7 @@ static int cleanup_page_mappings(struct page_info *page)
 
 if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
 {
-int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
+int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
 
 if ( !rc )
 rc = rc2;
@@ -2968,9 +2968,11 @@ static int _get_page_type(struct page_info *page, unsigned long type,
 mfn_t mfn = page_to_mfn(page);
 
 if ( (x & PGT_type_mask) == PGT_writable_page )
-rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
+rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
+1ul << PAGE_ORDER_4K);
 else
-rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
+rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
+  1ul << PAGE_ORDER_4K,
   IOMMUF_readable | IOMMUF_writable);
 
 if ( unlikely(rc) )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b8154a7ecc..12cf38f6eb 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -843,14 +843,14 @@ out:
  need_modify_vtd_table )
 {
 if ( iommu_use_hap_pt(d) )
-rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order),
+rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
(iommu_flags ? IOMMU_FLUSHF_added : 0) |
(vtd_pte_present ? IOMMU_FLUSHF_modified
 : 0));
 else if ( need_iommu_pt_sync(d) )
 rc = iommu_flags ?
-iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
-iommu_legacy_unmap(d, _dfn(gfn), order);
+iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
+iommu_legacy_unmap(d, _dfn(gfn), 1ul << order);
 }
 
 unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index badb26bc34..3af51be78e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -679,9 +679,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 if ( need_iommu_pt_sync(p2m->domain) &&
  (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
 rc = iommu_pte_flags
- ? iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
+ ? iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << page_order,
 iommu_pte_flags)
- : iommu_legacy_unmap(d, _dfn(gfn), page_order);
+ : iommu_legacy_unmap(d, _dfn(gfn), 1ul << page_order);
 
 /*
  * Free old intermediate tables if necessary.  This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db7bde0230..928344be30 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen