Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-19 Thread Robin Murphy

On 19/03/2019 07:59, Lu Baolu wrote:

Hi Christoph,

On 3/14/19 12:10 AM, Christoph Hellwig wrote:

On Wed, Mar 13, 2019 at 10:31:52AM +0800, Lu Baolu wrote:

Hi again,

On 3/13/19 10:04 AM, Lu Baolu wrote:

Hi,

On 3/13/19 12:38 AM, Christoph Hellwig wrote:

On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:

This adds APIs for bounce-buffered domain
map() and unmap(). The partial pages at the start and end
of a buffer will be mapped with bounce pages instead. This
enhances the security of DMA buffers by isolating them
from DMA attacks by malicious devices.


Please reuse the swiotlb code instead of reinventing it.



I just looked into the code again. At the least, we could reuse the
functions below:

swiotlb_tbl_map_single()
swiotlb_tbl_unmap_single()
swiotlb_tbl_sync_single()

Anything else?


Yes, that is probably about the level you want to reuse, given that the
next higher layer already has hooks into the direct mapping code.



I am trying to change my code to reuse swiotlb, but I found that swiotlb
might not be suitable for my case.

Below is what I got with swiotlb_map():

phy_addr    size    tlb_addr

0x167eec330 0x8     0x85dc6000
0x167eef5c0 0x40    0x85dc6800
0x167eec330 0x8     0x85dc7000
0x167eef5c0 0x40    0x85dc7800

But what I expected to get is:

phy_addr    size    tlb_addr

0x167eec330 0x8     0xA330
0x167eef5c0 0x40    0xB5c0
0x167eec330 0x8     0xC330
0x167eef5c0 0x40    0xD5c0

Here, 0xXX000 is the physical address of a bounce page.

Basically, I want a bounce page to replace a leaf page in the VT-d page
table when it maps a buffer smaller than PAGE_SIZE.


I'd imagine the thing to do would be to factor out the slot allocation 
in swiotlb_tbl_map_single() so that an IOMMU page pool/allocator can be 
hooked in as an alternative.


However we implement it, though, this should absolutely be a common 
IOMMU thing that all relevant DMA backends can opt into, and not 
specific to VT-d. I mean, it's already more or less the same concept as 
the PowerPC secure VM thing.


Robin.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-19 Thread Lu Baolu

Hi Christoph,

On 3/14/19 12:10 AM, Christoph Hellwig wrote:

On Wed, Mar 13, 2019 at 10:31:52AM +0800, Lu Baolu wrote:

Hi again,

On 3/13/19 10:04 AM, Lu Baolu wrote:

Hi,

On 3/13/19 12:38 AM, Christoph Hellwig wrote:

On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:

This adds APIs for bounce-buffered domain
map() and unmap(). The partial pages at the start and end
of a buffer will be mapped with bounce pages instead. This
enhances the security of DMA buffers by isolating them
from DMA attacks by malicious devices.


Please reuse the swiotlb code instead of reinventing it.



I just looked into the code again. At the least, we could reuse the
functions below:

swiotlb_tbl_map_single()
swiotlb_tbl_unmap_single()
swiotlb_tbl_sync_single()

Anything else?


Yes, that is probably about the level you want to reuse, given that the
next higher layer already has hooks into the direct mapping code.



I am trying to change my code to reuse swiotlb, but I found that swiotlb
might not be suitable for my case.

Below is what I got with swiotlb_map():

phy_addr    size    tlb_addr

0x167eec330 0x8     0x85dc6000
0x167eef5c0 0x40    0x85dc6800
0x167eec330 0x8     0x85dc7000
0x167eef5c0 0x40    0x85dc7800

But what I expected to get is:

phy_addr    size    tlb_addr

0x167eec330 0x8     0xA330
0x167eef5c0 0x40    0xB5c0
0x167eec330 0x8     0xC330
0x167eef5c0 0x40    0xD5c0

Here, 0xXX000 is the physical address of a bounce page.

Basically, I want a bounce page to replace a leaf page in the VT-d page
table when it maps a buffer smaller than PAGE_SIZE.

Best regards,
Lu Baolu


Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-13 Thread Lu Baolu

Hi,

On 3/14/19 12:10 AM, Christoph Hellwig wrote:

On Wed, Mar 13, 2019 at 10:31:52AM +0800, Lu Baolu wrote:

Hi again,

On 3/13/19 10:04 AM, Lu Baolu wrote:

Hi,

On 3/13/19 12:38 AM, Christoph Hellwig wrote:

On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:

This adds APIs for bounce-buffered domain
map() and unmap(). The partial pages at the start and end
of a buffer will be mapped with bounce pages instead. This
enhances the security of DMA buffers by isolating them
from DMA attacks by malicious devices.


Please reuse the swiotlb code instead of reinventing it.



I just looked into the code again. At the least, we could reuse the
functions below:

swiotlb_tbl_map_single()
swiotlb_tbl_unmap_single()
swiotlb_tbl_sync_single()

Anything else?


Yes, that is probably about the level you want to reuse, given that the
next higher layer already has hooks into the direct mapping code.



Okay. Thank you!

I will try to make this happen in v2.

Best regards,
Lu Baolu


Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-13 Thread Christoph Hellwig
On Wed, Mar 13, 2019 at 10:31:52AM +0800, Lu Baolu wrote:
> Hi again,
> 
> On 3/13/19 10:04 AM, Lu Baolu wrote:
> > Hi,
> > 
> > On 3/13/19 12:38 AM, Christoph Hellwig wrote:
> > > On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:
> > > > This adds APIs for bounce-buffered domain
> > > > map() and unmap(). The partial pages at the start and end
> > > > of a buffer will be mapped with bounce pages instead. This
> > > > enhances the security of DMA buffers by isolating them
> > > > from DMA attacks by malicious devices.
> > > 
> > > Please reuse the swiotlb code instead of reinventing it.
> > > 
> 
> I just looked into the code again. At the least, we could reuse the
> functions below:
> 
> swiotlb_tbl_map_single()
> swiotlb_tbl_unmap_single()
> swiotlb_tbl_sync_single()
> 
> Anything else?

Yes, that is probably about the level you want to reuse, given that the
next higher layer already has hooks into the direct mapping code.


Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-12 Thread Lu Baolu

Hi again,

On 3/13/19 10:04 AM, Lu Baolu wrote:

Hi,

On 3/13/19 12:38 AM, Christoph Hellwig wrote:

On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:

This adds APIs for bounce-buffered domain
map() and unmap(). The partial pages at the start and end
of a buffer will be mapped with bounce pages instead. This
enhances the security of DMA buffers by isolating them
from DMA attacks by malicious devices.


Please reuse the swiotlb code instead of reinventing it.



I just looked into the code again. At the least, we could reuse the
functions below:

swiotlb_tbl_map_single()
swiotlb_tbl_unmap_single()
swiotlb_tbl_sync_single()

Anything else?

Best regards,
Lu Baolu



I don't think we are doing the same thing as swiotlb here, but it is
always good to reuse code where possible. I considered this when writing
this code but found it hard to do. Would you mind pointing me to the
code that I could reuse?

Best regards,
Lu Baolu




Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-12 Thread Lu Baolu

Hi,

On 3/13/19 12:38 AM, Christoph Hellwig wrote:

On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:

This adds APIs for bounce-buffered domain
map() and unmap(). The partial pages at the start and end
of a buffer will be mapped with bounce pages instead. This
enhances the security of DMA buffers by isolating them
from DMA attacks by malicious devices.


Please reuse the swiotlb code instead of reinventing it.



I don't think we are doing the same thing as swiotlb here, but it is
always good to reuse code where possible. I considered this when writing
this code but found it hard to do. Would you mind pointing me to the
code that I could reuse?

Best regards,
Lu Baolu


Re: [PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-12 Thread Christoph Hellwig
On Tue, Mar 12, 2019 at 02:00:00PM +0800, Lu Baolu wrote:
> This adds APIs for bounce-buffered domain
> map() and unmap(). The partial pages at the start and end
> of a buffer will be mapped with bounce pages instead. This
> enhances the security of DMA buffers by isolating them
> from DMA attacks by malicious devices.

Please reuse the swiotlb code instead of reinventing it.


[PATCH v1 4/9] iommu/vt-d: Add bounce buffer API for domain map/unmap

2019-03-12 Thread Lu Baolu
This adds APIs for bounce-buffered domain
map() and unmap(). The partial pages at the start and end
of a buffer will be mapped with bounce pages instead. This
enhances the security of DMA buffers by isolating them
from DMA attacks by malicious devices.

Cc: Ashok Raj 
Cc: Jacob Pan 
Signed-off-by: Lu Baolu 
Tested-by: Xu Pengfei 
Tested-by: Mika Westerberg 
---
 drivers/iommu/intel-iommu.c   |   3 +
 drivers/iommu/intel-pgtable.c | 305 +-
 include/linux/intel-iommu.h   |   7 +
 3 files changed, 311 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 791261afb4a9..305731ec142e 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1724,6 +1724,7 @@ static struct dmar_domain *alloc_domain(int flags)
domain->flags = flags;
domain->has_iotlb_device = false;
INIT_LIST_HEAD(&domain->devices);
+   idr_init(&domain->bounce_idr);
 
return domain;
 }
@@ -1919,6 +1920,8 @@ static void domain_exit(struct dmar_domain *domain)
 
dma_free_pagelist(freelist);
 
+   idr_destroy(&domain->bounce_idr);
+
free_domain_mem(domain);
 }
 
diff --git a/drivers/iommu/intel-pgtable.c b/drivers/iommu/intel-pgtable.c
index ad3347d7ac1d..e8317982c5ab 100644
--- a/drivers/iommu/intel-pgtable.c
+++ b/drivers/iommu/intel-pgtable.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 
+#define MAX_BOUNCE_LIST_ENTRIES 32
+
 struct addr_walk {
int (*low)(struct dmar_domain *domain, dma_addr_t addr,
phys_addr_t paddr, size_t size,
@@ -27,6 +29,13 @@ struct addr_walk {
struct bounce_param *param);
 };
 
+struct bounce_cookie {
+   struct page *bounce_page;
+   phys_addr_t original_phys;
+   phys_addr_t bounce_phys;
+   struct list_head list;
+};
+
 /*
  * Bounce buffer support for external devices:
  *
@@ -42,6 +51,14 @@ static inline unsigned long domain_page_size(struct dmar_domain *domain)
return 1UL << __ffs(domain->domain.pgsize_bitmap);
 }
 
+/*
+ * Bounce buffer cookie lazy allocation. A list to keep the unused
+ * bounce buffer cookies with a spin lock to protect the access.
+ */
+static LIST_HEAD(bounce_list);
+static DEFINE_SPINLOCK(bounce_lock);
+static int bounce_list_entries;
+
 /* Calculate how many pages does a range of [addr, addr + size) cross. */
 static inline unsigned long
 range_nrpages(dma_addr_t addr, size_t size, unsigned long page_size)
@@ -51,10 +68,274 @@ range_nrpages(dma_addr_t addr, size_t size, unsigned long page_size)
return ALIGN((addr & offset) + size, page_size) >> __ffs(page_size);
 }
 
-int domain_walk_addr_range(const struct addr_walk *walk,
-  struct dmar_domain *domain,
-  dma_addr_t addr, phys_addr_t paddr,
-  size_t size, struct bounce_param *param)
+static int nobounce_map_middle(struct dmar_domain *domain, dma_addr_t addr,
+  phys_addr_t paddr, size_t size,
+  struct bounce_param *param)
+{
+   return domain_iomap_range(domain, addr, paddr, size, param->prot);
+}
+
+static int nobounce_unmap_middle(struct dmar_domain *domain, dma_addr_t addr,
+phys_addr_t paddr, size_t size,
+struct bounce_param *param)
+{
+   struct page **freelist = param->freelist, *new;
+
+   new = domain_iounmap_range(domain, addr, size);
+   if (new) {
+   new->freelist = *freelist;
+   *freelist = new;
+   }
+
+   return 0;
+}
+
+static inline void free_bounce_cookie(struct bounce_cookie *cookie)
+{
+   if (!cookie)
+   return;
+
+   free_page((unsigned long)page_address(cookie->bounce_page));
+   kfree(cookie);
+}
+
+static struct bounce_cookie *
+domain_get_bounce_buffer(struct dmar_domain *domain, unsigned long iova_pfn)
+{
+   struct bounce_cookie *cookie;
+   unsigned long flags;
+   int ret;
+
+   spin_lock_irqsave(&bounce_lock, flags);
+   cookie = idr_find(&domain->bounce_idr, iova_pfn);
+   if (WARN_ON(cookie)) {
+   spin_unlock_irqrestore(&bounce_lock, flags);
+   pr_warn("bounce cookie for iova_pfn 0x%lx exists\n", iova_pfn);
+
+   return NULL;
+   }
+
+   /* Check the bounce list. */
+   cookie = list_first_entry_or_null(&bounce_list,
+ struct bounce_cookie, list);
+   if (cookie) {
+   list_del_init(&cookie->list);
+   bounce_list_entries--;
+   spin_unlock_irqrestore(&bounce_lock, flags);
+   goto skip_alloc;
+   }
+   spin_unlock_irqrestore(&bounce_lock, flags);
+
+   /* We have to allocate a new cookie. */
+   cookie = kzalloc(sizeof(*cookie), GFP_ATOMIC);
+   if (!cookie)
+   return NULL;
+
+   cookie->bounce_page =