Re: [PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-05-11 Thread Logan Gunthorpe


On 2021-05-11 10:06 a.m., Don Dutile wrote:
> On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is not
>> created if the scatterlist only consists of P2PDMA pages.
>>
>> Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
>> indicate bus address segments. On unmap, P2PDMA segments are skipped
>> over when determining the start and end IOVA addresses.
>>
>> With this change, the flags variable in the dma_map_ops is
>> set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
>> P2PDMA pages.
>>
>> Signed-off-by: Logan Gunthorpe 
> So, this code prevents use of p2pdma using an IOMMU, which wasn't checked and
> short-circuited by other checks to use dma-direct?

No, not at all. This patch is adding support for P2PDMA pages for IOMMUs
that use the dma-iommu abstraction. Other arch-specific IOMMUs that
don't use the dma-iommu abstraction are left unsupported. Support would
need to be added to them, or, better yet, they should be ported to dma-iommu.

> 
> So my overall comment to this code & related comments is that it should be 
> sprinkled
> with notes like "doesn't support IOMMU" and / or "TODO" when/if IOMMU is to 
> be supported.
> Or, if IOMMU-based p2pdma isn't supported in these routines directly, 
> where/how they will be supported?
> 
>> ---
>>   drivers/iommu/dma-iommu.c | 66 ++-
>>   1 file changed, 58 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index af765c813cc8..ef49635f9819 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -20,6 +20,7 @@
>>   #include 
>>   #include 
>>   #include 
>> +#include <linux/pci-p2pdma.h>
>>   #include 
>>   #include 
>>   #include 
>> @@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct 
>> scatterlist *sg, int nents,
>>  sg_dma_address(s) = DMA_MAPPING_ERROR;
>>  sg_dma_len(s) = 0;
>>   
>> +if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
>> +if (i > 0)
>> +cur = sg_next(cur);
>> +
>> +pci_p2pdma_map_bus_segment(s, cur);
>> +count++;
>> +cur_len = 0;
>> +continue;
>> +}
>> +
>>  /*
>>   * Now fill in the real DMA data. If...
>>   * - there is a valid output segment to append to
>> @@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct 
>> scatterlist *sg,
>>  struct iova_domain *iovad = &cookie->iovad;
>>  struct scatterlist *s, *prev = NULL;
>>  int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
>> +struct dev_pagemap *pgmap = NULL;
>> +enum pci_p2pdma_map_type map_type;
>>  dma_addr_t iova;
>>  size_t iova_len = 0;
>>  unsigned long mask = dma_get_seg_boundary(dev);
>> -int i;
>> +int i, ret = 0;
>>   
>>  if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>>  iommu_deferred_attach(dev, domain))
>> @@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct 
>> scatterlist *sg,
>>  s_length = iova_align(iovad, s_length + s_iova_off);
>>  s->length = s_length;
>>   
>> +if (is_pci_p2pdma_page(sg_page(s))) {
>> +if (sg_page(s)->pgmap != pgmap) {
>> +pgmap = sg_page(s)->pgmap;
>> +map_type = pci_p2pdma_map_type(pgmap, dev,
>> +   attrs);
>> +}
>> +
>> +switch (map_type) {
>> +case PCI_P2PDMA_MAP_BUS_ADDR:
>> +/*
>> + * A zero length will be ignored by
>> + * iommu_map_sg() and then can be detected
>> + * in __finalise_sg() to actually map the
>> + * bus address.
>> + */
>> +s->length = 0;
>> +continue;
> 
>> +case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
>> +break;
> So, this 'short-circuits' the use of the IOMMU, silently?
> This seems ripe for users to enable IOMMU for secure computing reasons, and 
> using/enabling p2pdma,
> and not realizing that it isn't as secure as 1+1=2  appears to be.
> If my understanding is wrong, please point me to the Documentation or code 
> that corrects this mis-understanding.  I could have missed a warning when 
> both are enabled in a past patch set.


Yes, you've misunderstood this. Part of this dovetails with your comment
about the documentation for PCI_P2PDMA_MAP_THRU_HOST_BRIDGE.


Re: [PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-05-11 Thread Don Dutile

On 4/8/21 1:01 PM, Logan Gunthorpe wrote:

When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.

Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
indicate bus address segments. On unmap, P2PDMA segments are skipped
over when determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is
set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
P2PDMA pages.

Signed-off-by: Logan Gunthorpe 

So, this code prevents use of p2pdma using an IOMMU, which wasn't checked and
short-circuited by other checks to use dma-direct?

So my overall comment to this code & related comments is that it should be 
sprinkled
with notes like "doesn't support IOMMU" and / or "TODO" when/if IOMMU is to be 
supported.
Or, if IOMMU-based p2pdma isn't supported in these routines directly, where/how 
they will be supported?


---
  drivers/iommu/dma-iommu.c | 66 ++-
  1 file changed, 58 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..ef49635f9819 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,6 +20,7 @@
  #include 
  #include 
  #include 
+#include <linux/pci-p2pdma.h>
  #include 
  #include 
  #include 
@@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct 
scatterlist *sg, int nents,
sg_dma_address(s) = DMA_MAPPING_ERROR;
sg_dma_len(s) = 0;
  
+		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
+   if (i > 0)
+   cur = sg_next(cur);
+
+   pci_p2pdma_map_bus_segment(s, cur);
+   count++;
+   cur_len = 0;
+   continue;
+   }
+
/*
 * Now fill in the real DMA data. If...
 * - there is a valid output segment to append to
@@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
struct iova_domain *iovad = &cookie->iovad;
struct scatterlist *s, *prev = NULL;
int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
+   struct dev_pagemap *pgmap = NULL;
+   enum pci_p2pdma_map_type map_type;
dma_addr_t iova;
size_t iova_len = 0;
unsigned long mask = dma_get_seg_boundary(dev);
-   int i;
+   int i, ret = 0;
  
	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
			iommu_deferred_attach(dev, domain))
@@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
s_length = iova_align(iovad, s_length + s_iova_off);
s->length = s_length;
  
+		if (is_pci_p2pdma_page(sg_page(s))) {
+   if (sg_page(s)->pgmap != pgmap) {
+   pgmap = sg_page(s)->pgmap;
+   map_type = pci_p2pdma_map_type(pgmap, dev,
+  attrs);
+   }
+
+   switch (map_type) {
+   case PCI_P2PDMA_MAP_BUS_ADDR:
+   /*
+* A zero length will be ignored by
+* iommu_map_sg() and then can be detected
+* in __finalise_sg() to actually map the
+* bus address.
+*/
+   s->length = 0;
+   continue;
+   case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+   break;

So, this 'short-circuits' the use of the IOMMU, silently?
This seems ripe for users to enable IOMMU for secure computing reasons, and 
using/enabling p2pdma,
and not realizing that it isn't as secure as 1+1=2  appears to be.
If my understanding is wrong, please point me to the Documentation or code that 
corrects this mis-understanding.  I could have missed a warning when both are 
enabled in a past patch set.
Thanks.
--dd

+   default:
+   ret = -EREMOTEIO;
+   goto out_restore_sg;
+   }
+   }
+
/*
 * Due to the alignment of our single IOVA allocation, we can
 * depend on these assumptions about the segment boundary mask:
@@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
prev = s;
}
  
+	if (!iova_len)
+   return __finalise_sg(dev, sg, nents, 0);
+
iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), 

Re: [PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-05-06 Thread Logan Gunthorpe
Sorry, I think I missed responding to this one so here are the answers:

On 2021-05-02 7:14 p.m., John Hubbard wrote:
> On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is not
>> created if the scatterlist only consists of P2PDMA pages.
>>
>> Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
>> indicate bus address segments. On unmap, P2PDMA segments are skipped
>> over when determining the start and end IOVA addresses.
>>
>> With this change, the flags variable in the dma_map_ops is
>> set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
>> P2PDMA pages.
>>
>> Signed-off-by: Logan Gunthorpe 
>> ---
>>   drivers/iommu/dma-iommu.c | 66 ++-
>>   1 file changed, 58 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index af765c813cc8..ef49635f9819 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -20,6 +20,7 @@
>>   #include 
>>   #include 
>>   #include 
>> +#include <linux/pci-p2pdma.h>
>>   #include 
>>   #include 
>>   #include 
>> @@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev,
>> struct scatterlist *sg, int nents,
>>   sg_dma_address(s) = DMA_MAPPING_ERROR;
>>   sg_dma_len(s) = 0;
>>   +    if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
> 
> Newbie question: I'm in the dark as to why the !s_iova_len check is there,
> can you please enlighten me?

The loop in iommu_dma_map_sg() will decide what to do with P2PDMA pages.
If a page is to be mapped with the bus address, it will set s_iova_len
to zero so that no space is allocated in the IOVA. If it is to be mapped
through the host bridge, it will leave s_iova_len alone and create the
appropriate mapping with the CPU physical address.

This condition notices that s_iova_len was set to zero and fills in a SG
segment with the PCI bus address for that region.


> 
>> +    if (i > 0)
>> +    cur = sg_next(cur);
>> +
>> +    pci_p2pdma_map_bus_segment(s, cur);
>> +    count++;
>> +    cur_len = 0;
>> +    continue;
>> +    }
>> +
> 
> This is really an if/else condition. And arguably, it would be better
> to split out two subroutines, and call one or the other depending on
> the result of if is_pci_p2pdma_page(), instead of this "continue" approach.

I really disagree here. Putting the exceptional condition in its own if
statement and leaving the normal case un-indented is easier to read and
understand. It also saves an extra level of indentation in code that is
already starting to look a little squished.


>>   /*
>>    * Now fill in the real DMA data. If...
>>    * - there is a valid output segment to append to
>> @@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev,
>> struct scatterlist *sg,
>>   struct iova_domain *iovad = &cookie->iovad;
>>   struct scatterlist *s, *prev = NULL;
>>   int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
>> +    struct dev_pagemap *pgmap = NULL;
>> +    enum pci_p2pdma_map_type map_type;
>>   dma_addr_t iova;
>>   size_t iova_len = 0;
>>   unsigned long mask = dma_get_seg_boundary(dev);
>> -    int i;
>> +    int i, ret = 0;
>>     if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>>   iommu_deferred_attach(dev, domain))
>> @@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev,
>> struct scatterlist *sg,
>>   s_length = iova_align(iovad, s_length + s_iova_off);
>>   s->length = s_length;
>>   +    if (is_pci_p2pdma_page(sg_page(s))) {
>> +    if (sg_page(s)->pgmap != pgmap) {
>> +    pgmap = sg_page(s)->pgmap;
>> +    map_type = pci_p2pdma_map_type(pgmap, dev,
>> +   attrs);
>> +    }
>> +
>> +    switch (map_type) {
>> +    case PCI_P2PDMA_MAP_BUS_ADDR:
>> +    /*
>> + * A zero length will be ignored by
>> + * iommu_map_sg() and then can be detected
>> + * in __finalise_sg() to actually map the
>> + * bus address.
>> + */
>> +    s->length = 0;
>> +    continue;
>> +    case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
>> +    break;
>> +    default:
>> +    ret = -EREMOTEIO;
>> +    goto out_restore_sg;
>> +    }
>> +    }
>> +
>>   /*
>>    * Due to the alignment of our single IOVA allocation, we can
>>    * depend on these assumptions about the segment boundary mask:
>> @@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev,
>> struct scatterlist *sg,
>>   prev = s;
>>   }
>

Re: [PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-05-02 Thread John Hubbard

On 4/8/21 10:01 AM, Logan Gunthorpe wrote:

When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.

Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
indicate bus address segments. On unmap, P2PDMA segments are skipped
over when determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is
set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
P2PDMA pages.

Signed-off-by: Logan Gunthorpe 
---
  drivers/iommu/dma-iommu.c | 66 ++-
  1 file changed, 58 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..ef49635f9819 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,6 +20,7 @@
  #include 
  #include 
  #include 
+#include <linux/pci-p2pdma.h>
  #include 
  #include 
  #include 
@@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct 
scatterlist *sg, int nents,
sg_dma_address(s) = DMA_MAPPING_ERROR;
sg_dma_len(s) = 0;
  
+		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {


Newbie question: I'm in the dark as to why the !s_iova_len check is there,
can you please enlighten me?


+   if (i > 0)
+   cur = sg_next(cur);
+
+   pci_p2pdma_map_bus_segment(s, cur);
+   count++;
+   cur_len = 0;
+   continue;
+   }
+


This is really an if/else condition. And arguably, it would be better
to split out two subroutines, and call one or the other depending on
the result of if is_pci_p2pdma_page(), instead of this "continue" approach.


/*
 * Now fill in the real DMA data. If...
 * - there is a valid output segment to append to
@@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
struct iova_domain *iovad = &cookie->iovad;
struct scatterlist *s, *prev = NULL;
int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
+   struct dev_pagemap *pgmap = NULL;
+   enum pci_p2pdma_map_type map_type;
dma_addr_t iova;
size_t iova_len = 0;
unsigned long mask = dma_get_seg_boundary(dev);
-   int i;
+   int i, ret = 0;
  
	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
			iommu_deferred_attach(dev, domain))
@@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
s_length = iova_align(iovad, s_length + s_iova_off);
s->length = s_length;
  
+		if (is_pci_p2pdma_page(sg_page(s))) {
+   if (sg_page(s)->pgmap != pgmap) {
+   pgmap = sg_page(s)->pgmap;
+   map_type = pci_p2pdma_map_type(pgmap, dev,
+  attrs);
+   }
+
+   switch (map_type) {
+   case PCI_P2PDMA_MAP_BUS_ADDR:
+   /*
+* A zero length will be ignored by
+* iommu_map_sg() and then can be detected
+* in __finalise_sg() to actually map the
+* bus address.
+*/
+   s->length = 0;
+   continue;
+   case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+   break;
+   default:
+   ret = -EREMOTEIO;
+   goto out_restore_sg;
+   }
+   }
+
/*
 * Due to the alignment of our single IOVA allocation, we can
 * depend on these assumptions about the segment boundary mask:
@@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
prev = s;
}
  
+	if (!iova_len)
+   return __finalise_sg(dev, sg, nents, 0);
+


ohhh, we're really slicing up this function pretty severely, what with the
continue and the early out and several other control flow changes. I think
it would be better to spend some time factoring this function into two
cases, now that you're adding a second case for PCI P2PDMA. Roughly,
two subroutines would do it.

As it is, this leaves behind a routine that is extremely hard to mentally
verify as correct.


thanks,
--
John Hubbard
NVIDIA


iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
if (!iova)
goto out_restore_sg;
@@ -1032

Re: [PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-04-27 Thread Logan Gunthorpe



On 2021-04-27 1:43 p.m., Jason Gunthorpe wrote:
> On Thu, Apr 08, 2021 at 11:01:18AM -0600, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is not
>> created if the scatterlist only consists of P2PDMA pages.
> 
> I expect P2P to work with systems that use ATS, so we'd want to see
> those systems have the IOMMU programmed with the bus address.

Oh, the paragraph you quote isn't quite as clear as it could be. The bus
address is only used in specific circumstances depending on how the
P2PDMA core code figures the addresses should be mapped (see the
documentation for upstream_bridge_distance()). The P2PDMA code
currently doesn't have any provisions for ATS (I haven't had access to
any such hardware), but I'm sure it wouldn't be too hard to add.

Logan
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-04-27 Thread Jason Gunthorpe
On Thu, Apr 08, 2021 at 11:01:18AM -0600, Logan Gunthorpe wrote:
> When a PCI P2PDMA page is seen, set the IOVA length of the segment
> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
> apply the appropriate bus address to the segment. The IOVA is not
> created if the scatterlist only consists of P2PDMA pages.

I expect P2P to work with systems that use ATS, so we'd want to see
those systems have the IOMMU programmed with the bus address.

Is it OK like this because the other logic prohibits all PCI cases
that would lean on the IOMMU, like ATS, hairpinning through the root
port, or transiting the root complex?

If yes, the code deserves a big comment explaining this is incomplete,
and I'd want to know we can finish this to include ATS at least based
on this series.

Jason


[PATCH 11/16] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

2021-04-08 Thread Logan Gunthorpe
When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.

Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to
indicate bus address segments. On unmap, P2PDMA segments are skipped
over when determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is
set to DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for
P2PDMA pages.

Signed-off-by: Logan Gunthorpe 
---
 drivers/iommu/dma-iommu.c | 66 ++-
 1 file changed, 58 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..ef49635f9819 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include <linux/pci-p2pdma.h>
 #include 
 #include 
 #include 
@@ -864,6 +865,16 @@ static int __finalise_sg(struct device *dev, struct 
scatterlist *sg, int nents,
sg_dma_address(s) = DMA_MAPPING_ERROR;
sg_dma_len(s) = 0;
 
+   if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
+   if (i > 0)
+   cur = sg_next(cur);
+
+   pci_p2pdma_map_bus_segment(s, cur);
+   count++;
+   cur_len = 0;
+   continue;
+   }
+
/*
 * Now fill in the real DMA data. If...
 * - there is a valid output segment to append to
@@ -961,10 +972,12 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
struct iova_domain *iovad = &cookie->iovad;
struct scatterlist *s, *prev = NULL;
int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
+   struct dev_pagemap *pgmap = NULL;
+   enum pci_p2pdma_map_type map_type;
dma_addr_t iova;
size_t iova_len = 0;
unsigned long mask = dma_get_seg_boundary(dev);
-   int i;
+   int i, ret = 0;
 
if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
iommu_deferred_attach(dev, domain))
@@ -993,6 +1006,31 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
s_length = iova_align(iovad, s_length + s_iova_off);
s->length = s_length;
 
+   if (is_pci_p2pdma_page(sg_page(s))) {
+   if (sg_page(s)->pgmap != pgmap) {
+   pgmap = sg_page(s)->pgmap;
+   map_type = pci_p2pdma_map_type(pgmap, dev,
+  attrs);
+   }
+
+   switch (map_type) {
+   case PCI_P2PDMA_MAP_BUS_ADDR:
+   /*
+* A zero length will be ignored by
+* iommu_map_sg() and then can be detected
+* in __finalise_sg() to actually map the
+* bus address.
+*/
+   s->length = 0;
+   continue;
+   case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+   break;
+   default:
+   ret = -EREMOTEIO;
+   goto out_restore_sg;
+   }
+   }
+
/*
 * Due to the alignment of our single IOVA allocation, we can
 * depend on these assumptions about the segment boundary mask:
@@ -1015,6 +1053,9 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
prev = s;
}
 
+   if (!iova_len)
+   return __finalise_sg(dev, sg, nents, 0);
+
iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
if (!iova)
goto out_restore_sg;
@@ -1032,13 +1073,13 @@ static int iommu_dma_map_sg(struct device *dev, struct 
scatterlist *sg,
iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
__invalidate_sg(sg, nents);
-   return 0;
+   return ret;
 }
 
 static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir, unsigned long attrs)
 {
-   dma_addr_t start, end;
+   dma_addr_t end, start = DMA_MAPPING_ERROR;
struct scatterlist *tmp;
int i;
 
@@ -1054,14 +1095,22 @@ static void iommu_dma_unmap_sg(struct device *dev, 
struct scatterlist *sg,
 * The scatterlist segments are mapped into a single
 * contiguous IOVA allocation, so this is incredibly easy.
 */
-   start = sg_dma_addre