Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread Christoph Hellwig
On Thu, Aug 06, 2020 at 08:47:55PM +0200, Nicolas Saenz Julienne wrote:
> There is no guarantee to CMA's placement, so allocating a zone specific
> atomic pool from CMA might return memory from a completely different
> memory zone. To get around this double check CMA's placement before
> allocating from it.

As the buildbot pointed out, memblock_start_of_DRAM can't be used from
non-__init code.  But looking at it I think throwing that in
is bogus anyway, as cma_get_base returns a proper physical address
already.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI device in RMRR

2020-08-06 Thread FelixCui-oc
Hi baolu,
If I understand correctly, you want to move the following 
processing code from acpi_device_create_direct_mappings() into 
probe_acpi_namespace_devices(), right?
If so, I think that's OK. 

if (pn_dev == NULL) {
	acpi_device->bus->iommu_ops = &intel_iommu_ops;
	ret = iommu_probe_device(acpi_device);
	if (ret) {
		pr_err("acpi_device probe fail! ret:%d\n", ret);
		return ret;
	}
	return 0;
}

Best regards
Felix cui-oc





-----Original Message-----
From: Lu Baolu  
Sent: August 7, 2020 9:08
To: FelixCui-oc ; Joerg Roedel ; 
iommu@lists.linux-foundation.org; linux-ker...@vger.kernel.org; David Woodhouse 

Cc: baolu...@linux.intel.com; RaymondPang-oc ; 
CobeChen-oc 
Subject: Re: Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI device 
in RMRR

Hi Felix,

On 2020/8/6 14:51, FelixCui-oc wrote:
> Hi  baolu,
>   >Sure. Before that, let me sync my understanding with you. You 
> have an acpi namespace device in ANDD table, it also shows up in the device 
> scope of a RMRR.
>   >Current code doesn't enumerate that device for the RMRR, hence 
> iommu_create_device_direct_mappings() doesn't work for this device.
> 
>   >At the same time, probe_acpi_namespace_devices() doesn't work 
> for this device, hence you want to add a home-made
>   >acpi_device_create_direct_mappings() helper.
> 
>   Your understanding is right.
>   But there is a problem that even if the namespace device in 
> rmrr is enumerated in the code, probe_acpi_namespace_devices() also doesn't 
> work for this device.
>   This is because the dev parameter of the 
> iommu_create_device_direct_mappings() is not the namespace device in RMRR.
>   The actual parameter passed in is the namespace device's 
> physical node device.
>   In iommu_create_device_direct_mappings(), the physical node 
> device passed in cannot match the namespace device in rmrr->device[], right?
>   We need the acpi_device_create_direct_mappings() helper?
> 
>   In addition, adev->physical_node_list is related to the __HID 
> of namespace device reported by the bios.
>   For example, if the __HID reported by the bios belongs to 
> acpi_pnp_device_ids[], adev->physical_node_list has no devices.
>   So in acpi_device_create_direct_mappings(), I added the case 
> that adev->physical_node_list is empty.

Got you. Thanks!

Have you ever tried to have probe_acpi_namespace_devices() handle the case of 
empty adev->physical_node_list at the same time?

Best regards,
baolu

> 
> 
> Best regards
> Felix cui
> 
> 
>   
> 
> -----Original Message-----
> From: Lu Baolu 
> Sent: August 6, 2020 10:36
> To: FelixCui-oc ; Joerg Roedel 
> ; iommu@lists.linux-foundation.org; 
> linux-ker...@vger.kernel.org; David Woodhouse 
> Cc: baolu...@linux.intel.com; RaymondPang-oc 
> ; CobeChen-oc 
> Subject: Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI 
> device in RMRR
> 
> Hi Felix,
> 
> On 8/5/20 3:37 PM, FelixCui-oc wrote:
>> Hi baolu,
>>  Let me talk about why acpi_device_create_direct_mappings() is 
>> needed and please tell me if there is an error.
> 
> Sure. Before that, let me sync my understanding with you. You have an acpi 
> namespace device in ANDD table, it also shows up in the device scope of a 
> RMRR. Current code doesn't enumerate that device for the RMRR, hence 
> iommu_create_device_direct_mappings() doesn't work for this device.
> 
> At the same time, probe_acpi_namespace_devices() doesn't work for this 
> device, hence you want to add a home-made
> acpi_device_create_direct_mappings() helper.
> 
> Did I get it right?
> 
>>  In the probe_acpi_namespace_devices() function, only the device 
>> in the addev->physical_node_list is probed,
>>  but we need to establish identity mapping for the namespace 
>> device in RMRR. These are two different devices.
> 
> The namespace device has been probed and put in one drhd's device list.
> Hence, it should be processed by probe_acpi_namespace_devices(). So the 
> question is why there are no devices in addev->physical_node_list?
> 
>>  Therefore, the namespace device in RMRR is not mapped in 
>> probe_acpi_namespace_devices().
>>  acpi_device_create_direct_mappings() is to create direct 
>> mappings for namespace devices in RMRR.
> 
> Best regards,
> baolu
> 

Re: [PATCH 11/21] iommu/mediatek: Add power-domain operation

2020-08-06 Thread Yong Wu
On Mon, 2020-07-27 at 16:49 +0800, chao hao wrote:
> On Sat, 2020-07-11 at 14:48 +0800, Yong Wu wrote:
> > In the previous SoC, the M4U HW is in the EMI power domain which is
> > always on. the latest M4U is in the display power domain which may be
> > turned on/off, thus we have to add pm_runtime interface for it.
> > 
> > we should enable its power before M4U hw initial. and disable it after HW
> > initialize.
> > 
> > When the engine work, the engine always enable the power and clocks for
> > smi-larb/smi-common, then the M4U's power will always be powered on
> > automatically via the device link with smi-common.
> > 
> > Note: we don't enable the M4U power in iommu_map/unmap for tlb flush.
> > If its power already is on, of course it is ok. if the power is off,
> > the main tlb will be reset while M4U power on, thus the tlb flush while
> > m4u power off is unnecessary, just skip it.
> > 
> > Signed-off-by: Yong Wu 

...

> >  
> > if (data->plat_data->m4u_plat == M4U_MT8173) {
> > @@ -728,7 +756,15 @@ static int mtk_iommu_probe(struct platform_device 
> > *pdev)
> >  
> > platform_set_drvdata(pdev, data);
> >  
> > +   if (dev->pm_domain)
> > +   pm_runtime_enable(dev);
> 
> hi yong,
> 
> If you put "pm_runtime_enable" here, the device link with smi_common
> from the previous patch may not be created, because of:
> 	if (i || !pm_runtime_enabled(dev))
> 		continue;
> 
> Should it be moved up front?

Thanks for review. My fault here. I will fix it.

> 
> best regards,
> chao



Re: Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI device in RMRR

2020-08-06 Thread Lu Baolu

Hi Felix,

On 2020/8/6 14:51, FelixCui-oc wrote:

Hi  baolu,
>Sure. Before that, let me sync my understanding with you. You 
have an acpi namespace device in ANDD table, it also shows up in the device scope 
of a RMRR.
>Current code doesn't enumerate that device for the RMRR, hence 
iommu_create_device_direct_mappings() doesn't work for this device.

>At the same time, probe_acpi_namespace_devices() doesn't work 
for this device, hence you want to add a home-made
>acpi_device_create_direct_mappings() helper.

Your understanding is right.
But there is a problem that even if the namespace device in 
rmrr is enumerated in the code, probe_acpi_namespace_devices() also doesn't 
work for this device.
This is because the dev parameter of the 
iommu_create_device_direct_mappings() is not the namespace device in RMRR.
The actual parameter passed in is the namespace device's 
physical node device.
In iommu_create_device_direct_mappings(), the physical node device 
passed in cannot match the namespace device in rmrr->device[], right?
We need the acpi_device_create_direct_mappings() helper?

In addition, adev->physical_node_list is related to the __HID 
of namespace device reported by the bios.
For example, if the __HID reported by the bios belongs to 
acpi_pnp_device_ids[], adev->physical_node_list has no devices.
So in acpi_device_create_direct_mappings(), I added the case that 
adev->physical_node_list is empty.


Got you. Thanks!

Have you ever tried to have probe_acpi_namespace_devices() handle the
case of empty adev->physical_node_list at the same time?

Best regards,
baolu




Best regards
Felix cui


  


-----Original Message-----
From: Lu Baolu 
Sent: August 6, 2020 10:36
To: FelixCui-oc ; Joerg Roedel ; 
iommu@lists.linux-foundation.org; linux-ker...@vger.kernel.org; David Woodhouse 

Cc: baolu...@linux.intel.com; RaymondPang-oc ; 
CobeChen-oc 
Subject: Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI device in 
RMRR

Hi Felix,

On 8/5/20 3:37 PM, FelixCui-oc wrote:

Hi baolu,
Let me talk about why acpi_device_create_direct_mappings() is 
needed and please tell me if there is an error.


Sure. Before that, let me sync my understanding with you. You have an acpi 
namespace device in ANDD table, it also shows up in the device scope of a RMRR. 
Current code doesn't enumerate that device for the RMRR, hence 
iommu_create_device_direct_mappings() doesn't work for this device.

At the same time, probe_acpi_namespace_devices() doesn't work for this device, 
hence you want to add a home-made
acpi_device_create_direct_mappings() helper.

Did I get it right?


In the probe_acpi_namespace_devices() function, only the device in 
the addev->physical_node_list is probed,
but we need to establish identity mapping for the namespace 
device in RMRR. These are two different devices.


The namespace device has been probed and put in one drhd's device list.
Hence, it should be processed by probe_acpi_namespace_devices(). So the question 
is why there are no devices in addev->physical_node_list?


Therefore, the namespace device in RMRR is not mapped in 
probe_acpi_namespace_devices().
acpi_device_create_direct_mappings() is to create direct 
mappings for namespace devices in RMRR.


Best regards,
baolu



Re: [PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread kernel test robot
Hi Nicolas,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iommu/next]
[also build test WARNING on linus/master v5.8]
[cannot apply to next-20200806]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Nicolas-Saenz-Julienne/dma-pool-fixes/20200807-025101
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: x86_64-randconfig-s031-20200806 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.2-117-g8c7aee71-dirty
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> WARNING: modpost: vmlinux.o(.text+0x2840fa): Section mismatch in reference 
>> from the function atomic_pool_expand() to the function 
>> .meminit.text:memblock_start_of_DRAM()
The function atomic_pool_expand() references
the function __meminit memblock_start_of_DRAM().
This is often because atomic_pool_expand lacks a __meminit
annotation or the annotation of memblock_start_of_DRAM is wrong.

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



[PATCH v3 1/2] dma-pool: fix coherent pool allocations for IOMMU mappings

2020-08-06 Thread Nicolas Saenz Julienne
From: Christoph Hellwig 

When allocating coherent pool memory for an IOMMU mapping we don't care
about the DMA mask.  Move the guess for the initial GFP mask into the
dma_direct_alloc_pages and pass dma_coherent_ok as a function pointer
argument so that it doesn't get applied to the IOMMU case.

Signed-off-by: Christoph Hellwig 
---

Changes since v1:
  - Check if phys_addr_ok() exists prior calling it

 drivers/iommu/dma-iommu.c   |   4 +-
 include/linux/dma-direct.h  |   3 -
 include/linux/dma-mapping.h |   5 +-
 kernel/dma/direct.c |  13 ++--
 kernel/dma/pool.c   | 114 +++-
 5 files changed, 62 insertions(+), 77 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4959f5df21bd..5141d49a046b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1035,8 +1035,8 @@ static void *iommu_dma_alloc(struct device *dev, size_t 
size,
 
if (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
!gfpflags_allow_blocking(gfp) && !coherent)
-   cpu_addr = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page,
-  gfp);
+   page = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &cpu_addr,
+  gfp, NULL);
 else
 cpu_addr = iommu_dma_alloc_pages(dev, size, &page, gfp, attrs);
if (!cpu_addr)
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 5a3ce2a24794..6e87225600ae 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -73,9 +73,6 @@ static inline bool dma_capable(struct device *dev, dma_addr_t 
addr, size_t size,
 }
 
 u64 dma_direct_get_required_mask(struct device *dev);
-gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
- u64 *phys_mask);
-bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size);
 void *dma_direct_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t gfp, unsigned long attrs);
 void dma_direct_free(struct device *dev, size_t size, void *cpu_addr,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 016b96b384bd..52635e91143b 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -522,8 +522,9 @@ void *dma_common_pages_remap(struct page **pages, size_t 
size,
pgprot_t prot, const void *caller);
 void dma_common_free_remap(void *cpu_addr, size_t size);
 
-void *dma_alloc_from_pool(struct device *dev, size_t size,
- struct page **ret_page, gfp_t flags);
+struct page *dma_alloc_from_pool(struct device *dev, size_t size,
+   void **cpu_addr, gfp_t flags,
+   bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t));
 bool dma_free_from_pool(struct device *dev, void *start, size_t size);
 
 int
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index bb0041e99659..db6ef07aec3b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -43,7 +43,7 @@ u64 dma_direct_get_required_mask(struct device *dev)
return (1ULL << (fls64(max_dma) - 1)) * 2 - 1;
 }
 
-gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
+static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
  u64 *phys_limit)
 {
u64 dma_limit = min_not_zero(dma_mask, dev->bus_dma_limit);
@@ -68,7 +68,7 @@ gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 
dma_mask,
return 0;
 }
 
-bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
+static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 {
return phys_to_dma_direct(dev, phys) + size - 1 <=
min_not_zero(dev->coherent_dma_mask, 
dev->bus_dma_limit);
@@ -161,8 +161,13 @@ void *dma_direct_alloc_pages(struct device *dev, size_t 
size,
size = PAGE_ALIGN(size);
 
if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
-   ret = dma_alloc_from_pool(dev, size, &page, gfp);
-   if (!ret)
+   u64 phys_mask;
+
+   gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
+   &phys_mask);
+   page = dma_alloc_from_pool(dev, size, &ret, gfp,
+   dma_coherent_ok);
+   if (!page)
return NULL;
goto done;
}
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 6bc74a2d5127..5d071d4a3cba 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -196,93 +196,75 @@ static int __init dma_atomic_pool_init(void)
 }
 postcore_initcall(dma_atomic_pool_init);
 
-static inline struct gen_pool *dma_guess_pool_from_device(struct device *dev)
+static inline struct gen_pool *dma_guess_pool(struct gen_pool *prev, gfp_t gfp)
 {
-   u64 phys_mask;
-   gfp_t gfp;
-
-   gfp = 

[PATCH v3 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread Nicolas Saenz Julienne
There is no guarantee to CMA's placement, so allocating a zone specific
atomic pool from CMA might return memory from a completely different
memory zone. To get around this double check CMA's placement before
allocating from it.

Signed-off-by: Nicolas Saenz Julienne 
[hch: rebased, added a fallback to the page allocator, allow dipping into
  lower CMA pools]
Signed-off-by: Christoph Hellwig 
---

Changes since v2:
 - Go back to v1 behavior

 kernel/dma/pool.c | 31 ++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 5d071d4a3cba..30d28d761490 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -3,7 +3,9 @@
  * Copyright (C) 2012 ARM Ltd.
  * Copyright (C) 2020 Google LLC
  */
+#include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -55,6 +57,29 @@ static void dma_atomic_pool_size_add(gfp_t gfp, size_t size)
pool_size_kernel += size;
 }
 
+static bool cma_in_zone(gfp_t gfp)
+{
+   unsigned long size;
+   phys_addr_t end;
+   struct cma *cma;
+
+   cma = dev_get_cma_area(NULL);
+   if (!cma)
+   return false;
+
+   size = cma_get_size(cma);
+   if (!size)
+   return false;
+
+   /* CMA can't cross zone boundaries, see cma_activate_area() */
+   end = cma_get_base(cma) - memblock_start_of_DRAM() + size - 1;
+   if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
+   return end <= DMA_BIT_MASK(zone_dma_bits);
+   if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
+   return end <= DMA_BIT_MASK(32);
+   return true;
+}
+
 static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
  gfp_t gfp)
 {
@@ -68,7 +93,11 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t 
pool_size,
 
do {
pool_size = 1 << (PAGE_SHIFT + order);
-   page = alloc_pages(gfp, order);
+   if (cma_in_zone(gfp))
+   page = dma_alloc_from_contiguous(NULL, 1 << order,
+order, false);
+   if (!page)
+   page = alloc_pages(gfp, order);
} while (!page && order-- > 0);
if (!page)
goto out;
-- 
2.28.0



[PATCH v3 0/2] dma-pool fixes

2020-08-06 Thread Nicolas Saenz Julienne
Now that we have an explanation for Amir's issue, we can re-spin this
series.

---
Changes since v2:
 - Go back to v1's behavior for patch #2

Changes since v1:
 - Make cma_in_zone() more strict, GFP_KERNEL doesn't default to true now

 - Check if phys_addr_ok() exists prior calling it

Christoph Hellwig (1):
  dma-pool: fix coherent pool allocations for IOMMU mappings

Nicolas Saenz Julienne (1):
  dma-pool: Only allocate from CMA when in same memory zone

 drivers/iommu/dma-iommu.c   |   4 +-
 include/linux/dma-direct.h  |   3 -
 include/linux/dma-mapping.h |   5 +-
 kernel/dma/direct.c |  13 +++-
 kernel/dma/pool.c   | 145 +++-
 5 files changed, 92 insertions(+), 78 deletions(-)

-- 
2.28.0



[PATCH 1/3] iommu/tegra-smmu: Set IOMMU group name

2020-08-06 Thread Thierry Reding
From: Thierry Reding 

Set the name of static IOMMU groups to help with debugging.

Signed-off-by: Thierry Reding 
---
 drivers/iommu/tegra-smmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 124c8848ab7e..1ffdafe892d9 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -847,6 +847,7 @@ static struct iommu_group *tegra_smmu_group_get(struct 
tegra_smmu *smmu,
return NULL;
}
 
+   iommu_group_set_name(group->group, soc->name);
list_add_tail(&group->list, &smmu->groups);
mutex_unlock(&smmu->lock);
 
-- 
2.27.0



[PATCH 0/3] iommu/tegra-smmu: Reference count fixes

2020-08-06 Thread Thierry Reding
From: Thierry Reding 

I was running into some use-after-free conditions after recent changes
to the host1x driver caused the child devices to be destroyed upon driver
unloading. This in turn caused the IOMMU groups associated with the
child devices to also get released and that uncovered a subtle reference
count unbalance.

This contains two fixes for these issues and also includes a patch that
sets the IOMMU group name for "static" groups to help with debugging.

Thierry

Thierry Reding (3):
  iommu/tegra-smmu: Set IOMMU group name
  iommu/tegra-smmu: Balance IOMMU group reference count
  iommu/tegra-smmu: Prune IOMMU group when it is released

 drivers/iommu/tegra-smmu.c | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

-- 
2.27.0

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 2/3] iommu/tegra-smmu: Balance IOMMU group reference count

2020-08-06 Thread Thierry Reding
From: Thierry Reding 

For groups that are shared between multiple devices, care must be taken
to acquire a reference for each device, otherwise the IOMMU core ends up
dropping the last reference too early, which will cause the group to be
released while consumers may still be thinking that they're holding a
reference to it.

Signed-off-by: Thierry Reding 
---
 drivers/iommu/tegra-smmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 1ffdafe892d9..c439c0929ef8 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -818,6 +818,7 @@ static struct iommu_group *tegra_smmu_group_get(struct 
tegra_smmu *smmu,
 {
const struct tegra_smmu_group_soc *soc;
struct tegra_smmu_group *group;
+   struct iommu_group *grp;
 
soc = tegra_smmu_find_group(smmu, swgroup);
if (!soc)
@@ -827,8 +828,9 @@ static struct iommu_group *tegra_smmu_group_get(struct 
tegra_smmu *smmu,
 
list_for_each_entry(group, &smmu->groups, list)
if (group->soc == soc) {
+   grp = iommu_group_ref_get(group->group);
mutex_unlock(&smmu->lock);
-   return group->group;
+   return grp;
}
 
group = devm_kzalloc(smmu->dev, sizeof(*group), GFP_KERNEL);
-- 
2.27.0



[PATCH 3/3] iommu/tegra-smmu: Prune IOMMU group when it is released

2020-08-06 Thread Thierry Reding
From: Thierry Reding 

In order to share groups between multiple devices we keep track of them
in a per-SMMU list. When an IOMMU group is released, a dangling pointer
to it stays around in that list. Fix this by implementing an IOMMU data
release callback for groups where the dangling pointer can be removed.

Signed-off-by: Thierry Reding 
---
 drivers/iommu/tegra-smmu.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index c439c0929ef8..2574e716086b 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -19,6 +19,7 @@
 
 struct tegra_smmu_group {
struct list_head list;
+   struct tegra_smmu *smmu;
const struct tegra_smmu_group_soc *soc;
struct iommu_group *group;
 };
@@ -813,6 +814,16 @@ tegra_smmu_find_group(struct tegra_smmu *smmu, unsigned 
int swgroup)
return NULL;
 }
 
+static void tegra_smmu_group_release(void *iommu_data)
+{
+   struct tegra_smmu_group *group = iommu_data;
+   struct tegra_smmu *smmu = group->smmu;
+
+   mutex_lock(&smmu->lock);
+   list_del(&group->list);
+   mutex_unlock(&smmu->lock);
+}
+
 static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
unsigned int swgroup)
 {
@@ -840,6 +851,7 @@ static struct iommu_group *tegra_smmu_group_get(struct 
tegra_smmu *smmu,
}
 
+   INIT_LIST_HEAD(&group->list);
+   group->smmu = smmu;
group->soc = soc;
 
group->group = iommu_group_alloc();
@@ -849,6 +861,7 @@ static struct iommu_group *tegra_smmu_group_get(struct 
tegra_smmu *smmu,
return NULL;
}
 
+   iommu_group_set_iommudata(group->group, group, tegra_smmu_group_release);
 iommu_group_set_name(group->group, soc->name);
 list_add_tail(&group->list, &smmu->groups);
 mutex_unlock(&smmu->lock);
-- 
2.27.0



Re: [PATCH v2 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread Christoph Hellwig
On Thu, Aug 06, 2020 at 01:50:29PM +0200, Nicolas Saenz Julienne wrote:
> > The latter is pretty much what I expect, as we only support the default and
> > per-device DMA CMAs.
> 
> Fair enough, should I send a v3 with everything cleaned-up/rebased, or you'd
> rather pick it up from your version?

Please just resend the whole thing. 
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


upstream boot error: general protection fault in swiotlb_map

2020-08-06 Thread syzbot
Hello,

syzbot found the following issue on:

HEAD commit:47ec5303 Merge git://git.kernel.org/pub/scm/linux/kernel/g..
git tree:   upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16fe1dea90
kernel config:  https://syzkaller.appspot.com/x/.config?x=7c06047f622c5724
dashboard link: https://syzkaller.appspot.com/bug?extid=3f86afd0b1e4bf1cb64c
compiler:   gcc (GCC) 10.1.0-syz 20200507

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3f86afd0b1e4bf1cb...@syzkaller.appspotmail.com

ceph: loaded (mds proto 32)
NET: Registered protocol family 38
async_tx: api initialized (async)
Key type asymmetric registered
Asymmetric key parser 'x509' registered
Asymmetric key parser 'pkcs8' registered
Key type pkcs7_test registered
Asymmetric key parser 'tpm_parser' registered
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 243)
io scheduler mq-deadline registered
io scheduler kyber registered
io scheduler bfq registered
hgafb: HGA card not detected.
hgafb: probe of hgafb.0 failed with error -22
usbcore: registered new interface driver udlfb
uvesafb: failed to execute /sbin/v86d
uvesafb: make sure that the v86d helper is installed and executable
uvesafb: Getting VBE info block failed (eax=0x4f00, err=-2)
uvesafb: vbe_init() failed with -22
uvesafb: probe of uvesafb.0 failed with error -22
vga16fb: mapped to 0x8aac772d
Console: switching to colour frame buffer device 80x30
fb0: VGA16 VGA frame buffer device
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
ioatdma: Intel(R) QuickData Technology Driver 5.00
PCI Interrupt Link [GSIF] enabled at IRQ 21
PCI Interrupt Link [GSIG] enabled at IRQ 22
PCI Interrupt Link [GSIH] enabled at IRQ 23
N_HDLC line discipline registered with maxframe=4096
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Cyclades driver 2.6
Initializing Nozomi driver 2.1d
RocketPort device driver module, version 2.09, 12-June-2003
No rocketport ports found; unloading driver
Non-volatile memory driver v1.3
Linux agpgart interface v0.103
[drm] Initialized vgem 1.0.0 20120112 for vgem on minor 0
[drm] Initialized vkms 1.0.0 20180514 for vkms on minor 1
usbcore: registered new interface driver udl
[drm] pci: virtio-vga detected at :00:01.0
fb0: switching to virtiodrmfb from VGA16 VGA
Console: switching to colour VGA+ 80x25
virtio-pci :00:01.0: vgaarb: deactivate vga console
Console: switching to colour dummy device 80x25
[drm] features: -virgl +edid
[drm] number of scanouts: 1
[drm] number of cap sets: 0
[drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 2
general protection fault, probably for non-canonical address 
0xdc00:  [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x-0x0007]
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.0-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 
rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
RIP: 0010:swiotlb_map+0x5ac/0x700 kernel/dma/swiotlb.c:683
Code: 28 04 00 00 48 c1 ea 03 80 3c 02 00 0f 85 4d 01 00 00 4c 8b a5 18 04 00 
00 48 b8 00 00 00 00 00 fc ff df 4c 89 e2 48 c1 ea 03 <80> 3c 02 00 0f 85 1e 01 
00 00 48 8d 7d 50 4d 8b 24 24 48 b8 00 00
RSP: :c934f3e0 EFLAGS: 00010246
RAX: dc00 RBX:  RCX: 8162cc1d
RDX:  RSI: 8162cc98 RDI: 88802971a470
RBP: 88802971a048 R08: 0001 R09: 8c5dba77
R10:  R11:  R12: 
R13: 7ac0 R14: dc00 R15: 1000
FS:  () GS:88802ce0() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2:  CR3: 09a8d000 CR4: 00350ef0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400
Call Trace:
 dma_direct_map_page include/linux/dma-direct.h:170 [inline]
 dma_direct_map_sg+0x3bb/0x670 kernel/dma/direct.c:368
 dma_map_sg_attrs+0xd0/0x160 kernel/dma/mapping.c:183
 drm_gem_shmem_get_pages_sgt drivers/gpu/drm/drm_gem_shmem_helper.c:700 [inline]
 drm_gem_shmem_get_pages_sgt+0x1fc/0x310 
drivers/gpu/drm/drm_gem_shmem_helper.c:679
 virtio_gpu_object_shmem_init drivers/gpu/drm/virtio/virtgpu_object.c:153 
[inline]
 virtio_gpu_object_create+0x2fd/0xa70 
drivers/gpu/drm/virtio/virtgpu_object.c:232
 virtio_gpu_gem_create drivers/gpu/drm/virtio/virtgpu_gem.c:45 [inline]
 virtio_gpu_mode_dumb_create+0x298/0x530 drivers/gpu/drm/virtio/virtgpu_gem.c:85
 drm_mode_create_dumb+0x27c/0x300 drivers/gpu/drm/drm_dumb_buffers.c:94
 drm_client_buffer_create drivers/gpu/drm/drm_client.c:267 [inline]
 drm_client_framebuffer_create+0x1b7/0x770 drivers/gpu/drm/drm_client.c:412
 drm_fb_helper_generic_probe+0x1e5/0x810 

Re: [PATCH v2 2/2] dma-pool: Only allocate from CMA when in same memory zone

2020-08-06 Thread Nicolas Saenz Julienne
On Thu, 2020-08-06 at 07:18 +0200, Christoph Hellwig wrote:
> On Tue, Aug 04, 2020 at 11:43:15AM +0200, Nicolas Saenz Julienne wrote:
> > > Second I don't see the need (and actually some harm) in preventing 
> > > GFP_KERNEL
> > > allocations from dipping into lower CMA areas - something that we did 
> > > support
> > > before 5.8 with the single pool.
> > 
> > My thinking is the less we pressure CMA the better, it's generally scarce, 
> > and
> > it'll not grow as the atomic pools grow. As far as harm is concerned, we now
> > check addresses for correctness, so we shouldn't run into problems.
> > 
> > There is a potential case for architectures defining a default CMA but not
> > defining DMA zones where this could be problematic. But isn't that just 
> > plain
> > abusing CMA? If you need low memory allocations, you should be defining DMA
> > zones.
> 
> The latter is pretty much what I expect, as we only support the default and
> per-device DMA CMAs.

Fair enough, should I send a v3 with everything cleaned-up/rebased, or you'd
rather pick it up from your version?




Re: [RFC PATCH] dma-iommu: allow devices to set IOVA range dynamically

2020-08-06 Thread Christoph Hellwig
I'm not entirely sure this is the right mechanism.  Can you please
send it along with your intended user so that we can get a better
picture?  Also the export needs to be a _GPL one for any kind of
functionality like this.


Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI device in RMRR

2020-08-06 Thread FelixCui-oc
Hi  baolu,
>Sure. Before that, let me sync my understanding with you. You 
have an acpi namespace device in ANDD table, it also shows up in the device 
scope of a RMRR. 
>Current code doesn't enumerate that device for the RMRR, hence 
iommu_create_device_direct_mappings() doesn't work for this device.

>At the same time, probe_acpi_namespace_devices() doesn't work 
for this device, hence you want to add a home-made
>acpi_device_create_direct_mappings() helper.

Your understanding is right. 
But there is a problem that even if the namespace device in 
rmrr is enumerated in the code, probe_acpi_namespace_devices() also doesn't 
work for this device.
This is because the dev parameter of the 
iommu_create_device_direct_mappings() is not the namespace device in RMRR.
The actual parameter passed in is the namespace device's 
physical node device.
In iommu_create_device_direct_mappings(), the physical node 
device passed in cannot match the namespace device in rmrr->device[], right?
We need the acpi_device_create_direct_mappings() helper? 

In addition, adev->physical_node_list is related to the __HID 
of namespace device reported by the bios.
For example, if the __HID reported by the bios belongs to 
acpi_pnp_device_ids[], adev->physical_node_list has no devices.
So in acpi_device_create_direct_mappings(), I added the case 
that adev->physical_node_list is empty.


Best regards
Felix cui


 

-----Original Message-----
From: Lu Baolu  
Sent: August 6, 2020 10:36
To: FelixCui-oc ; Joerg Roedel ; 
iommu@lists.linux-foundation.org; linux-ker...@vger.kernel.org; David Woodhouse 

Cc: baolu...@linux.intel.com; RaymondPang-oc ; 
CobeChen-oc 
Subject: Re: Re: Re: Re: Re: Re: [PATCH] iommu/vt-d: Add support for ACPI device in 
RMRR

Hi Felix,

On 8/5/20 3:37 PM, FelixCui-oc wrote:
> Hi baolu,
>   Let me talk about why acpi_device_create_direct_mappings() is 
> needed and please tell me if there is an error.

Sure. Before that, let me sync my understanding with you. You have an acpi 
namespace device in ANDD table, it also shows up in the device scope of a RMRR. 
Current code doesn't enumerate that device for the RMRR, hence 
iommu_create_device_direct_mappings() doesn't work for this device.

At the same time, probe_acpi_namespace_devices() doesn't work for this device, 
hence you want to add a home-made
acpi_device_create_direct_mappings() helper.

Did I get it right?

>   In the probe_acpi_namespace_devices() function, only the device 
> in the addev->physical_node_list is probed,
>   but we need to establish identity mapping for the namespace 
> device in RMRR. These are two different devices.

The namespace device has been probed and put in one drhd's device list.
Hence, it should be processed by probe_acpi_namespace_devices(). So the 
question is why there are no devices in addev->physical_node_list?

>   Therefore, the namespace device in RMRR is not mapped in 
> probe_acpi_namespace_devices().
>   acpi_device_create_direct_mappings() is to create direct 
> mappings for namespace devices in RMRR.

Best regards,
baolu