On Fri, Nov 08, 2024 at 04:20:30PM +, Fuad Tabba wrote:
> Some folios, such as hugetlb folios and zone device folios,
> require special handling when the folio's reference count reaches
> 0, before being freed. Moreover, guest_memfd folios will likely
> require special handling to notify it onc
On Wed, Nov 06, 2024 at 03:53:23PM +, Will Deacon wrote:
> On Tue, 05 Nov 2024 14:14:23 -0400, Jason Gunthorpe wrote:
> > This is the result of the discussion on removing split. We agreed that
> > split is not required, and no application should ask for anything that
> > w
ality.
Outside the iommu users, this will potentially affect io_pgtable users of
ARM_32_LPAE_S1, ARM_32_LPAE_S2, ARM_64_LPAE_S1, ARM_64_LPAE_S2, and
ARM_MALI_LPAE formats.
Cc: Boris Brezillon
Cc: Steven Price
Cc: Liviu Dudau
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Liviu Dudau
Signe
- Add arm-v7s patch
- Write a kdoc for iommu_unmap()
v1: https://patch.msgid.link/r/0-v1-8c5f369ec2e5+75-arm_no_split_...@nvidia.com
Jason Gunthorpe (3):
iommu/io-pgtable-arm: Remove split on unmap behavior
iommu/io-pgtable-arm-v7s: Remove split on unmap behavior
iommu: Add a kdoc to iommu_unmap
1.ga6...@nvidia.com/
Bring consistency to the implementations and remove this unused
functionality.
There are no uses outside iommu; this affects the ARM_V7S drivers
msm_iommu, mtk_iommu, and arm-smmu.
Signed-off-by: Jason Gunthorpe
---
drivers/iommu/io-pgtable-arm-v7s.c
Reviewed-by: Liviu Dudau
Signed-off-by: Jason Gunthorpe
---
drivers/iommu/iommu.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 83c8e617a2c588..19b177720d3aca 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2
On Tue, Nov 05, 2024 at 04:59:43PM +, Will Deacon wrote:
> > /* Full unmap */
> > iova = 0;
> > for_each_set_bit(i, &cfg.pgsize_bitmap, BITS_PER_LONG) {
>
> Yup, and you can do the same for the other selftest in io-pgtable-arm.c
Ugh, yes, I ran it and thought the log it printed wa
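The selftest being discussed walks every page size the io_pgtable config supports by iterating the set bits of `pgsize_bitmap`, where each set bit i means a supported page size of 1UL << i. A minimal userspace stand-in for that walk is sketched below; the bitmap value (4K | 2M | 1G) is an illustrative assumption, not taken from any particular io-pgtable format, and `next_pgsize()` is a hypothetical helper replacing the kernel's `for_each_set_bit()`.

```c
#include <stddef.h>

/* Illustrative pgsize_bitmap: 4K, 2M and 1G page sizes (an assumption,
 * not a real format's bitmap). */
static unsigned long pgsize_bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);

/* Return the next supported page size at or after *bit, advancing *bit
 * past it; 0 when the bitmap is exhausted. This is the userspace
 * equivalent of the kernel's for_each_set_bit() loop in the selftest. */
static size_t next_pgsize(unsigned long bitmap, unsigned int *bit)
{
	while (*bit < 8 * sizeof(unsigned long)) {
		unsigned int i = (*bit)++;

		if (bitmap & (1UL << i))
			return (size_t)1 << i;
	}
	return 0;
}
```

A caller would loop `while ((sz = next_pgsize(pgsize_bitmap, &bit)) != 0)` and exercise map/unmap at each size, which is what both selftests (io-pgtable-arm.c and arm-v7s) do over their respective bitmaps.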
On Mon, Nov 04, 2024 at 07:53:46PM +, Robin Murphy wrote:
> On 2024-11-04 5:41 pm, Jason Gunthorpe wrote:
> > A minority of page table implementations (arm_lpae, armv7) are unique in
> > how they handle partial unmap of large IOPTEs.
> >
> > Other implementations
Signed-off-by: Jason Gunthorpe
---
drivers/iommu/iommu.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 83c8e617a2c588..d3cf7cc69c797c 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2586,6 +2586,20 @@ static s
do of removing the whole
IOPTE and returning 0.
The kdoc is updated to describe this.
v2:
- Use WARN_ON instead of duplicating AMD behavior
- Add arm-v7s patch
- Write a kdoc for iommu_unmap()
v1: https://patch.msgid.link/r/0-v1-8c5f369ec2e5+75-arm_no_split_...@nvidia.com
Jason Gunthorpe (3
sysfs.c | 2 +-
> drivers/usb/core/sysfs.c| 2 +-
> include/linux/sysfs.h | 30 +++---
> 12 files changed, 27 insertions(+), 26 deletions(-)
For infiniband:
Acked-by: Jason Gunthorpe
On Fri, Nov 01, 2024 at 11:58:29AM +, Will Deacon wrote:
> On Fri, Oct 18, 2024 at 02:19:26PM -0300, Jason Gunthorpe wrote:
> > Of the page table implementations (AMD v1/2, VT-D SS, ARM32, DART)
> > arm_lpae is unique in how it handles partial unmap of large IOPTEs.
> >
On Fri, Oct 18, 2024 at 02:19:26PM -0300, Jason Gunthorpe wrote:
> Of the page table implementations (AMD v1/2, VT-D SS, ARM32, DART)
> arm_lpae is unique in how it handles partial unmap of large IOPTEs.
>
> All other drivers will unmap the large IOPTE and return its length. F
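The semantics the series standardizes on can be shown with a toy model: a request that only partially covers a huge IOPTE does not split it into smaller mappings, the whole entry is torn down and its full length reported to the caller. This is a sketch of the agreed behavior only, the struct and function names are stand-ins and bear no relation to the real driver code.

```c
#include <stddef.h>
#include <stdbool.h>

/* Toy model of a single huge IOPTE, e.g. a 2M block mapping. */
struct toy_iopte {
	size_t size;
	bool present;
};

/* Models the agreed unmap semantics: even when req_len covers only part
 * of the entry, the whole IOPTE is unmapped and its full size returned,
 * rather than splitting it into smaller IOPTEs. */
static size_t unmap_range(struct toy_iopte *pte, size_t req_len)
{
	(void)req_len;		/* the request length is not what is returned */

	if (!pte->present)
		return 0;
	pte->present = false;	/* whole entry goes away */
	return pte->size;	/* caller learns the real unmapped length */
}
```

Callers (e.g. VFIO) then see the true length from iommu_unmap() and must not assume a partial request leaves the remainder mapped; with this series the arm formats warn instead of silently splitting.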
On Thu, Oct 24, 2024 at 02:05:53PM +0100, Will Deacon wrote:
> My recollection is hazy, but I seem to remember VFIO using the largest
> page sizes in the IOMMU 'pgsize_bitmap' for map() requests but then
> using the smallest page size for unmap() requests, so you'd end up
> cracking block mappings
On Mon, Oct 21, 2024 at 02:50:34PM +0100, Robin Murphy wrote:
> Beware that whatever the Mali drivers might have the option to do for
> themselves, there's still no notion of "atomic update" for SMMU and
> io-pgtable-arm in general, other than perhaps for permission changes - even
> BBML is quite
On Mon, Oct 21, 2024 at 12:32:21PM +0100, Steven Price wrote:
> > that, we can always do it in two steps (unmap the 2M region and remap
> > the borders). At some point it'd be good to have some kind of atomic
> > page table updates, so we don't have this short period of time during
> > which nothi
On Thu, Oct 17, 2024 at 06:49:30AM -0700, Christoph Hellwig wrote:
> On Thu, Oct 17, 2024 at 10:46:44AM -0300, Jason Gunthorpe wrote:
> > On Thu, Oct 17, 2024 at 06:12:55AM -0700, Christoph Hellwig wrote:
> > > On Thu, Oct 17, 2024 at 10:05:39AM -0300, Jason Gunthorpe wrote:
On Thu, Oct 17, 2024 at 06:12:55AM -0700, Christoph Hellwig wrote:
> On Thu, Oct 17, 2024 at 10:05:39AM -0300, Jason Gunthorpe wrote:
> > Broadly I think whatever flow NVMe uses for P2P will apply to ODP as
> > well.
>
> ODP is a lot simpler than NVMe for P2P actually :(
On Thu, Oct 17, 2024 at 04:58:12AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 02:44:45PM -0300, Jason Gunthorpe wrote:
> > > > FWIW, I've been expecting this series to be rebased on top of Leon's
> > > > new DMA API series so it doesn't
On Thu, Oct 17, 2024 at 12:58:48PM +1100, Alistair Popple wrote:
> Actually I think the rule should be don't look at the page at
> all. hmm_range_fault() is about mirroring PTEs, no assumption should
> even be made about the existence or otherwise of a struct page.
We are not there yet..
> > We
On Wed, Oct 16, 2024 at 09:41:03AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 12:44:28PM -0300, Jason Gunthorpe wrote:
> > > We are talking about P2P memory here. How do you manage to get a page
> > > that dma_map_page can be used on? All P2P memory
On Wed, Oct 16, 2024 at 04:10:53PM +1100, Alistair Popple wrote:
> On that note how is the refcounting of the returned p2pdma page expected
> to work? We don't want the driver calling hmm_range_fault() to be able
> to pin the page with eg. get_page(), so the returned p2pdma page should
> have a zer
On Tue, Oct 15, 2024 at 09:49:30PM -0700, Christoph Hellwig wrote:
> > + /*
> > +* Used for private (un-addressable) device memory only. Return a
> > +* corresponding struct page, that can be mapped to device
> > +* (e.g using dma_map_page)
> > +*/
> > + struct page *(*get_dma_
On Tue, Oct 15, 2024 at 02:41:24PM +0200, Thomas Hellström wrote:
> > It has nothing to do with kernel P2P, you are just allowing more
> > selective filtering of dev_private_owner. You should focus on that in
> > the naming, not p2p. ie allow_dev_private()
> >
> > P2P is stuff that is dealing with
On Tue, Oct 15, 2024 at 01:13:22PM +0200, Thomas Hellström wrote:
> Introduce a way for hmm_range_fault() and migrate_vma_setup() to identify
> foreign devices with fast interconnect and thereby allow
> both direct access over the interconnect and p2p migration.
>
> The need for a callback arises
On Mon, Sep 16, 2024 at 04:42:33PM -0400, Lyude Paul wrote:
> Sigh. Took me a minute but I think I know what happened - I meant to push the
> entire series to drm-misc-next and not drm-misc-fixes, but I must have misread
> or typo'd the branch name and pushed the second half of patches to drm-misc-
On Thu, Sep 05, 2024 at 12:26:31PM -0400, Lyude Paul wrote:
> I did take the one patch - but I'm happy to push the others to drm-misc
> (provided they all get reviewed. 2/3 seems to be reviewed already but not 3/3)
Did it get lost?
$ git reset --hard next-20240913
$ git grep 'iommu_domain_alloc('
On Thu, Sep 05, 2024 at 12:26:31PM -0400, Lyude Paul wrote:
> I did take the one patch - but I'm happy to push the others to drm-misc
> (provided they all get reviewed. 2/3 seems to be reviewed already but not 3/3)
The whole series is acked now, can you pick it up please?
Thanks,
Jason
On Wed, Sep 04, 2024 at 03:06:07PM -0400, Lyude Paul wrote:
> Reviewed-by: Lyude Paul
>
> Will handle pushing it to drm-misc in just a moment
Did you just take this one patch?
Who will take the rest of the series for DRM?
Jason
On Mon, Aug 12, 2024 at 03:02:01PM +0800, Lu Baolu wrote:
> From: Robin Murphy
>
> All users of ARM IOMMU mappings create them for a particular device, so
> change the interface to accept the device rather than forcing a vague
> indirection through a bus type. This prepares for making a similar
>
On Wed, Jul 17, 2024 at 10:51:03AM +, Omer Shpigelman wrote:
> The only place we have an ops structure is in the device driver,
> similarly to Jason's example. In our code it is struct
> hbl_aux_dev. What
No, hbl_aux_dev is a 'struct auxiliary_device', not a 'struct
device_driver', it is dif
On Sun, Jul 14, 2024 at 10:18:12AM +, Omer Shpigelman wrote:
> On 7/12/24 16:08, Jason Gunthorpe wrote:
> >
> > On Fri, Jun 28
On Fri, Jun 28, 2024 at 10:24:32AM +, Omer Shpigelman wrote:
> We need the core driver to access the IB driver (and to the ETH driver as
> well). As you wrote, we can't use exported symbols from our IB driver nor
> rely on function pointers, but what about providing the core driver an ops
> st
On Thu, Jul 04, 2024 at 03:18:56PM +0100, Will Deacon wrote:
> On Mon, 10 Jun 2024 16:55:34 +0800, Lu Baolu wrote:
> > The IOMMU subsystem has undergone some changes, including the removal
> > of iommu_ops from the bus structure. Consequently, the existing domain
> > allocation interface, which rel
On Mon, Jun 17, 2024 at 04:13:41PM +0100, Robin Murphy wrote:
> On 23/05/2024 6:52 pm, Rob Clark wrote:
> > From: Rob Clark
> >
> > Add an io-pgtable method to walk the pgtable returning the raw PTEs that
> > would be traversed for a given iova access.
>
> Have to say I'm a little torn here - wi
g in the way of that.
>
> Signed-off-by: Lu Baolu
> ---
> include/linux/iommu.h | 6 --
> drivers/iommu/iommu.c | 36
> 2 files changed, 42 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
> 2 files changed, 31 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
mmu_paging_domain_alloc() to retire the former.
>
> Signed-off-by: Lu Baolu
> ---
> drivers/gpu/drm/rockchip/rockchip_drm_drv.c | 10 +++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
.c | 6 --
> 1 file changed, 4 insertions(+), 2 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
oid using
> iommu_domain_alloc().
>
> Signed-off-by: Lu Baolu
> ---
> drivers/iommu/intel/iommu.c | 87 +
> 1 file changed, 78 insertions(+), 9 deletions(-)
It seems Ok, but I have some small thoughts
Reviewed-by: Jason Gunthorpe
> diff --git a
fsl/qbman/qman_portal.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
--
> drivers/remoteproc/remoteproc_core.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
> Acked-by: Jeff Johnson
> ---
> drivers/net/wireless/ath/ath11k/ahb.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
son
> ---
> drivers/net/wireless/ath/ath10k/snoc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
On Mon, Jun 10, 2024 at 04:55:34PM +0800, Lu Baolu wrote:
> Lu Baolu (20):
> iommu: Add iommu_paging_domain_alloc() interface
> iommufd: Use iommu_paging_domain_alloc()
> vfio/type1: Use iommu_paging_domain_alloc()
> drm/msm: Use iommu_paging_domain_alloc()
> wifi: ath10k: Use iommu_pagin
edia/platform/qcom/venus/firmware.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
tegra-vde/iommu.c | 7 ---
> 1 file changed, 4 insertions(+), 3
Reviewed-by: Jason Gunthorpe
Jason
> 1 file changed, 4 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
On Mon, Jun 10, 2024 at 04:55:38PM +0800, Lu Baolu wrote:
> Replace iommu_domain_alloc() with iommu_paging_domain_alloc().
>
> Signed-off-by: Lu Baolu
> ---
> drivers/vhost/vdpa.c | 14 ++
> 1 file changed, 6 insertions(+), 8 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
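The conversion in these patches is mechanical: the bus-based allocator is replaced by a device-based one, so the domain is attributed to a specific device instead of a vague bus indirection. The sketch below uses stub types to illustrate the shape of the change; only the two allocator names reflect the real API under discussion, everything else (the `owner` field, the static domain pool) is a stand-in.

```c
#include <stddef.h>

/* Stub types standing in for the kernel's. */
struct bus_type { int id; };
struct device { struct bus_type *bus; };
struct iommu_domain { struct device *owner; };

static struct iommu_domain domains[2];
static int next;

/* Old interface: only knows the bus, so the domain cannot be
 * attributed to the device it is really for. */
static struct iommu_domain *iommu_domain_alloc(struct bus_type *bus)
{
	(void)bus;
	domains[next].owner = NULL;
	return &domains[next++];
}

/* New interface: allocated for a particular device, which is what
 * every caller actually has in hand. */
static struct iommu_domain *iommu_paging_domain_alloc(struct device *dev)
{
	domains[next].owner = dev;
	return &domains[next++];
}
```

Call sites like vhost/vdpa and vfio/type1 change from `iommu_domain_alloc(dev->bus)` to `iommu_paging_domain_alloc(dev)`, which is why each patch in the series is a small, per-driver substitution.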
On Mon, Jun 10, 2024 at 04:55:37PM +0800, Lu Baolu wrote:
> Replace iommu_domain_alloc() with iommu_paging_domain_alloc().
>
> Signed-off-by: Lu Baolu
> ---
> drivers/vfio/vfio_iommu_type1.c | 7 ---
> 1 file changed, 4 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
> device pointer along the path.
>
> Signed-off-by: Lu Baolu
> ---
> drivers/iommu/iommufd/hw_pagetable.c | 7 ---
> 1 file changed, 4 insertions(+), 3 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
> Signed-off-by: Lu Baolu
> ---
> include/linux/iommu.h | 6 ++
> drivers/iommu/iommu.c | 20 ++++
> 2 files changed, 26 insertions(+)
Reviewed-by: Jason Gunthorpe
Jason
On Thu, Jun 13, 2024 at 11:22:04AM +0300, Omer Shpigelman wrote:
> Add an RDMA driver of Gaudi ASICs family for AI scaling.
> The driver itself is agnostic to the ASIC in action, it operates according
> to the capabilities that were passed on device initialization.
> The device is initialized by th
On Tue, Jun 11, 2024 at 11:09:15AM -0700, Mina Almasry wrote:
> Just curious: in Pavel's effort, io_uring - which is not a device - is
> trying to share memory with the page_pool, which is also not a device.
> And Pavel is being asked to wrap the memory in a dmabuf. Is dmabuf
> going to be the ker
On Mon, Jun 10, 2024 at 08:20:08PM +0100, Pavel Begunkov wrote:
> On 6/10/24 16:16, David Ahern wrote:
> > > There is no reason you shouldn't be able to use your fast io_uring
> > > completion and lifecycle flow with DMABUF backed memory. Those are not
> > > widly different things and there is goo
On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
> On 6/10/24 01:37, David Wei wrote:
> > On 2024-06-07 17:52, Jason Gunthorpe wrote:
> > > IMHO it seems to compose poorly if you can only use the io_uring
> > > lifecycle model with io_uring registered
On Fri, Jun 07, 2024 at 08:27:29AM -0600, David Ahern wrote:
> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
> > I haven't seen any arguments against from the (net) maintainers so
> > far. Nor I see any objection against callbacks from them (considering
> > that either option adds an if).
>
> I have sa
On Wed, Jun 05, 2024 at 10:17:07AM +0800, Baolu Lu wrote:
> On 6/5/24 12:51 AM, Jason Gunthorpe wrote:
> > On Tue, Jun 04, 2024 at 09:51:14AM +0800, Lu Baolu wrote:
> > > Replace iommu_domain_alloc() with iommu_user_domain_alloc().
> > >
> > > Signed-off-by:
On Tue, Jun 04, 2024 at 09:51:14AM +0800, Lu Baolu wrote:
> Replace iommu_domain_alloc() with iommu_user_domain_alloc().
>
> Signed-off-by: Lu Baolu
> ---
> drivers/iommu/iommufd/hw_pagetable.c | 20 +---
> 1 file changed, 5 insertions(+), 15 deletions(-)
>
> diff --git a/driver
>
> Signed-off-by: Lu Baolu
> ---
> drivers/infiniband/hw/usnic/usnic_uiom.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
Acked-by: Jason Gunthorpe
Jason
On Tue, Jun 04, 2024 at 12:15:51PM -0400, Steven Rostedt wrote:
> On Tue, 04 Jun 2024 12:13:15 +0200
> Paolo Abeni wrote:
>
> > On Thu, 2024-05-30 at 20:16 +, Mina Almasry wrote:
> > > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > > index d82f92d7cf9ce..d5fac8edf621d 100644
> > > ---
On Wed, May 29, 2024 at 08:02:12PM +0800, Baolu Lu wrote:
> > > drivers/infiniband/hw/usnic/usnic_uiom.c: pd->domain = domain
> > > = iommu_domain_alloc(dev->bus);
> > >
> > > This series leave those cases unchanged and keep iommu_domain_alloc()
> > > for their usage. But new drivers should
On Wed, May 08, 2024 at 04:44:32PM +0100, Pavel Begunkov wrote:
> > like a weird and indirect way to get there. Why can't io_uring just be
> > the entity that does the final free and not mess with the logic
> > allocator?
>
> Then the user has to do a syscall (e.g. via io_uring) to return pages,
On Wed, May 08, 2024 at 12:30:07PM +0100, Pavel Begunkov wrote:
> > I'm not going to pretend to know about page pool details, but dmabuf
> > is the way to get the bulk of pages into a pool within the net stack's
> > allocator and keep that bulk properly refcounted while. An object like
> > dmabuf
On Thu, May 02, 2024 at 07:50:36AM +, Kasireddy, Vivek wrote:
> Hi Jason,
>
> >
> > On Tue, Apr 30, 2024 at 04:24:50PM -0600, Alex Williamson wrote:
> > > > +static vm_fault_t vfio_pci_dma_buf_fault(struct vm_fault *vmf)
> > > > +{
> > > > + struct vm_area_struct *vma = vmf->vma;
> > >
On Tue, May 07, 2024 at 08:35:37PM +0100, Pavel Begunkov wrote:
> On 5/7/24 18:56, Jason Gunthorpe wrote:
> > On Tue, May 07, 2024 at 06:25:52PM +0100, Pavel Begunkov wrote:
> > > On 5/7/24 17:48, Jason Gunthorpe wrote:
> > > > On Tue, May 07, 2024 at 09:42:
On Tue, May 07, 2024 at 06:25:52PM +0100, Pavel Begunkov wrote:
> On 5/7/24 17:48, Jason Gunthorpe wrote:
> > On Tue, May 07, 2024 at 09:42:05AM -0700, Mina Almasry wrote:
> >
> > > 1. Align with devmem TCP to use udmabuf for your io_uring memory. I
> > > think
On Tue, May 07, 2024 at 09:42:05AM -0700, Mina Almasry wrote:
> 1. Align with devmem TCP to use udmabuf for your io_uring memory. I
> think in the past you said it's a uapi you don't link but in the face
> of this pushback you may want to reconsider.
dmabuf does not force a uapi, you can acquire
On Tue, May 07, 2024 at 05:05:12PM +0100, Pavel Begunkov wrote:
> > even in tree if you give them enough rope, and they should not have
> > that rope when the only sensible options are page/folio based kernel
> > memory (incuding large/huge folios) and dmabuf.
>
> I believe there is at least one d
On Mon, May 06, 2024 at 11:50:36PM +, Matthew Brost wrote:
> > I think like with the gpu vma stuff we should at least aim for the core
> > data structures, and more importantly, the locking design and how it
> > interacts with core mm services to be common code.
>
> I believe this is a reasona
On Fri, May 03, 2024 at 08:29:39PM +, Zeng, Oak wrote:
> > > But we have use case where we want to fault-in pages other than the
> > > page which contains the GPU fault address, e.g., user malloc'ed or
> > > mmap'ed 8MiB buffer, and no CPU touching of this buffer before GPU
> > > access it. Le
On Fri, May 03, 2024 at 02:43:19PM +, Zeng, Oak wrote:
> > > 2.
> > > Then call hmm_range_fault a second time
> > > Setting the hmm_range start/end only to cover valid pfns
> > > With all valid pfns, set the REQ_FAULT flag
> >
> > Why would you do this? The first already did the faults you nee
On Thu, May 02, 2024 at 07:25:50PM +, Zeng, Oak wrote:
> Hi Jason,
>
> I tried to understand how you supposed us to use hmm range fault... it seems
> you want us to call hmm range fault two times on each gpu page fault:
> 1.
> Call Hmm_range_fault first time, pfn of the fault address is set
On Fri, May 03, 2024 at 01:18:35PM +0300, Ilpo Järvinen wrote:
> On Thu, 15 Feb 2024, Ilpo Järvinen wrote:
>
> > Convert open coded RMW accesses for LNKCTL2 to use
> > pcie_capability_clear_and_set_word() which makes its easier to
> > understand what the code tries to do.
> >
> > LNKCTL2 is not r
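The conversion Ilpo describes collapses an open-coded read/modify/write of LNKCTL2 into one clear-and-set helper call, which is what makes the converted sites easier to read. Below is a userspace model of the helper's bit manipulation only: the plain `uint16_t` store and the field values are stand-ins, as the real `pcie_capability_clear_and_set_word()` reads and writes PCI config space.

```c
#include <stdint.h>

/* Stand-in for the device's LNKCTL2 register; the real helper does a
 * config-space read and write instead of touching a variable. */
static uint16_t lnkctl2;

/* Models pcie_capability_clear_and_set_word(): the whole RMW is one
 * call, instead of separate read, mask, or, and write statements at
 * every call site. */
static void clear_and_set_word(uint16_t *reg, uint16_t clear, uint16_t set)
{
	uint16_t val = *reg;	/* read */

	val &= ~clear;		/* clear the field, e.g. target link speed */
	val |= set;		/* set the new value */
	*reg = val;		/* write back */
}
```

A call site such as setting a new target link speed becomes a single `clear_and_set_word(&lnkctl2, SPEED_MASK, new_speed)` (mask and value names here are hypothetical), replacing four open-coded statements.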
On Thu, May 02, 2024 at 11:11:04AM +0200, Thomas Hellström wrote:
> It's true the cpu vma lookup is a remnant from amdkfd. The idea here is
> to replace that with fixed prefaulting ranges of tunable size. So far,
> as you mention, the prefaulting range has been determined by the CPU
> vma size. Gi
On Tue, Apr 30, 2024 at 04:24:50PM -0600, Alex Williamson wrote:
> > +static vm_fault_t vfio_pci_dma_buf_fault(struct vm_fault *vmf)
> > +{
> > + struct vm_area_struct *vma = vmf->vma;
> > + struct vfio_pci_dma_buf *priv = vma->vm_private_data;
> > + pgoff_t pgoff = vmf->pgoff;
> > +
> > +
On Tue, Apr 30, 2024 at 08:57:48PM +0200, Daniel Vetter wrote:
> On Tue, Apr 30, 2024 at 02:30:02PM -0300, Jason Gunthorpe wrote:
> > On Mon, Apr 29, 2024 at 10:25:48AM +0200, Thomas Hellström wrote:
> >
> > > > Yes there is another common scheme where you bind a
On Mon, Apr 29, 2024 at 10:25:48AM +0200, Thomas Hellström wrote:
> > Yes there is another common scheme where you bind a window of CPU to
> > a
> > window on the device and mirror a fixed range, but this is a quite
> > different thing. It is not SVA, it has a fixed range, and it is
> > probably b
On Fri, Apr 26, 2024 at 04:49:26PM +0200, Thomas Hellström wrote:
> On Fri, 2024-04-26 at 09:00 -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 26, 2024 at 11:55:05AM +0200, Thomas Hellström wrote:
> > > First, the gpu_vma structure is something that partitions the
> > >
On Fri, Apr 26, 2024 at 11:55:05AM +0200, Thomas Hellström wrote:
> First, the gpu_vma structure is something that partitions the gpu_vm
> that holds gpu-related range metadata, like what to mirror, desired gpu
> caching policies etc. These are managed (created, removed and split)
> mainly from use
On Wed, Apr 24, 2024 at 11:59:18PM +, Zeng, Oak wrote:
> Hi Jason,
>
> I went through the conversation b/t you and Matt. I think we are pretty much
> aligned. Here is what I get from this threads:
>
> 1) hmm range fault size, gpu page table map size : you prefer bigger
> gpu vma size and vma
On Wed, Apr 24, 2024 at 04:56:57PM +, Matthew Brost wrote:
> > What "meta data" is there for a SVA mapping? The entire page table is
> > an SVA.
>
> If we have allocated memory for GPU page tables in the range,
This is encoded directly in the radix tree.
> if range
> has been invalidated,
On Wed, Apr 24, 2024 at 04:35:17PM +, Matthew Brost wrote:
> On Wed, Apr 24, 2024 at 10:57:54AM -0300, Jason Gunthorpe wrote:
> > On Wed, Apr 24, 2024 at 02:31:36AM +, Matthew Brost wrote:
> >
> > > AMD seems to register notifiers on demand for parts of the ad
On Wed, Apr 24, 2024 at 02:31:36AM +, Matthew Brost wrote:
> AMD seems to register notifiers on demand for parts of the address space
> [1], I think Nvidia's open source driver does this too (can look this up
> if needed). We (Intel) also do this in Xe and the i915 for userptrs
> (explictly bi
On Tue, Apr 23, 2024 at 09:17:03PM +, Zeng, Oak wrote:
> > On Tue, Apr 09, 2024 at 04:45:22PM +, Zeng, Oak wrote:
> >
> > > > I saw, I am saying this should not be done. You cannot unmap bits of
> > > > a sgl mapping if an invalidation comes in.
> > >
> > > You are right, if we register a
On Tue, Apr 09, 2024 at 04:45:22PM +, Zeng, Oak wrote:
> > I saw, I am saying this should not be done. You cannot unmap bits of
> > a sgl mapping if an invalidation comes in.
>
> You are right, if we register a huge mmu interval notifier to cover
> the whole address space, then we should use
On Fri, Apr 05, 2024 at 04:42:14PM +, Zeng, Oak wrote:
> > > Above codes deal with a case where dma map is not needed. As I
> > > understand it, whether we need a dma map depends on the devices
> > > topology. For example, when device access host memory or another
> > > device's memory through
On Fri, Apr 05, 2024 at 03:33:10AM +, Zeng, Oak wrote:
> >
> > I didn't look at this series a lot but I wanted to make a few
> > remarks.. This I don't like quite a lot. Yes, the DMA API interaction
> > with hmm_range_fault is pretty bad, but it should not be hacked
> > around like this. Leon
On Wed, Jan 17, 2024 at 05:12:06PM -0500, Oak Zeng wrote:
> +/**
> + * xe_svm_build_sg() - build a scatter gather table for all the physical
> pages/pfn
> + * in a hmm_range.
> + *
> + * @range: the hmm range that we build the sg table from. range->hmm_pfns[]
> + * has the pfn numbers of pages tha
On Wed, Apr 03, 2024 at 04:06:11PM +0200, Christian König wrote:
[UGH html emails, try to avoid those they don't get archived!]
> The problem with that isn't the software but the hardware.
> At least on the AMD GPUs and Intel's Xe accelerators we have seen so far
> page faults are not fas
On Wed, Apr 03, 2024 at 11:16:36AM +0200, Christian König wrote:
> Am 03.04.24 um 00:57 schrieb Dave Airlie:
> > On Wed, 27 Mar 2024 at 19:52, Thomas Hellström
> > wrote:
> > > Hi!
> > >
> > > With our SVM mirror work we'll soon start looking at HMM cross-device
> > > support. The identified need
On Fri, Feb 02, 2024 at 12:15:40PM -0400, Jason Gunthorpe wrote:
> > Yes looks like a race of some sort. Adding a bit of debug also makes the
> > issue go away so difficult to see what is happening.
>
> I'm wondering if it is racing with iommu driver probing? I loo
On Fri, Feb 02, 2024 at 03:56:52PM +, Jon Hunter wrote:
>
> On 02/02/2024 14:35, Jason Gunthorpe wrote:
> > On Fri, Feb 02, 2024 at 10:40:36AM +, Jon Hunter wrote:
> >
> > > > But, what is the actual log output you see, is it -EEXIST?
> > >
On Fri, Feb 02, 2024 at 10:40:36AM +, Jon Hunter wrote:
> > But, what is the actual log output you see, is it -EEXIST?
>
> I see ...
>
> ERR KERN host1x drm: iommu configuration for device failed with -ENOENT
So that shouldn't happen in you case as far as I can tell, the device
is properly
On Thu, Feb 01, 2024 at 07:35:24PM +, Jon Hunter wrote:
> > You mean this sequence?
> >
> > err = device_add(&ctx->dev);
> > if (err) {
> > dev_err(host1x->dev, "could not add context device %d:
> > %d\n", i, err);
> > put_device
On Tue, Jan 30, 2024 at 09:55:18PM +, Jon Hunter wrote:
>
> On 30/01/2024 16:15, Jason Gunthorpe wrote:
> > This was added in commit c95469aa5a18 ("gpu: host1x: Set DMA ops on device
> > creation") with the note:
> >
> > Currently host1x-instanc
uxrde7wyqeulm4xabmlm@b6jy32saugqh/
Reported-by: Jon Hunter
Closes:
https://lore.kernel.org/all/b0334c5e-3a6c-4b58-b525-e72bed889...@nvidia.com/
Signed-off-by: Jason Gunthorpe
---
drivers/gpu/host1x/bus.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/