On 24.09.25 01:02, Matthew Brost wrote:
>>>> What Simona agreed on is exactly what I proposed as well, that you
>>>> get a private interface for exactly that use case.
>
> Do you have a link to the conversation with Simona? I'd lean towards a
> kernel-wide generic interface if possible.
Oh, finding that exact mail is tricky. IIRC she wrote something along the
lines of "this should be done in a vfio/iommufd interface", but maybe my
memory is a bit selective.

We can of course still leverage the DMA-buf lifetime, synchronization and
other functionality. But this is so vfio specific that I don't think it is
going to fly as a general DMA-buf interface.

> Regarding phys_addr_t vs. dma_addr_t, I don't have a strong opinion. But
> what about using an array of unsigned long with the order encoded
> similarly to HMM PFNs? Drivers can interpret the address portion of the
> data based on their individual use cases.

That's basically what I had in mind for replacing the sg_table.

> Also, to make this complete—do you think we'd need to teach TTM to
> understand this new type of dma-buf, like we do for SG list dma-bufs? It
> would seem a bit pointless if we just had to convert this new dma-buf
> back into an SG list to pass it along to TTM.

Using an sg_table / SG list in DMA-buf and TTM was a bad idea to begin
with. At least for amdgpu we have switched over to keeping it around only
temporarily for most use cases.

What we need is a container for efficient dma_addr_t storage (e.g. using
the low bits for the size/order of the area), plus iterators/cursors to
walk that container. Switching between an sg_table and that new container
is then just a matter of switching out the iterators.

> The scope of this seems considerably larger than the original series. It
> would be good for all stakeholders to reach an agreement so Vivek can
> move forward.

Yeah, agree.

Regards,
Christian.

>
> Matt
>
>>>
>>> A "private" interface to exchange phys_addr_t between at least
>>> VFIO/KVM/iommufd - sure no complaint with that.
>>>
>>> Jason
>>
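[Editor's note: to make the container idea above concrete, here is a minimal
sketch of what the encoding and a cursor over it could look like. All names
below are made up purely for illustration; nothing like this exists in the
kernel today, and the exact layout is what is being discussed in this thread.]

#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/types.h>	/* dma_addr_t */

/*
 * DMA addresses in the container are at least PAGE_SIZE aligned, so the
 * low PAGE_SHIFT bits are free to carry the order (log2 number of pages)
 * of the contiguous range, similar to how HMM encodes flags into PFNs.
 */
#define DMA_RANGE_ORDER_MASK	((dma_addr_t)(PAGE_SIZE - 1))

static inline dma_addr_t dma_range_encode(dma_addr_t addr, unsigned int order)
{
	/* addr must be page aligned, order must fit into the low bits */
	return addr | order;
}

static inline dma_addr_t dma_range_addr(dma_addr_t entry)
{
	return entry & ~DMA_RANGE_ORDER_MASK;
}

static inline size_t dma_range_size(dma_addr_t entry)
{
	return PAGE_SIZE << (entry & DMA_RANGE_ORDER_MASK);
}

/* Minimal cursor walking such an array, byte by byte. */
struct dma_range_cursor {
	const dma_addr_t *entries;
	unsigned int num_entries;
	unsigned int idx;
	size_t offset;		/* byte offset into the current entry */
};

static inline dma_addr_t dma_range_cursor_addr(const struct dma_range_cursor *cur)
{
	return dma_range_addr(cur->entries[cur->idx]) + cur->offset;
}

static inline void dma_range_cursor_advance(struct dma_range_cursor *cur,
					    size_t bytes)
{
	cur->offset += bytes;
	while (cur->idx < cur->num_entries &&
	       cur->offset >= dma_range_size(cur->entries[cur->idx])) {
		cur->offset -= dma_range_size(cur->entries[cur->idx]);
		cur->idx++;
	}
}

[Packing the order into the low bits keeps each contiguous range at one word,
and a cursor like the above could be swapped in wherever an sg_table iterator
is used today, which is the "switching out the iterators" point made above.]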
