On 2025-09-29 at 17:06 +1000, Danilo Krummrich <[email protected]> wrote...
> On Mon Sep 29, 2025 at 2:19 AM CEST, Alistair Popple wrote:
> > On 2025-09-26 at 22:00 +1000, Danilo Krummrich <[email protected]> wrote...
> >> On Tue Sep 23, 2025 at 6:29 AM CEST, Alistair Popple wrote:
> >> > On 2025-09-23 at 12:16 +1000, John Hubbard <[email protected]> wrote...
> >> >> On 9/22/25 9:08 AM, Danilo Krummrich wrote:
> >> >> > On 9/22/25 1:30 PM, Alistair Popple wrote:
> >> >> >> +        // SAFETY: No DMA allocations have been made yet
> >> >> > 
> >> >> > It's not really about DMA allocations that have been made
> >> >> > previously, there is no unsafe behavior in that.
> >> >> > 
> >> >> > It's about the fact that this method must not be called
> >> >> > concurrently with any DMA allocation or mapping primitives.
> >> >> > 
> >> >> > Can you please adjust the comment correspondingly?
> >> >
> >> > Sure.
> >> >
> >> >> >> +        unsafe { pdev.dma_set_mask_and_coherent(DmaMask::new::<47>())? };
> >> >> > 
> >> >> > As Boqun mentioned, we shouldn't have a magic number for this. I
> >> >> > don't know if it will change for future chips, but maybe we should
> >> >> > move this to gpu::Spec to
> >> >> 
> >> >> It changes to 52 bits for GH100+ (Hopper/Blackwell+). When I post those
> >> >> patches, I'll use a HAL to select the value.
> >> >> 
> >> >> > be safe.
> >> >> > 
> >> >> > At least, create a constant for it (also in gpu::Spec?); in Nouveau
> >> >> > I named this NOUVEAU_VA_SPACE_BITS back then. Not a great name; if
> >> >> > you have a better idea, please go for it. :)
> >> >
> >> > Well it's certainly not the VA_SPACE width ... that's a different 
> >> > address space :-)
> >> 
> >> I mean, sure. But isn't the limitation of 47 bits coming from the MMU,
> >> and doesn't it hence define the VA space width and the DMA bit width we
> >> can support?
> >
> > Not at all. The 47-bit limitation comes from what the DMA engines can
> > physically address, whilst the MMU converts virtual addresses to
> > physical DMA addresses.
> 
> I'm well aware -- what I'm saying is that the number given to
> dma_set_mask_and_coherent() does not necessarily depend only on the
> physical bus and DMA controller capabilities.
> 
> It may also depend on the MMU, since we still need to be able to map
> DMA memory in the GPU's virtual address space.

Sure, I'm probably being a bit loose with terminology. I'm not saying it
doesn't depend on the MMU capabilities, just that the physical addressing
limits are independent of the virtual addressing limits, so setting the
DMA mask based on VA_SPACE_BITS (i.e. the virtual addressing limit) seems
incorrect.
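To illustrate the distinction in a userspace sketch (constant names are
hypothetical, not taken from nova-core): the VA space width and the DMA
address width are independent quantities, and only the latter feeds into
the mask handed to dma_set_mask_and_coherent().

```rust
// Hypothetical constants for illustration only; the real driver would
// source these from per-chip data (e.g. gpu::Spec or a HAL).

/// Width of the GPU's virtual address space (an MMU capability).
const GPU_VA_SPACE_BITS: u32 = 49;

/// Width of the addresses the DMA engines can physically drive.
const DMA_ADDR_BITS: u32 = 47;

/// Build a DMA mask with the low `bits` bits set, which is the value a
/// call like dma_set_mask_and_coherent() conceptually consumes.
fn dma_mask(bits: u32) -> u64 {
    if bits >= 64 { u64::MAX } else { (1u64 << bits) - 1 }
}

fn main() {
    // The two widths differ on current GPUs: 49-bit VA, 47-bit DMA.
    assert_ne!(GPU_VA_SPACE_BITS, DMA_ADDR_BITS);
    println!("DMA mask: {:#x}", dma_mask(DMA_ADDR_BITS));
}
```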

> > So the two address spaces are different and can have different widths.
> > Indeed, most of our current GPUs have a virtual address space of 49 bits
> > whilst only supporting 47 bits of DMA address space.
> 
> Now, it seems that in this case the DMA engine is the actual limiting factor,
> but is this the case for all architectures or may we have cases where the MMU
> (or something else) becomes the limiting factor, e.g. in future architectures?

Hmm, I'm not sure I follow - the virtual addressing capabilities of the GPU
MMU are entirely independent of the DMA addressing capabilities of the GPU
and bus. For example, you can still use 49-bit GPU virtual addresses with
47 bits of DMA address space or less, and vice versa.

So the DMA address mask has nothing to do with the virtual address (i.e.
VA_SPACE) width, AFAICT? But maybe we're using slightly different
terminology?
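As a sketch of the per-generation selection John mentions above (47 bits
pre-Hopper, 52 bits on GH100+, i.e. Hopper/Blackwell and later): the
`Architecture` enum and `dma_addr_bits` helper below are hypothetical
names, not the actual nova-core HAL.

```rust
// Hypothetical architecture enum; derived PartialOrd compares variants
// in declaration order, so later chips compare greater.
#[derive(PartialEq, PartialOrd)]
enum Architecture {
    Ampere,
    Ada,
    Hopper,
    Blackwell,
}

/// DMA address width per GPU generation: GH100+ (Hopper and later)
/// widens the DMA engines' physical addressing from 47 to 52 bits.
fn dma_addr_bits(arch: &Architecture) -> u32 {
    if *arch >= Architecture::Hopper { 52 } else { 47 }
}

fn main() {
    println!("Ada: {} bits", dma_addr_bits(&Architecture::Ada));
    println!("Hopper: {} bits", dma_addr_bits(&Architecture::Hopper));
}
```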
