Jerome Glisse wrote:
> On Fri, 16 May 2008 09:28:31 +0200
> Thomas Hellström <[EMAIL PROTECTED]> wrote:
>
>> Hi.
>>
>> I have a couple of questions w.r.t. the TTM vs GEM discussion:
>>
>> 1) How does pwrite() avoid clflush()es or wbinvd()s in the i915 GEM case?
>>
>> 2) Some people have stated that GPU page faults could not be implemented
>> with TTM. We've certainly dealt with that type of hardware, but found no
>> obvious reason to use that feature.
>>
>> Could someone tell me why this feature can't be used with TTM (or is it
>> that it can't be used with the current TTM driver interface?), and also
>> give a typical use-case where it might be beneficial within either the
>> GEM or TTM context?
>>
>> /Thomas
>>
>
> For GPU page faults my concern with TTM was the obligation of having a
> user mapping of the VRAM. In GEM I am not forced to map VRAM (I could do
> that in TTM too, but I have the feeling that it would need quite a big
> amount of code). Note that GEM can do VRAM mapping if you want, as stated
> by Keith and Eric; sorry if I gave the impression that GEM could not.
>
GEM's current implementation can't, and if you want to do it, my impression
is that you can't use shmemfs objects, and you need to re-invent the
move-transparent mapping functionality of drmBOs, which is, IMHO, the #1
feature of TTM.

Now if you want to map paging-capable VRAM, that's what the on-card VRAM
aperture page table is for. You stated that it's a bad idea to use it. Why?
TTM with its built-in page-table tricks should allow you to do this in a
very neat fashion:

1) When you hit a CPU page fault on an object in VRAM, allocate aperture
space for the drmBO's pages (or perhaps only one page) in the aperture
page table.

1a) If there is no room in the aperture, kill the user-space mappings of
another object already in the aperture using drm_bo_unmap_virtual(), and
reuse that object's aperture page-table entries.

2) Update the aperture page table to point to the object in VRAM being
accessed. Flush the aperture TLB.

3) Set the CPU pfn to point to the correct place in the aperture.

This would of course need an aperture space manager (drm_mm.c) and an LRU
list to make sane decisions about which objects to evict from the aperture,
but note that you can evict _any_ object; you don't need to wait for any
user to be done with it. I've appended a rough sketch of what such a fault
handler could look like at the end of this mail.

If this sounds too complicated, it's still probably less work than
implementing an efficient pwrite to VRAM, and it's also probably what the
HW engineers intended.

> Here are the assumptions I make about hw virtual address spaces; I don't
> have information on this for radeon hw, so this might not be what the
> current hw is capable of. That being said, I have the feeling that we
> will eventually get to this technology.
>

But what in TTM blocks you from using it?

> My current understanding is that on newer GPUs each client gets its own
> memory address space on the GPU. I can manage this space transparently
> based on hints from userspace, i.e. I can place pages either in RAM or
> VRAM and I can do migration when necessary. As a result I think there is
> no obligation for pages in VRAM to be adjacent to each other. Of course,
> mapping such a thing into a userspace vma becomes cumbersome.
>

No. It's easy, but it requires some work (see above), and you can do it
with page-size granularity too, although probably not as efficiently.

> This view of VRAM management is what is motivating my reluctance to map
> VRAM into a userspace vma. I do agree that on current hw mapping VRAM is
> the easiest path, but once such things pop up in hw, pread/pwrite will be
> a better approach to update & read.
>

Could you describe what steps need to be taken by the driver when the 2D
driver wants to write a single pixel to the VRAM front buffer, with the
pwrite approach, to ensure that the pixel ends up in the right location?

> Cheers,
> Jerome Glisse
>

BRs,
Thomas
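
PS. To make steps 1)-3) above a bit more concrete, here is a very rough,
hypothetical sketch of such a fault handler. drm_bo_unmap_virtual() is the
existing TTM call; the aperture_* helpers, the LRU handling and all struct
fields are invented purely for illustration (a real driver would back
aperture_alloc() with drm_mm.c and hook this into the TTM fault path):

/*
 * Sketch only: resolve a CPU fault on a buffer object living in VRAM by
 * mapping it through the on-card aperture page table.
 */
static int drm_bo_vram_fault(struct drm_buffer_object *bo,
			     unsigned long page_offset,
			     unsigned long *pfn)
{
	struct drm_aperture *ap = bo->dev->vram_aperture;

	if (!bo->aperture_offset_valid) {
		/* 1) Try to allocate aperture space for the object's pages. */
		while (aperture_alloc(ap, bo->num_pages,
				      &bo->aperture_offset) != 0) {
			struct drm_buffer_object *victim;

			/*
			 * 1a) Aperture full: take the least recently used
			 * object, kill its user-space mappings and reuse its
			 * aperture page-table entries. No need to wait for
			 * the GPU or for any user to be done with it.
			 */
			if (list_empty(&ap->lru))
				return -ENOMEM;

			victim = list_first_entry(&ap->lru,
						  struct drm_buffer_object,
						  aperture_lru);
			drm_bo_unmap_virtual(victim);
			aperture_free(ap, victim->aperture_offset,
				      victim->num_pages);
			victim->aperture_offset_valid = 0;
			list_del_init(&victim->aperture_lru);
		}

		/*
		 * 2) Point the aperture page-table entries at the object's
		 * pages in VRAM and flush the aperture TLB.
		 */
		aperture_program_ptes(ap, bo->aperture_offset,
				      bo->vram_offset, bo->num_pages);
		aperture_flush_tlb(ap);
		bo->aperture_offset_valid = 1;
	}

	/* Keep an LRU ordering so eviction decisions stay sane. */
	list_move_tail(&bo->aperture_lru, &ap->lru);

	/* 3) Hand back the pfn of the faulting page inside the aperture BAR. */
	*pfn = (ap->bar_base >> PAGE_SHIFT) + bo->aperture_offset + page_offset;
	return 0;
}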