Dave Airlie wrote:
>> DRM_BO_HINT_DONT_FENCE is implied, and use that instead of the set pin
>> interface. We can perhaps rename it to drmBOSetStatus or something more
>> suitable.
>>
>> This will get rid of the user-space unfenced list access (which I
>> believe was the main motivation behind the set pin interface?) while
>> keeping the currently heavily used (at least in Poulsbo) functionality
>> to move out NO_EVICT scanout buffers to local memory before unpinning
>> them (to avoid VRAM and TT fragmentation, as DRI clients may still
>> reference those buffers, so they won't get destroyed before a new one
>> is allocated).
>>
>> It would also allow us to specify where we want to pin buffers. If we
>> remove the memory flag specification from drmBOCreate there's no other
>> way to do that, except running the buffer through a superioctl, which
>> isn't very nice.
>>
>> Also, it would make it much easier to unbreak i915 zone rendering and
>> derived work.
>>
>> If we can agree on this, I'll come up with a patch.
>
> Have you had a chance to look at this? I can probably spend some time on
> this to get the interface finalised..
>
> Dave.

Hi, Dave,
So, I did some quick work on this, with the result in the drm-ttm-finalize
branch. Basically, what's done is to revert the setPin interface and replace
drmBOValidate with drmBOSetStatus. drmBOSetStatus is a single-buffer
interface with the same functionality as drmBOValidate previously had,
except that it implies DRM_BO_HINT_DONT_FENCE, NO_MOVE buffers have been
blocked pending a proper implementation, and optional tiling info has been
added. The OP linked interface is gone, but the arguments are kept in drm.h
for now, for driver-specific IOCTLs.

Looking at the buffer object create interface, I think it's a better idea to
remove the need for locking and specify the memory flags, rather than to
assume a buffer in system memory before first validation. Consider a driver
that wants to put a texture buffer in VRAM: it does createbuffer, copies the
texture in, and then the buffer gets validated as part of the superioctl. If
we don't specify the VRAM flag at buffer creation, DRM will go ahead and
create a ttm, allocate a lot of pages, and then at validate do a copy and
free all those pages again. If we specify the VRAM flag, the buffer will on
creation just reserve a VRAM area and point any mapped pages to it. Buffer
creation and destruction in fixed memory areas was quite fast previously,
since no cache flushes were needed, and I'd like to keep it that way. If we
don't care or don't know how buffers will be used, we just specify
DRM_BO_FLAG_MEM_LOCAL, which gives the modified behavior currently present
in master.

But we need to remove the need for locking in the buffer manager. Locking
was previously needed:

a) To protect the ring buffer of i915, as the X server might use it. The
i915 buffer driver used it for blit moves to and from "VRAM", but there are
driver cases (eviction, page faults of non-mappable buffers) where we cannot
assume that we have the lock. Better to disable i915 blit buffer moves until
only DRM is allowed to touch the ring.
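To make the create-time placement argument concrete, here's a toy model in C
(all names and the logic are made up for illustration and only mirror the
real DRM flags; this is not actual drm/ttm code). It shows why specifying
VRAM at create time avoids the page allocation, the copy, and the free that
a system-memory-first policy would incur:

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy stand-ins for the real DRM memory flags. */
#define DRM_BO_FLAG_MEM_LOCAL (1u << 0)
#define DRM_BO_FLAG_MEM_TT    (1u << 1)
#define DRM_BO_FLAG_MEM_VRAM  (1u << 2)

struct toy_bo {
    uint32_t mem_flags; /* where the buffer currently lives */
    bool has_ttm;       /* were system pages allocated for it? */
};

/* Create-time decision: if the caller asked for VRAM up front, just
 * reserve an aperture range and skip allocating system pages;
 * otherwise back the buffer with a ttm (system pages). */
static void toy_bo_create(struct toy_bo *bo, uint32_t flags)
{
    bo->mem_flags = flags;
    bo->has_ttm = !(flags & DRM_BO_FLAG_MEM_VRAM);
}

/* Validate-time cost: moving a system-memory buffer into a fixed
 * area costs one copy, after which the pages are freed again. */
static int toy_bo_validate_moves(struct toy_bo *bo, uint32_t target)
{
    int copies = 0;
    if (!(bo->mem_flags & target)) {
        copies = 1;          /* blit/copy into the target area */
        bo->has_ttm = false; /* system pages freed after the move */
        bo->mem_flags = target;
    }
    return copies;
}
```

With the VRAM flag given at create time, the validate step is a no-op; with
only MEM_LOCAL, validating into VRAM costs a copy and throws away pages that
were allocated moments earlier, which is the waste described above.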
It must be up to the driver code to ensure that the lock is not needed for
any buffer move operations.

b) To protect against memory region allocations / validations during
take-down and VT switches, when certain memory regions are cleaned and all
buffers are swapped out.

Before this is merged, we need to disable i915 blit buffer moves and come up
with an in-kernel fix for b). For b) we should be able to use something like
a condition variable: basically, we want to interruptibly block validation
while a region is being cleaned or on takedown.

Also, the unfenced list cleaning could probably be removed. It should be
considered a bug to leave something on the unfenced list outside of the
super-ioctl.

/Thomas

--
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel