Re: [Linaro-mm-sig] [RFC 1/4] dma-buf: Add constraints sharing information
For me dmabuf and cenalloc offer two different features: one is buffer
sharing (dmabuf) and one is buffer allocation (cenalloc). You may want to
use the dmabuf sharing feature without the buffer allocation feature;
that is how drm, v4l2, ION and others use dmabuf. On top of dmabuf we
need something aware of hardware device constraints to allocate the
buffer; that will be the role of cenalloc.

Like ION, cenalloc will also provide a userspace API to allocate buffers
but, unlike ION, the selection of the allocator will be based on device
constraints, not on a heap ID. As a first step I think we can forget the
bitmask approach and only use the current fields of the device structure,
like coherent_dma_mask, dma_parms->max_segment_size or
dma_parms->segment_boundary_mask, to find a matching allocator.

What I propose is to add a match(struct device *) function to the
allocator ops, which will be called at attach time. That way cenalloc
could ask each allocator whether it is able to allocate a buffer for all
attached devices.

Regards,
Benjamin

2014-12-10 14:47 GMT+01:00 Daniel Vetter dan...@ffwll.ch:
> On Wed, Dec 10, 2014 at 07:01:16PM +0530, Sumit Semwal wrote:
> > Hi Daniel,
> >
> > Thanks a bunch for your review comments! A few comments, post our
> > discussion at LPC;
> >
> > On 12 October 2014 at 00:25, Daniel Vetter dan...@ffwll.ch wrote:
> > > On Sat, Oct 11, 2014 at 01:37:55AM +0530, Sumit Semwal wrote:
> > > > At present, struct device lacks a mechanism of exposing memory
> > > > access constraints for the device. Consequently, there is also
> > > > no mechanism to share these constraints while sharing buffers
> > > > using dma-buf.
> > > >
> > > > If we add support for sharing such constraints, we could use
> > > > that to try to collect requirements of different buffer-sharing
> > > > devices to allocate buffers from a pool that satisfies
> > > > requirements of all such devices.
> > > > This is an attempt to add this support; at the moment, only a
> > > > bitmask is added, but if, post discussion, we realise we need
> > > > more information, we could always extend the definition of
> > > > constraint.
> > > >
> > > > A new dma-buf op is also added, to allow exporters to interpret
> > > > or decide on constraint-masks on their own. A default
> > > > implementation is provided to just AND (&) all the
> > > > constraint-masks. What constitutes a constraint-mask could be
> > > > left for interpretation on a per-platform basis, while defining
> > > > some common masks.
> > > >
> > > > Signed-off-by: Sumit Semwal sumit.sem...@linaro.org
> > > > Cc: linux-ker...@vger.kernel.org
> > > > Cc: Greg Kroah-Hartman gre...@linuxfoundation.org
> > > > Cc: linux-media@vger.kernel.org
> > > > Cc: dri-de...@lists.freedesktop.org
> > > > Cc: linaro-mm-...@lists.linaro.org
> > >
> > > Just a few high-level comments; I'm between conference travel, but
> > > hopefully I can discuss this a bit at Plumbers next week.
> > >
> > > - I agree that for the insane specific cases we need something
> > >   opaque like the access constraints mask you propose here. But
> > >   for the normal case I think the existing dma constraints in
> > >   dma_parms would go a long way, and I think we should look at
> > >   Rob's RFC from aeons ago to solve those:
> > >
> > >   https://lkml.org/lkml/2012/7/19/285
> > >
> > >   With this we should be able to cover the allocation constraints
> > >   of 90% of all cases hopefully.
> > >
> > > - I'm not sure whether an opaque bitmask is good enough really; I
> > >   suspect that we also need various priorities between different
> > >   allocators, with the option that some allocators are flat-out
> > >   incompatible.
> >
> > Your/Rob's idea of figuring out the constraints wrt the max number
> > of segments the sg_list can provide should, like you said, cover
> > maybe 80-90% of the allocation constraints hopefully. The opaque
> > mask should help for the remaining 'crazy' cases, so I'll be glad to
> > merge Rob's and my approach on defining the constraints.
> >
> > I should think a little bit more about the priority idea that you
> > propose here (and in another patch), but atm I am unable to see how
> > that could help solve the finding-out-constraints problem.
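The default policy described in the patch text above -- AND (&) together
the opaque constraint-masks of every device sharing the buffer -- can be
sketched as follows. All names here are hypothetical illustrations in
standalone C, not the RFC's actual interface:

```c
/* Sketch of the proposed default constraint handling: AND together the
 * opaque, platform-defined constraint masks of all attached devices.
 * "hyp_" names are invented for this illustration. */
#include <stdint.h>

struct hyp_attachment {
    uint64_t access_constraints_mask; /* opaque, platform-defined bits */
};

/* A zero result means no single allocation can satisfy every device. */
static uint64_t hyp_default_calc_constraints(const struct hyp_attachment *a,
                                             int n)
{
    uint64_t mask = ~(uint64_t)0;
    for (int i = 0; i < n; i++)
        mask &= a[i].access_constraints_mask;
    return mask;
}
```

The zero-result case is exactly where the discussion below about
incompatible allocators and priorities comes in: a plain AND can only
report "no common allocator", not arbitrate between candidates.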
> > > - The big bummer imo with ION is that it fully side-steps the dma
> > >   api, but this proposal here also seems to add entirely new
> > >   allocators.
> >
> > This proposal does borrow this bit from ION, but once we have the
> > required changes done in the dma api itself, the allocators can just
> > become shims to the dma api allocators (eg dma_alloc_coherent etc)
> > for cases where they can be used directly, while leaving provision
> > for any crazy platform-specific allocators, without the userspace
> > having to worry about it.
> >
> > > My rough idea was that at allocate/attach time we iterate over all
> > > attached devices like in Rob's patch and compute the most
> > > constrained allocation requirements. Then we pick the underlying
> > > dma api allocator for these constraints. That probably means that
> > > we need to open up the dma api a bit. But I guess for a start we
> > > could simply try to allocate from the most constrained device.
> > > Together with the opaque bits you propose here we could even map
> > > additional crazy requirements like that an allocation must come
> > > from a specific
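Benjamin's suggestion earlier in the thread -- give each allocator a
match(struct device *) callback and, at attach time, ask every allocator
whether it can serve all attached devices -- fits naturally with the
iterate-over-attachments idea above. A rough sketch, using invented
"hyp_" names and standalone C rather than real kernel structures:

```c
/* Sketch of the proposed match() callback in allocator ops: the core
 * picks the first allocator whose match() accepts every attached
 * device.  All names are hypothetical, not cenalloc's actual API. */
#include <stddef.h>

struct hyp_device {
    unsigned long long coherent_dma_mask;
    unsigned int max_segment_size;
};

struct hyp_allocator_ops {
    const char *name;
    /* return nonzero if this allocator can satisfy dev's constraints */
    int (*match)(const struct hyp_device *dev);
};

/* Example allocator that only handles devices able to address all of
 * memory (a stand-in for a real capability check). */
static int hyp_match_64bit(const struct hyp_device *dev)
{
    return dev->coherent_dma_mask == ~0ULL;
}

/* Fallback allocator that claims to handle anything. */
static int hyp_match_any(const struct hyp_device *dev)
{
    (void)dev;
    return 1;
}

static const struct hyp_allocator_ops *
find_allocator(const struct hyp_allocator_ops *allocs, int n_allocs,
               const struct hyp_device *devs, int n_devs)
{
    for (int i = 0; i < n_allocs; i++) {
        int ok = 1;
        for (int j = 0; j < n_devs; j++)
            if (!allocs[i].match(&devs[j]))
                ok = 0;
        if (ok)
            return &allocs[i];
    }
    return NULL; /* no allocator satisfies every attached device */
}
```

In a real implementation the match() body would inspect the existing
device fields (coherent_dma_mask, dma_parms) rather than new metadata,
which is the point of Benjamin's proposal.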
Re: [Linaro-mm-sig] [RFC 1/4] dma-buf: Add constraints sharing information
On 10/11/2014 11:55 AM, Daniel Vetter wrote:
> On Sat, Oct 11, 2014 at 01:37:55AM +0530, Sumit Semwal wrote:
> > At present, struct device lacks a mechanism of exposing memory
> > access constraints for the device. [snip]
>
> [snip]
>
> - I'm not sure whether an opaque bitmask is good enough really, I
>   suspect that we also need various priorities between different
>   allocators. With the option that some allocators are flat-out
>   incompatible.

From my experience with Ion, the bitmask is okay if you have only a few
types, but as soon as there are multiple regions it gets complicated,
and when you start adding in priority via ID it really gets unwieldy.

Thanks,
Laura

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
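Laura's point can be illustrated with an ION-style heap bitmask. The
heap constants and selection rule below are invented for illustration
(they are not ION's actual numbering): every new memory region consumes
another bit, and expressing preference means encoding priority into the
ID ordering itself.

```c
/* Illustrative ION-style heap selection: the caller passes a bitmask of
 * acceptable heap IDs.  These constants are hypothetical. */
#define HEAP_SYSTEM        (1u << 0)
#define HEAP_SYSTEM_CONTIG (1u << 1)
#define HEAP_CARVEOUT_A    (1u << 2)
#define HEAP_CARVEOUT_B    (1u << 3)
/* ...each additional region needs a new bit, and "prefer carveout A,
 * fall back to system" only works if heap IDs are ordered so lower
 * bits are tried first -- priority smuggled into the ID space. */

static unsigned int pick_heap(unsigned int requested_mask,
                              unsigned int available_mask)
{
    unsigned int usable = requested_mask & available_mask;
    /* lowest set bit wins: the heap ID doubles as its priority */
    return usable & -usable;
}
```

Once regions multiply, callers must know both the bit layout and the
implicit ID ordering, which is the unwieldiness being described.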
Re: [Linaro-mm-sig] [RFC 1/4] dma-buf: Add constraints sharing information
On Mon, Oct 13, 2014 at 01:14:58AM -0700, Laura Abbott wrote:
> On 10/11/2014 11:55 AM, Daniel Vetter wrote:
> > [snip]
> >
> > - I'm not sure whether an opaque bitmask is good enough really, I
> >   suspect that we also need various priorities between different
> >   allocators. With the option that some allocators are flat-out
> >   incompatible.
>
> From my experience with Ion, the bitmask is okay if you have only a
> few types, but as soon as there are multiple regions it gets
> complicated, and when you start adding in priority via ID it really
> gets unwieldy.

My idea is to just have a priority field for all devices which have
special requirements (i.e. beyond dma masks/alignment/segment limits).
Same priority on different devices would indicate an incompatibility,
but higher priority on a parent device would indicate a possible
allocator. That way you could have a root allocator if there's a way to
unify everything. Not sure though whether this will work, since on Intel
devices we just don't have these kinds of special constraints.

Overall my idea is to reuse the existing dma allocation code in upstream
Linux as much as possible, so having a completely separate hierarchy of
memory allocators for shared buffers should imo be avoided.

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
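The priority scheme described above might select an allocator roughly as
follows. This is a flat approximation that ignores the parent-device
hierarchy mentioned in the mail; all names are assumed for illustration:

```c
/* Sketch of the priority idea (assumed semantics): each device with
 * special constraints carries a nonzero priority; a tie at the highest
 * nonzero priority means the devices are flat-out incompatible, while
 * a strictly higher priority marks the device whose allocator can
 * serve the rest. */
#include <stddef.h>

struct hyp_dev {
    const char *name;
    int alloc_priority; /* 0 = no special constraints */
};

/* Return the device whose allocator should be used, or NULL if the
 * highest nonzero priority is shared by several devices. */
static const struct hyp_dev *pick_allocator(const struct hyp_dev *devs,
                                            int n)
{
    const struct hyp_dev *best = NULL;
    int ties = 0;

    for (int i = 0; i < n; i++) {
        if (devs[i].alloc_priority == 0)
            continue;
        if (!best || devs[i].alloc_priority > best->alloc_priority) {
            best = &devs[i];
            ties = 0;
        } else if (devs[i].alloc_priority == best->alloc_priority) {
            ties++;
        }
    }
    return ties ? NULL : best;
}
```

A NULL result with all-zero priorities would mean "no special
constraints at all", i.e. the ordinary dma masks/alignment/segment
limits are sufficient, matching the preference stated above for reusing
the existing dma allocation code.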