Re: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function
On Tue, Apr 18, 2017 at 09:42:20AM -0600, Logan Gunthorpe wrote:
> On 18/04/17 08:27 AM, Konrad Rzeszutek Wilk wrote:
> > Interesting that you didn't CC any of the maintainers. Could you
> > do that in the future please?
>
> Please read the cover letter. The distribution list for the patchset
> would have been way too large to cc every maintainer (even as limited as
> it was, I had mailing lists yelling at me). My plan was to get buy in

I am not sure if you know, but you can add on each patch the respective
maintainer via 'CC'. That way you can have certain maintainers CCed only
on the subsystems they cover. You put it after (or before) your SoB and
git send-email happily picks it up.

It does mean that for every patch you have to run something like this:

$ more add_cc
#!/bin/bash
git diff HEAD^.. > /tmp/a
echo "---"
scripts/get_maintainer.pl --no-l /tmp/a | while read file
do
	echo "Cc: $file"
done

Or such.

> for the first patch, get it merged and resend the rest independently to
> their respective maintainers. Of course, though, I'd be open to other
> suggestions.
> >>>
> >>> Signed-off-by: Logan Gunthorpe <log...@deltatee.com>
> >>> ---
> >>>  drivers/block/xen-blkfront.c | 33 +++--
> >>>  1 file changed, 27 insertions(+), 6 deletions(-)
> >>>
> >>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> >>> index 5067a0a..7dcf41d 100644
> >>> --- a/drivers/block/xen-blkfront.c
> >>> +++ b/drivers/block/xen-blkfront.c
> >>> @@ -807,8 +807,19 @@ static int blkif_queue_rw_req(struct request *req,
> >>> struct blkfront_ring_info *ri
> >>> 	BUG_ON(sg->offset + sg->length > PAGE_SIZE);
> >>>
> >>> 	if (setup.need_copy) {
> >>> -		setup.bvec_off = sg->offset;
> >>> -		setup.bvec_data = kmap_atomic(sg_page(sg));
> >>> +		setup.bvec_off = 0;
> >>> +		setup.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
> >>> +		if (IS_ERR(setup.bvec_data)) {
> >>> +			/*
> >>> +			 * This should really never happen unless
> >>> +			 * the code is changed to use memory that is
> >>> +			 * not mappable in the sg. Seeing there is a
> >>> +			 * questionable error path out of here,
> >>> +			 * we WARN.
> >>> +			 */
> >>> +			WARN(1, "Non-mappable memory used in sg!");
> >>> +			return 1;
> >>> +		}
> >> ...
> >>
> >> Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
> >> inside sg_map().
>
> Thanks, that's a good suggestion. I'll make the change for v2.
>
> Logan
Re: [PATCH 16/22] xen-blkfront: Make use of the new sg_map helper function
On Tue, Apr 18, 2017 at 02:13:59PM +, David Laight wrote:
> From: Logan Gunthorpe
> > Sent: 13 April 2017 23:05
> > Straightforward conversion to the new helper, except due to
> > the lack of error path, we have to warn if unmapable memory
> > is ever present in the sgl.

Interesting that you didn't CC any of the maintainers. Could you
do that in the future please?

> >
> > Signed-off-by: Logan Gunthorpe
> > ---
> >  drivers/block/xen-blkfront.c | 33 +++--
> >  1 file changed, 27 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 5067a0a..7dcf41d 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -807,8 +807,19 @@ static int blkif_queue_rw_req(struct request *req,
> > struct blkfront_ring_info *ri
> > 	BUG_ON(sg->offset + sg->length > PAGE_SIZE);
> >
> > 	if (setup.need_copy) {
> > -		setup.bvec_off = sg->offset;
> > -		setup.bvec_data = kmap_atomic(sg_page(sg));
> > +		setup.bvec_off = 0;
> > +		setup.bvec_data = sg_map(sg, SG_KMAP_ATOMIC);
> > +		if (IS_ERR(setup.bvec_data)) {
> > +			/*
> > +			 * This should really never happen unless
> > +			 * the code is changed to use memory that is
> > +			 * not mappable in the sg. Seeing there is a
> > +			 * questionable error path out of here,
> > +			 * we WARN.
> > +			 */
> > +			WARN(1, "Non-mappable memory used in sg!");
> > +			return 1;
> > +		}
> ...
>
> Perhaps add a flag to mark failure as 'unexpected' and trace (and panic?)
> inside sg_map().
>
> 	David

___
Linux-nvdimm mailing list
linux-nvd...@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
Re: [PATCH 1/2] [RFC PATCH v6] dmabuf-sync: Add a buffer synchronization framework
> +EXPORT_SYMBOL(is_dmabuf_sync_supported);

_GPL ?

I would also prefix it with 'dmabuf_is_sync_supported' just to make all
of the library calls start with 'dmabuf'.

Seems better. Will change it to dmabuf_is_sync_supported, and use
EXPORT_SYMBOL_GPL.

One thing though - while I suggest that you use the GPL variant, I think
you should check who the consumers are. As in, if nvidia wants to use
it, it might make their lawyers unhappy - and in turn means that their
engineers won't be able to use these symbols. So - if there is a strong
argument to not have it GPL - then please say so.

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 1/2] [RFC PATCH v6] dmabuf-sync: Add a buffer synchronization framework
On Tue, Aug 13, 2013 at 06:19:35PM +0900, Inki Dae wrote:

This patch adds a buffer synchronization framework based on DMA BUF [1]
and on ww-mutexes [2] for its lock mechanism. The purpose of this
framework is to provide not only buffer access control to CPU and DMA
but also easy-to-use interfaces for device drivers and user
applications. This framework can be used for all dma devices using
system memory as dma buffer, especially for most ARM based SoCs.

Changelog v6:
- Fix sync lock to multiple reads.
- Add select system call support.
  . Wake up poll_wait when a dmabuf is unlocked.
- Remove unnecessary use of mutex lock.
- Add private backend ops callbacks.
  . This ops has one callback for device drivers to clean up their
    sync object resource when the sync object is freed. For this,
    device drivers should implement the free callback properly.
- Update document file.

Changelog v5:
- Remove a dependence on reservation_object: the reservation_object is
  used to hook up to ttm and dma-buf for easy sharing of reservations
  across devices. However, the dmabuf sync can be used for all dma
  devices; v4l2 and drm based drivers, so doesn't need the
  reservation_object anymore. With regard to this, it adds 'void *sync'
  to the dma_buf structure.
- All patches are rebased on mainline, Linux v3.10.

Changelog v4:
- Add user side interface for buffer synchronization mechanism and
  update descriptions related to the user side interface.

Changelog v3:
- remove cache operation relevant codes and update document file.

Changelog v2:
- use atomic_add_unless to avoid potential bug.
- add a macro for checking valid access type.
- code clean.

The mechanism of this framework has the following steps,

1. Register dmabufs to a sync object - A task gets a new sync object
   and can add one or more dmabufs that the task wants to access. This
   registering should be performed when a device context or an event
   context such as a page flip event is created, or before CPU accesses
   a shared buffer.

	dma_buf_sync_get(a sync object, a dmabuf);

2. Lock a sync object - A task tries to lock all dmabufs added in its
   own sync object. Basically, the lock mechanism uses ww-mutexes [1]
   to avoid deadlock issues and race conditions between CPU and CPU,
   CPU and DMA, and DMA and DMA. Taking a lock means that others cannot
   access all locked dmabufs until the task that locked the
   corresponding dmabufs unlocks all the locked dmabufs. This locking
   should be performed before DMA or CPU accesses these dmabufs.

	dma_buf_sync_lock(a sync object);

3. Unlock a sync object - The task unlocks all dmabufs added in its own
   sync object. The unlock means that the DMA or CPU accesses to the
   dmabufs have been completed so that others may access them. This
   unlocking should be performed after DMA or CPU has completed
   accesses to the dmabufs.

	dma_buf_sync_unlock(a sync object);

4. Unregister one or all dmabufs from a sync object - A task
   unregisters the given dmabufs from the sync object. This means that
   the task doesn't want to lock the dmabufs anymore. The unregistering
   should be performed after DMA or CPU has completed accesses to the
   dmabufs, or when dma_buf_sync_lock() fails.

	dma_buf_sync_put(a sync object, a dmabuf);
	dma_buf_sync_put_all(a sync object);

The described steps may be summarized as:

	get - lock - CPU or DMA access to a buffer/s - unlock - put

This framework includes the following two features.

1. read (shared) and write (exclusive) locks - A task is required to
   declare the access type when the task tries to register a dmabuf:
   READ, WRITE, READ DMA, or WRITE DMA. The below is example code,

	struct dmabuf_sync *sync;

	sync = dmabuf_sync_init(...);
	...
	dmabuf_sync_get(sync, dmabuf, DMA_BUF_ACCESS_R);
	...

   And the below can be used as access types:

	DMA_BUF_ACCESS_R     - CPU will access a buffer for read.
	DMA_BUF_ACCESS_W     - CPU will access a buffer for read or write.
	DMA_BUF_ACCESS_DMA_R - DMA will access a buffer for read.
	DMA_BUF_ACCESS_DMA_W - DMA will access a buffer for read or write.

2. Mandatory resource releasing - a task cannot hold a lock
   indefinitely. A task may never try to unlock a buffer after taking a
   lock to the buffer. In this case, a timer handler for the
   corresponding sync object is called in five (default) seconds, and
   then the timed-out buffer is unlocked by a work queue handler to
   avoid lockups and to enforce release of the buffer's resources.

The below is how to use the interfaces for a device driver:

1. Allocate and initialize a sync object:

	static void xxx_dmabuf_sync_free(void *priv)
	{
Re: [RFC v3 1/2] dma-buf: Introduce dma buffer sharing mechanism
On Fri, Dec 23, 2011 at 03:22:35PM +0530, Semwal, Sumit wrote:
> Hi Konrad,
>
> On Tue, Dec 20, 2011 at 10:06 PM, Konrad Rzeszutek Wilk
> konrad.w...@oracle.com wrote:
> > On Mon, Dec 19, 2011 at 02:03:30PM +0530, Sumit Semwal wrote:
> > > This is the first step in defining a dma buffer sharing mechanism.
> > >
> > > A new buffer object dma_buf is added, with operations and API to
> > > allow easy sharing of this buffer object across devices.
> > >
> > > The framework allows:
> > > - different devices to 'attach' themselves to this buffer, to
> > >   facilitate backing storage negotiation, using dma_buf_attach() API.
> >
> > Any thoughts of adding facility to track them? So you can see who is
> > using what?
>
> Not for version 1, but it would be a useful addition once we start
> using this mechanism.

OK. Looking forward to V2.

> > > - association of a file pointer with each user-buffer and associated
> > >   allocator-defined operations on that buffer. This operation is
> > >   called the 'export' operation.
> >
> > 'create'? or 'alloc'? export implies an import somewhere and I don't
> > think that is the case here.
>
> I will rephrase it as suggested by Rob as well.

> > > - this exported buffer-object to be shared with the other entity by
> > >   asking for its 'file-descriptor (fd)', and sharing the fd across.
> > > - a received fd to get the buffer object back, where it can be
> > >   accessed using the associated exporter-defined operations.
> > > - the exporter and user to share the scatterlist using map_dma_buf
> > >   and unmap_dma_buf operations. Atleast one 'attach()' call is
> > >   required to be made prior to calling the map_dma_buf() operation.
> >
> > for the whole memory region or just for the device itself?
>
> Rob has very eloquently and kindly explained it in his reply.

Can you include his words of wisdom in the git description?

snip

> > > +/*
> > > + * is_dma_buf_file - Check if struct file* is associated with dma_buf
> > > + */
> > > +static inline int is_dma_buf_file(struct file *file)
> > > +{
> > > +	return file->f_op == &dma_buf_fops;
> > > +}
> > > +
> > > +/**
> >
> > Wrong kerneldoc.
>
> I looked into scripts/kernel-doc, and
> Documentation/kernel-doc-nano-HOWTO.txt - both these places mention
> that the kernel-doc comments have to start with /**, and I couldn't
> spot an error in what's wrong with my usage - would you please
> elaborate on what you think is not right?

The issue I had was with '/**' but let me double-check where I learned
that /** was bad. Either way, it is a style-guide thing and the
Documentation/* trumps what I recall.

snip

> > > +/**
> > > + * struct dma_buf_attachment - holds device-buffer attachment data
> >
> > OK, but what is the purpose of it?
>
> I will add that in the comments.

> > > + * @dmabuf: buffer for this attachment.
> > > + * @dev: device attached to the buffer.
> >
> > ^^^ this
> >
> > > + * @node: list_head to allow manipulation of list of dma_buf_attachment.
> >
> > Just say: list of dma_buf_attachment'
>
> ok.

> > > + * @priv: exporter-specific attachment data.
> >
> > That exporter-specific.. brings to my mind custom declaration forms.
> > But maybe that is me. :)
>
> well, in context of dma-buf 'exporter', it makes sense.

Or just private contents of the backend driver. But the naming is not
that important to inhibit this patch from being merged.

> > > + */
> > > +struct dma_buf_attachment {
> > > +	struct dma_buf *dmabuf;
> > > +	struct device *dev;
> > > +	struct list_head node;
> > > +	void *priv;
> > > +};
> >
> > Why don't you move the declaration of this below 'struct dma_buf'?
> > It would be easier than to read this structure..
>
> I could do that, but then anyways I will have to do a
> forward-declaration of dma_buf_attachment, since I have to use it in
> dma_buf_ops. If it improves readability, I am happy to move it below
> struct dma_buf.

It is more of just making the readability easier. As in reading from top
to bottom. But if it is too ugly, don't bother.

> > > +
> > > +/**
> > > + * struct dma_buf_ops - operations possible on struct dma_buf
> > > + * @attach: allows different devices to 'attach' themselves to the given
> >
> > register?
> >
> > > + *	     buffer. It might return -EBUSY to signal that backing storage
> > > + *	     is already allocated and incompatible with the requirements
> >
> > Wait.. allocated or attached?
>
> This and above comment on 'register' are already answered by Rob in his
> explanation of the sequence in earlier reply. [the Documentation patch
> [2/2] also tries to explain it]

OK. Might want to mention the user to look in the Documentation, in case
you don't already have it.

> > > + *	     of requesting device. [optional]
> >
> > What is optional? The return value? Or the 'attach' call? If the
> > latter, say that in the first paragraph.
>
> ok, sure. it is meant for the attach op.

> > > + * @detach: detach a given device from this buffer. [optional]
> > > + * @map_dma_buf: returns list of scatter pages allocated, increases
> > > + *		    usecount of the buffer. Requires atleast one attach to
> > > + *		    be called before. Returned sg list should already be mapped
Re: [RFC v3 1/2] dma-buf: Introduce dma buffer sharing mechanism
On Mon, Dec 19, 2011 at 02:03:30PM +0530, Sumit Semwal wrote:
> This is the first step in defining a dma buffer sharing mechanism.
>
> A new buffer object dma_buf is added, with operations and API to allow
> easy sharing of this buffer object across devices.
>
> The framework allows:
> - different devices to 'attach' themselves to this buffer, to
>   facilitate backing storage negotiation, using dma_buf_attach() API.

Any thoughts of adding facility to track them? So you can see who is
using what?

> - association of a file pointer with each user-buffer and associated
>   allocator-defined operations on that buffer. This operation is
>   called the 'export' operation.

'create'? or 'alloc'? export implies an import somewhere and I don't
think that is the case here.

> - this exported buffer-object to be shared with the other entity by
>   asking for its 'file-descriptor (fd)', and sharing the fd across.
> - a received fd to get the buffer object back, where it can be
>   accessed using the associated exporter-defined operations.
> - the exporter and user to share the scatterlist using map_dma_buf and
>   unmap_dma_buf operations. Atleast one 'attach()' call is required to
>   be made prior to calling the map_dma_buf() operation.

for the whole memory region or just for the device itself?

> Couple of building blocks in map_dma_buf() are added to ease
> introduction of sync'ing across exporter and users, and late
> allocation by the exporter.
>
> More details are there in the documentation patch.
>
> This is based on design suggestions from many people at the
> mini-summits [1], most notably from Arnd Bergmann a...@arndb.de, Rob
> Clark r...@ti.com and Daniel Vetter dan...@ffwll.ch.
>
> The implementation is inspired from proof-of-concept patch-set from
> Tomasz Stanislawski t.stanisl...@samsung.com, who demonstrated buffer
> sharing between two v4l2 devices.
> [2]
>
> [1]: https://wiki.linaro.org/OfficeofCTO/MemoryManagement
> [2]: http://lwn.net/Articles/454389
>
> Signed-off-by: Sumit Semwal sumit.sem...@linaro.org
> Signed-off-by: Sumit Semwal sumit.sem...@ti.com
> ---
>  drivers/base/Kconfig    |  10 ++
>  drivers/base/Makefile   |   1 +
>  drivers/base/dma-buf.c  | 289 +++
>  include/linux/dma-buf.h | 172 
>  4 files changed, 472 insertions(+), 0 deletions(-)
>  create mode 100644 drivers/base/dma-buf.c
>  create mode 100644 include/linux/dma-buf.h
>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 21cf46f..8a0e87f 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -174,4 +174,14 @@ config SYS_HYPERVISOR
>
>  source "drivers/base/regmap/Kconfig"
>
> +config DMA_SHARED_BUFFER
> +	bool "Buffer framework to be shared between drivers"
> +	default n
> +	select ANON_INODES
> +	help
> +	  This option enables the framework for buffer-sharing between
> +	  multiple drivers. A buffer is associated with a file using driver
> +	  APIs extension; the file's descriptor can then be passed on to other
> +	  driver.
> +
>  endmenu
> diff --git a/drivers/base/Makefile b/drivers/base/Makefile
> index 99a375a..d0df046 100644
> --- a/drivers/base/Makefile
> +++ b/drivers/base/Makefile
> @@ -8,6 +8,7 @@
>  obj-$(CONFIG_DEVTMPFS)	+= devtmpfs.o
>  obj-y			+= power/
>  obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
>  obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
> +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
>  obj-$(CONFIG_ISA)	+= isa.o
>  obj-$(CONFIG_FW_LOADER) += firmware_class.o
>  obj-$(CONFIG_NUMA)	+= node.o
> diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
> new file mode 100644
> index 000..e920709
> --- /dev/null
> +++ b/drivers/base/dma-buf.c
> @@ -0,0 +1,289 @@
> +/*
> + * Framework for buffer objects that can be shared across devices/subsystems.
> + *
> + * Copyright(C) 2011 Linaro Limited. All rights reserved.
> + * Author: Sumit Semwal sumit.sem...@ti.com
> + *
> + * Many thanks to linaro-mm-sig list, and specially
> + * Arnd Bergmann a...@arndb.de, Rob Clark r...@ti.com and
> + * Daniel Vetter dan...@ffwll.ch for their support in creation and
> + * refining of this idea.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see http://www.gnu.org/licenses/.
> + */
> +
> +#include <linux/fs.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include <linux/anon_inodes.h>
> +#include <linux/export.h>
> +
> +static inline int is_dma_buf_file(struct file *);
> +
> +static int
Re: [RFC v2 1/2] dma-buf: Introduce dma buffer sharing mechanism
On Fri, Dec 02, 2011 at 02:27:31PM +0530, Sumit Semwal wrote:
> This is the first step in defining a dma buffer sharing mechanism.
>
> A new buffer object dma_buf is added, with operations and API to allow
> easy sharing of this buffer object across devices.
>
> The framework allows:
> - different devices to 'attach' themselves to this buffer, to
>   facilitate backing storage negotiation, using dma_buf_attach() API.
> - association of a file pointer with each user-buffer and associated
>   allocator-defined operations on that buffer. This operation is
>   called the 'export' operation.
> - this exported buffer-object to be shared with the other entity by
>   asking for its 'file-descriptor (fd)', and sharing the fd across.
> - a received fd to get the buffer object back, where it can be
>   accessed using the associated exporter-defined operations.
> - the exporter and user to share the scatterlist using map_dma_buf and
>   unmap_dma_buf operations. Atleast one 'attach()' call is required to
>   be made prior to calling the map_dma_buf() operation.
>
> Couple of building blocks in map_dma_buf() are added to ease
> introduction of sync'ing across exporter and users, and late
> allocation by the exporter.
>
> *OPTIONALLY*: mmap() file operation is provided for the associated
> 'fd', as wrapper over the optional allocator defined mmap(), to be
> used by devices that might need one.
>
> More details are there in the documentation patch.
>
> This is based on design suggestions from many people at the
> mini-summits [1], most notably from Arnd Bergmann a...@arndb.de, Rob
> Clark r...@ti.com and Daniel Vetter dan...@ffwll.ch.
>
> The implementation is inspired from proof-of-concept patch-set from
> Tomasz Stanislawski t.stanisl...@samsung.com, who demonstrated buffer
> sharing between two v4l2 devices. [2]
>
> [1]: https://wiki.linaro.org/OfficeofCTO/MemoryManagement
> [2]: http://lwn.net/Articles/454389
>
> Signed-off-by: Sumit Semwal sumit.sem...@linaro.org
> Signed-off-by: Sumit Semwal sumit.sem...@ti.com

You have a clone? You only need one SOB.
> ---
>  drivers/base/Kconfig    |  10 ++
>  drivers/base/Makefile   |   1 +
>  drivers/base/dma-buf.c  | 290 +++
>  include/linux/dma-buf.h | 176 
>  4 files changed, 477 insertions(+), 0 deletions(-)
>  create mode 100644 drivers/base/dma-buf.c
>  create mode 100644 include/linux/dma-buf.h
>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 21cf46f..07d8095 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -174,4 +174,14 @@ config SYS_HYPERVISOR
>
>  source "drivers/base/regmap/Kconfig"
>
> +config DMA_SHARED_BUFFER
> +	bool "Buffer framework to be shared between drivers"
> +	default n
> +	depends on ANON_INODES
> +	help
> +	  This option enables the framework for buffer-sharing between
> +	  multiple drivers. A buffer is associated with a file using driver
> +	  APIs extension; the file's descriptor can then be passed on to other
> +	  driver.
> +
>  endmenu
> diff --git a/drivers/base/Makefile b/drivers/base/Makefile
> index 99a375a..d0df046 100644
> --- a/drivers/base/Makefile
> +++ b/drivers/base/Makefile
> @@ -8,6 +8,7 @@
>  obj-$(CONFIG_DEVTMPFS)	+= devtmpfs.o
>  obj-y			+= power/
>  obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
>  obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
> +obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
>  obj-$(CONFIG_ISA)	+= isa.o
>  obj-$(CONFIG_FW_LOADER) += firmware_class.o
>  obj-$(CONFIG_NUMA)	+= node.o
> diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
> new file mode 100644
> index 000..4b9005e
> --- /dev/null
> +++ b/drivers/base/dma-buf.c
> @@ -0,0 +1,290 @@
> +/*
> + * Framework for buffer objects that can be shared across devices/subsystems.
> + *
> + * Copyright(C) 2011 Linaro Limited. All rights reserved.
> + * Author: Sumit Semwal sumit.sem...@ti.com

OK, so the SOB should be from @ti.com then.

> + *
> + * Many thanks to linaro-mm-sig list, and specially
> + * Arnd Bergmann a...@arndb.de, Rob Clark r...@ti.com and
> + * Daniel Vetter dan...@ffwll.ch for their support in creation and
> + * refining of this idea.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see http://www.gnu.org/licenses/.
> + */
> +
> +#include <linux/fs.h>
> +#include <linux/slab.h>
> +#include <linux/dma-buf.h>
> +#include <linux/anon_inodes.h>
> +#include <linux/export.h>
> +
> +static inline int is_dma_buf_file(struct
Re: [PATCH v2 -resend#1 1/1] V4L: videobuf, don't use dma addr as physical
On Mon, Feb 28, 2011 at 10:37:02AM +0100, Jiri Slaby wrote:
> mem->dma_handle is a dma address obtained by dma_alloc_coherent which
> needn't be a physical address in presence of IOMMU. So ensure we are

Can you add a comment why you are fixing it? Is there a bug report for
this? Under what conditions did you expose this fault?

You also might want to mention that needn't be a physical address as a
hardware IOMMU can (and most likely will) return a bus address where
physical != bus address.

Otherwise you can stick 'Reviewed-by: Konrad Rzeszutek Wilk
konrad.w...@oracle.com' on it.

> remapping (remap_pfn_range) the right page in __videobuf_mmap_mapper
> by using virt_to_phys(mem->vaddr) and not mem->dma_handle.
>
> While at it, use PFN_DOWN instead of explicit shift.
>
> Signed-off-by: Jiri Slaby jsl...@suse.cz
> Cc: Mauro Carvalho Chehab mche...@infradead.org
> Cc: Konrad Rzeszutek Wilk konrad.w...@oracle.com
> ---
>  drivers/media/video/videobuf-dma-contig.c | 2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/media/video/videobuf-dma-contig.c b/drivers/media/video/videobuf-dma-contig.c
> index c969111..19d3e4a 100644
> --- a/drivers/media/video/videobuf-dma-contig.c
> +++ b/drivers/media/video/videobuf-dma-contig.c
> @@ -300,7 +300,7 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	retval = remap_pfn_range(vma, vma->vm_start,
> -				 mem->dma_handle >> PAGE_SHIFT,
> +				 PFN_DOWN(virt_to_phys(mem->vaddr)),
>  				 size, vma->vm_page_prot);
>  	if (retval) {
>  		dev_err(q->dev, "mmap: remap failed with error %d. ", retval);
> --
> 1.7.4.1
Re: Memory sharing issue by application on V4L2 based device driver with system mmu.
.. snip..
> But 64KB or 1MB physically contiguous memory is better than 4KB page
> in the point of performance.

Could you explain that in more details please? I presume you are
talking about a CPU that has a MMU unit, right?
Re: [PATCH 1/1] V4L: videobuf, don't use dma addr as physical
On Mon, Dec 06, 2010 at 02:30:52PM +0100, Jiri Slaby wrote:
> mem->dma_handle is a dma address obtained by dma_alloc_coherent which
> needn't be a physical address in presence of IOMMU. So ensure we are

Could you use PFN_DOWN(virt_to_phys(mem->vaddr))?

> remapping (remap_pfn_range) the right page in __videobuf_mmap_mapper
> by using virt_to_phys(mem->vaddr) and not mem->dma_handle.
>
> Signed-off-by: Jiri Slaby jsl...@suse.cz
> Cc: Mauro Carvalho Chehab mche...@infradead.org
> ---
>  drivers/media/video/videobuf-dma-contig.c | 2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/media/video/videobuf-dma-contig.c b/drivers/media/video/videobuf-dma-contig.c
> index c969111..8d778aa 100644
> --- a/drivers/media/video/videobuf-dma-contig.c
> +++ b/drivers/media/video/videobuf-dma-contig.c
> @@ -300,7 +300,7 @@ static int __videobuf_mmap_mapper(struct videobuf_queue *q,
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>  	retval = remap_pfn_range(vma, vma->vm_start,
> -				 mem->dma_handle >> PAGE_SHIFT,
> +				 virt_to_phys(mem->vaddr) >> PAGE_SHIFT,
>  				 size, vma->vm_page_prot);
>  	if (retval) {
>  		dev_err(q->dev, "mmap: remap failed with error %d. ", retval);
> --
> 1.7.3.1

___
iommu mailing list
io...@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/iommu
Re: [RFCv5 7/9] mm: vcm: Virtual Contiguous Memory framework added
> +* The Virtual Contiguous Memory Manager
> +
> +The VCMM was built to solve the system-wide memory mapping issues that
> +occur when many bus-masters have IOMMUs.
> +
> +An IOMMU maps device addresses to physical addresses. It also
> +insulates the system from spurious or malicious device bus
> +transactions and allows fine-grained mapping attribute control. The
> +Linux kernel core does not contain a generic API to handle IOMMU
> +mapped memory; device driver writers must implement device specific
> +code to interoperate with the Linux kernel core. As the number of
> +IOMMUs increases, coordinating the many address spaces mapped by all
> +discrete IOMMUs becomes difficult without in-kernel support.

Looking at the set of calls and the examples, it struck me as similar to
the agp.h API (drivers/char/agp/). It has allocate, bind, de-allocate.
Naturally it has no bus device mapping, but the DRM code that utilizes
the AGP API bridge has that: drivers/gpu/drm/ati_pcigart.c (DRM API).
Then there are the radeon and nouveau drivers that program the GPU GART,
bypassing the AGP API but still utilizing the DMA API. The nice ASCII
art you included in your writeup looks to cover those use cases.

What I am ineptly trying to say is that we have a bunch of APIs that do
this, and in cases where they are inadequate (or just look to be a
one-off solution) we have functions that are similar in API view but
differ in implementation (check out how nouveau programs its VMM
compared to how radeon does it). Your API offers a way to unify all of
that, but it looks to be an API on top of the other ones. You would
still have to implement different mechanisms for utilizing this, say on
the radeon driver: the AGP API, or the home-grown GPU GART programming,
and then the DMA API wrapped around them all. Oh, and the DMA API sits
on top of the IOMMU API.

I am not sure how this would solve the proliferation of different APIs -
it sounds like it just adds another piece where you still have to
shoe-in the other APIs.

But I do understand the problem you are facing. You want to switch to
different IOMMUs for different drivers using only one API. Folks have
been asking about whether it makes sense to include your algorithms in
expanding the memory allocator to have huge chunks of memory reserved
for specific drivers. But that does not solve the IOMMU problem.

So my question is, would it perhaps make sense to concentrate on the DMA
API? Could we expand it so that for specific devices it sets the DMA API
to use a different IOMMU? If you look at the Calgary IOMMU - that is a
perfect example of your problem - it is only used by the specific
devices which fall within its control; all other DMA operations are
handled by the SWIOTLB (the default IOMMU).

Would it be possible to do something similar, so that when CMA is
activated it scans the region list, finds which devices can share the
same memory region, and sets the struct device DMA API ops to point to a
CMA IOMMU which is happy to utilize the memory allocator's reserved
chunks of memory (that would be the code lifted from your CMA and stuck
in the memory allocator), allowing different drivers (those on the
whitelist) to share the same region?

This has the extra benefit that it would allow all drivers that utilize
the DMA API to use this without any extra VCMM/CMA API calls.

P.S. I am quite X86 specific here - I don't know much about ARM (is
there some desktop box I can buy with it?), so I am probably missing
some details.
Re: [PATCH/RFCv4 2/6] mm: cma: Contiguous Memory Allocator added
On Fri, Aug 20, 2010 at 11:50:42AM +0200, Michal Nazarewicz wrote: The Contiguous Memory Allocator framework is a set of APIs for allocating physically contiguous chunks of memory. Various chips require contiguous blocks of memory to operate. Those chips include devices such as cameras, hardware video decoders and encoders, etc. I am not that familiar with how StrongARM works, and I took a bit look at the arch/arm/mach-s* and then some of the drivers/video/media/video/cx88 to get an idea how the hardware video decoders would work this. What I got from this patch review is that you are writting an IOMMU that is on steroids. It essentially knows that this device and that device can both share the same region, and it has fancy plugin system to deal with fragmentation and offers an simple API for other to write their own allocators. Even better, during init, the sub-platform can use cma_early_regions_reserve(func) to register their own function for reserving large regions of memory. Which from my experience (with Xen) means that there is a mechanism in place to have it setup contingous regions using sub-platform code. This is how I think it works, but I am not sure if I got it right. From looking at 'cma_alloc' and 'cma_alloc_from_region' - both return an dma_addr_t, which is what is usually feed in the DMA API. And looking at the cx88 driver I see it using that API.. I do understand that under ARM platform you might not have a need for DMA at all, and you use the 'dma_addr_t' just as handles, but for other platforms this would be used. So here is the bit where I am confused. Why not have this as Software IOMMU that would utilize the IOMMU API? There would be some technical questions to be answered (such as, what to do when you have another IOMMU and can you stack them on top of each other). A light review below: .. +*** Allocator operations + +Creating an allocator for CMA needs four functions to be +implemented. 
> +
> +
> +The first two are used to initialise an allocator far given driver
                                                     ^^- for
> +and clean up afterwards:
> +
> +  int  cma_foo_init(struct cma_region *reg);
> +  void cma_foo_cleanup(struct cma_region *reg);
> +
> +The first is called when allocater is attached to region. The
> +cma_region structure has saved starting address of the region as

Who saved the starting address? Is that the job of cma_foo_init?

> +well as its size. Any data that allocate associated with the
> +region can be saved in private_data field.

..
> +The name (foo) will be available to use with command line
> +argument.

No command line arguments.

> +
> +*** Integration with platform
> +
> +There is one function that needs to be called form platform
> +initialisation code. That is the cma_early_regions_reserve()
> +function:
> +
> +  void cma_early_regions_reserve(int (*reserve)(struct cma_region *reg));
> +
> +It traverses list of all of the regions given on command line and

Ditto.

> +reserves memory for them. The only argument is a callback
> +function used to reserve the region. Passing NULL as the argument
> +makes the function use cma_early_region_reserve() function which
> +uses bootmem and memblock for allocating.
> +
> +Alternatively, platform code could traverse the cma_early_regions
> +list by itself but this should not be necessary.
> +

..
> +/**
> + * cma_alloc - allocates contiguous chunk of memory.
> + * @dev:       The device to perform allocation for.
> + * @type:      A type of memory to allocate. Platform may define
> + *             several different types of memory and device drivers
> + *             can then request chunks of different types. Usually it's
> + *             safe to pass NULL here which is the same as passing
> + *             common.
> + * @size:      Size of the memory to allocate in bytes.
> + * @alignment: Desired alignment in bytes. Must be a power of two or
> + *             zero. If alignment is less then a page size it will be
> + *             set to page size. If unsure, pass zero here.
> + *
> + * On error returns a negative error cast to dma_addr_t. Use
> + * IS_ERR_VALUE() to check if returned value is indeed an error.
> + * Otherwise physical address of the chunk is returned.

Should be 'bus address'. On some platforms the physical != PCI address.

..
> +/** Lower lever API */
> +
> +/**
> + * cma_alloc_from - allocates contiguous chunk of memory from named regions.
> + * @regions: Comma separated list of region names. Terminated by NUL

I think you mean 'NULL'.

> + *           byte or a semicolon.

Uh, really? Why? Why not just simplify your life and make it \0?

..
> +/** Allocators API **/
> +
> +/**
> + * struct cma_chunk - an allocated contiguous chunk of memory.
> + * @start: Physical address in bytes.
> + * @size:  Size
Re: [PATCH/RFCv4 3/6] mm: cma: Added SysFS support
On Fri, Aug 20, 2010 at 11:50:43AM +0200, Michal Nazarewicz wrote:
> The SysFS development interface lets one change the map attribute at
> run time as well as observe what regions have been reserved.
>
> Signed-off-by: Michal Nazarewicz m.nazarew...@samsung.com
> Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
> ---
>  .../ABI/testing/sysfs-kernel-mm-contiguous |  53 +++
>  Documentation/contiguous-memory.txt        |   4 +
>  include/linux/cma.h                        |   7 +
>  mm/Kconfig                                 |  18 +-
>  mm/cma.c                                   | 345 +++-
>  5 files changed, 423 insertions(+), 4 deletions(-)
>  create mode 100644 Documentation/ABI/testing/sysfs-kernel-mm-contiguous
>
> diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-contiguous b/Documentation/ABI/testing/sysfs-kernel-mm-contiguous
> new file mode 100644
> index 000..8df15bc
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-kernel-mm-contiguous
> @@ -0,0 +1,53 @@
> +What:    /sys/kernel/mm/contiguous/
> +Date:    August 2010
> +Contact: Michal Nazarewicz m.nazarew...@samsung.com
> +Description:
> +    If CMA has been built with SysFS support,
> +    /sys/kernel/mm/contiguous/ contains a file called
> +    map, a file called allocators and a directory
> +    called regions.
> +
> +    The map file lets one change the CMA's map attribute
> +    at run-time.
> +
> +    The allocators file list all registered allocators.
> +    Allocators with no name are listed as a single minus
> +    sign.
> +
> +    The regions directory list all reserved regions.
> +
> +    For more details see
> +    Documentation/contiguous-memory.txt.
> +
> +What:    /sys/kernel/mm/contiguous/regions/
> +Date:    August 2010
> +Contact: Michal Nazarewicz m.nazarew...@samsung.com
> +Description:
> +    The /sys/kernel/mm/contiguous/regions/ directory
> +    contain directories for each registered CMA region.
> +    The name of the directory is the same as the start
> +    address of the region.
> +
> +    If region is named there is also a symbolic link named
> +    like the region pointing to the region's directory.
> +
> +    Such directory contains the following files:
> +
> +    * name  -- the name of the region or an empty file
> +    * start -- starting address of the region (formatted
> +               with %p, ie. hex).
> +    * size  -- size of the region (in bytes).
> +    * free  -- free space in the region (in bytes).
> +    * users -- number of chunks allocated in the region.
> +    * alloc -- name of the allocator.
> +
> +    If allocator is not attached to the region, alloc is
> +    either the name of desired allocator in square
> +    brackets (ie. [foo]) or an empty file if region is
> +    to be attached to default allocator. If an allocator
> +    is attached to the region. alloc is either its name
> +    or - if attached allocator has no name.
> +
> +    If there are no chunks allocated in given region
> +    (users is 0) then a name of desired allocator can
> +    be written to alloc.
>
> diff --git a/Documentation/contiguous-memory.txt b/Documentation/contiguous-memory.txt
> index 8fc2400..8d189b8 100644
> --- a/Documentation/contiguous-memory.txt
> +++ b/Documentation/contiguous-memory.txt
> @@ -256,6 +256,10 @@
>      iff it matched in previous pattern. If the second part is
>      omitted it will mach any type of memory requested by device.
>
> +    If SysFS support is enabled, this attribute is accessible via
> +    SysFS and can be changed at run-time by writing to
> +    /sys/kernel/mm/contiguous/map.
> +
>  Some examples (whitespace added for better readability):
>
>      cma_map = foo/quaz = r1;
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index cd63f52..eede28d 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -17,6 +17,9 @@
>  #include <linux/rbtree.h>
>  #include <linux/list.h>
>
> +#if defined CONFIG_CMA_SYSFS
> +#  include <linux/kobject.h>
> +#endif
>
>  struct device;
>
> @@ -203,6 +206,10 @@ struct cma_region {
>      unsigned users;
>      struct list_head list;
>
> +#if defined CONFIG_CMA_SYSFS
> +    struct kobject kobj;
> +#endif
> +
>      unsigned used:1;
>      unsigned registered:1;
>      unsigned reserved:1;
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 3e9317c..ac0bb08 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -319,12 +319,26 @@ config CMA
>      To make use of CMA you need to specify the regions and
>      driver-region mapping on command line when booting the
>      kernel.
>
> -config CMA_DEBUG
> -    bool "CMA debug messages (DEVELOPEMENT)"
> +config CMA_DEVELOPEMENT
> +    bool
Re: [PATCH/RFCv3 0/6] The Contiguous Memory Allocator framework
On Wed, Aug 18, 2010 at 12:01:35PM +0900, Kyungmin Park wrote:
> Are there any comments or ack?

Is there a git tree and/or link to the latest version that is based on
top of 2.6.36-rc1? I somehow seem to have lost the v3 of these patches.

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html