Thanks,
I've pulled the series into the dma-mapping for-next tree now.
A NULL dev->dma_parms indicates either a bus that is not DMA capable or
a grave bug in the implementation of the bus code.
There isn't much the driver can do in terms of error handling for either
case, so just warn and continue as DMA operations will fail anyway.
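A minimal sketch of that warn-and-continue behaviour, assuming the usual dma_parms-backed setter; illustrative only, not the exact patch:

static inline void dma_set_max_seg_size(struct device *dev, unsigned int size)
{
        /* No dma_parms means a non-DMA-capable bus or broken bus code:
         * warn once and carry on, DMA will fail later anyway. */
        if (WARN_ON_ONCE(!dev->dma_parms))
                return;
        dev->dma_parms->max_segment_size = size;
}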
Signed-off-by: Christoph
A NULL dev->dma_parms indicates either a bus that is not DMA capable or
a grave bug in the implementation of the bus code.
There isn't much the driver can do in terms of error handling for either
case, so just warn and continue as DMA operations will fail anyway.
Signed-off-by: Christoph
A NULL dev->dma_parms indicates either a bus that is not DMA capable or
a grave bug in the implementation of the bus code.
There isn't much the driver can do in terms of error handling for either
case, so just warn and continue as DMA operations will fail anyway.
Signed-off-by: Christoph
We'll start throwing warnings soon when dma_set_seg_boundary and
dma_set_max_seg_size are called on devices for buses that don't fully
support the DMA API. Prepare for that by making the calls in the SCSI
midlayer conditional.
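A sketch of what making the calls conditional could look like, assuming the check is simply on dev->dma_parms; the helper name is made up and this is not the actual scsi_lib.c hunk:

static void scsi_apply_dma_limits(struct Scsi_Host *shost, struct device *dev)
{
        /* Only set the segment limits when the bus fully supports the
         * DMA API and has allocated dma_parms; otherwise skip them to
         * avoid the new warnings mentioned above. */
        if (dev->dma_parms) {
                dma_set_seg_boundary(dev, shost->dma_boundary);
                dma_set_max_seg_size(dev, shost->max_segment_size);
        }
}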
Signed-off-by: Christoph Hellwig
---
drivers/scsi/scsi_
Hi all,
the above three functions can only return errors if the bus code failed
to allocate the dma_parms structure, which is a grave error that won't
get us far. Thus remove the pointless return values, which so far have
fortunately been mostly ignored, but which the cleanup brigade now wants
to
On Wed, Jan 10, 2024 at 07:38:43AM -0800, Andrew Morton wrote:
> I assume that kernels which contain 137db333b29186 ("xfs: teach xfile
> to pass back direct-map pages to caller") want this, so a Fixes: that
> and a cc:stable are appropriate?
I think it needs to go all the way back to 3934e8ebb7c
On Wed, Jan 10, 2024 at 12:37:18PM +, Matthew Wilcox wrote:
> On Wed, Jan 10, 2024 at 10:21:07AM +0100, Christoph Hellwig wrote:
> > Hi all,
> >
> > Darrick reported that the fairly new XFS xfile code blows up when force
> > enabling large folio for shmem. This s
For now use this one liner to disable large folios.
Reported-by: Darrick J. Wong
Signed-off-by: Christoph Hellwig
---
fs/xfs/scrub/xfile.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/fs/xfs/scrub/xfile.c b/fs/xfs/scrub/xfile.c
index 090c3ead43fdf1..1a8d1bedd0b0dc 100644
--- a/fs/xfs/sc
Users of shmem_kernel_file_setup might not be able to deal with large
folios (yet). Give them a way to disable large folio support on their
mapping.
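A sketch of what such an opt-out helper could look like in pagemap.h, assuming the existing AS_LARGE_FOLIO_SUPPORT mapping flag used by mapping_set_large_folios(); the name mapping_clear_large_folios() is an assumption here, not necessarily what the patch adds:

static inline void mapping_clear_large_folios(struct address_space *mapping)
{
        /* Assumed counterpart to mapping_set_large_folios(): with the bit
         * cleared the page cache only allocates order-0 folios for this
         * mapping. */
        __clear_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
}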
Signed-off-by: Christoph Hellwig
---
include/linux/pagemap.h | 14 ++
1 file changed, 14 insertions(+)
diff --git a/include/linux
Hi all,
Darrick reported that the fairly new XFS xfile code blows up when force
enabling large folio for shmem. This series fixes this quickly by disabling
large folios for this particular shmem file for now until it can be fixed
properly, which will be a lot more invasive.
I've added most of yo
Looks good:
Reviewed-by: Christoph Hellwig
On Mon, Aug 21, 2023 at 01:20:33PM +0100, Matthew Wilcox wrote:
> I was hoping Christoph would weigh in ;-) I don't have a strong
I've enjoyed 2 weeks of almost uninterrupted vacation.
I agree with this patch and also your incremental improvements.
On Thu, Jan 20, 2022 at 07:27:36AM -0800, Keith Busch wrote:
> It doesn't look like IOMMU page sizes are exported, or even necessarily
> consistently sized on at least one arch (power).
At the DMA API layer dma_get_merge_boundary is the API for it.
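A small usage sketch; the device pointer and how the result is consumed are placeholders:

        /* dma_get_merge_boundary() returns 0 if DMA segments cannot be
         * merged for this device, otherwise the boundary mask (typically
         * the IOMMU granule) that merged segments must respect. */
        unsigned long merge_boundary = dma_get_merge_boundary(dev);

        if (merge_boundary)
                pr_debug("segments mergeable, boundary mask %#lx\n",
                         merge_boundary);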
On Tue, Jan 11, 2022 at 12:17:18AM -0800, John Hubbard wrote:
> Zooming in on the pinning aspect for a moment: last time I attempted to
> convert O_DIRECT callers from gup to pup, I recall wanting very much to
> record, in each bio_vec, whether these pages were acquired via FOLL_PIN,
> or some non-
On Tue, Jan 11, 2022 at 04:26:48PM -0400, Jason Gunthorpe wrote:
> What I did in RDMA was make an iterator rdma_umem_for_each_dma_block()
>
> The driver passes in the page size it wants and the iterator breaks up
> the SGL into that size.
>
> So, eg on a 16k page size system the SGL would be full
On Wed, Jan 12, 2022 at 06:37:03PM +, Matthew Wilcox wrote:
> But let's go further than that (which only brings us to 32 bytes per
> range). For the systems you care about which use an identity mapping,
> and have sizeof(dma_addr_t) == sizeof(phys_addr_t), we can simply
> point the dma_range p
On Tue, Jan 11, 2022 at 11:01:42AM -0400, Jason Gunthorpe wrote:
> Then we are we using get_user_phyr() at all if we are just storing it
> in a sg?
I think we need to stop calling the output of the phyr dma map
helper a sg. Yes, a { dma_addr, len } tuple is scatter/gather I/O in its
purest form,
On Mon, Jan 10, 2022 at 08:41:26PM -0400, Jason Gunthorpe wrote:
> > Finally, it may be possible to stop using scatterlist to describe the
> > input to the DMA-mapping operation. We may be able to get struct
> > scatterlist down to just dma_address and dma_length, with chaining
> > handled through
On Mon, Jan 10, 2022 at 07:34:49PM +, Matthew Wilcox wrote:
> TLDR: I want to introduce a new data type:
>
> struct phyr {
> phys_addr_t addr;
> size_t len;
> };
>
> and use it to replace bio_vec as well as using it to replace the array
> of struct pages used by get_user_pages
On Wed, Sep 04, 2019 at 09:32:30AM +0200, Thomas Hellström (VMware) wrote:
> That sounds great. Is there anything I can do to help out? I thought this
> was more or less a dead end since the current dma_mmap_ API requires the
> mmap_sem to be held in write mode (modifying the vma->vm_flags) whereas
On Tue, Sep 03, 2019 at 04:32:45PM +0200, Thomas Hellström (VMware) wrote:
> Is this a layer violation concern, that is, would you be ok with a similar
> helper for TTM, or is it that you want to force the graphics drivers into
> adhering strictly to the DMA api, even when it from an engineering
>
On Tue, Sep 03, 2019 at 03:15:02PM +0200, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom
>
> The force_dma_unencrypted symbol is needed by TTM to set up the correct
> page protection when memory encryption is active. Export it.
Same here. None of a driver's business. DMA decisions ar
On Tue, Sep 03, 2019 at 03:15:01PM +0200, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom
>
> The force_dma_unencrypted symbol is needed by TTM to set up the correct
> page protection when memory encryption is active. Export it.
NAK. This is a helper for the core DMA code and drivers
On Tue, Aug 20, 2019 at 12:13:59PM +0900, Sergey Senozhatsky wrote:
> Always put_filesystem() in i915_gemfs_init().
>
> Signed-off-by: Sergey Senozhatsky
> ---
> - v2: rebased (i915 does not remount gemfs anymore)
Which means it really doesn't need its mount anymore, and thus can use
plain old shm
On Thu, Aug 08, 2019 at 05:34:45PM +0200, Gerd Hoffmann wrote:
> We must make sure our scatterlist segments are not too big, otherwise
> we might see swiotlb failures (happens with sev, also reproducible with
> swiotlb=force).
Btw, any chance I could also draft you to replace the remaining
abuses
On Fri, Aug 09, 2019 at 01:00:38PM +0300, Tomi Valkeinen wrote:
> Alright, thanks for the clarification!
>
> Here's my version.
Looks good to me:
Reviewed-by: Christoph Hellwig
On Thu, Aug 08, 2019 at 09:44:32AM -0700, Rob Clark wrote:
> > GFP_HIGHUSER basically just means that this is an allocation that could
> > dip into highmem, in which case it would not have a kernel mapping.
> > This can happen on arm + LPAE, but not on arm64.
>
> Just a dumb question, but why is *
On Thu, Aug 08, 2019 at 01:58:08PM +0200, Daniel Vetter wrote:
> > > We use shmem to get at swappable pages. We generally just assume that
> > > the gpu can get at those pages, but things fall apart in fun ways:
> > > - some setups somehow inject bounce buffers. Some drivers just give
> > > up, oth
On Fri, Aug 09, 2019 at 09:40:32AM +0300, Tomi Valkeinen wrote:
> We do call dma_set_coherent_mask() in omapdrm's probe() (in omap_drv.c),
> but apparently that's not enough anymore. Changing that call to
> dma_coerce_mask_and_coherent() removes the WARN. I can create a patch for
> that, or Chri
Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
Reported-by: "H. Nikolaus Schaller"
Tested-by: "H. Nikolaus Schaller"
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/omapdrm/omap_fbdev.c | 2 ++
1 file changed, 2 insertions(+)
diff -
On Wed, Aug 07, 2019 at 09:09:53AM -0700, Rob Clark wrote:
> > > (Eventually I'd like to support pages passed in from userspace.. but
> > > that is down the road.)
> >
> > Eww. Please talk to the iommu list before starting on that.
>
> This is more of a long term goal, we can't do it until we hav
On Wed, Aug 07, 2019 at 10:48:56AM +0200, Daniel Vetter wrote:
> >other drm drivers how do they guarantee addressability without an
> >iommu?)
>
> We use shmem to get at swappable pages. We generally just assume that
> the gpu can get at those pages, but things fall apart in fun ways:
> -
On Wed, Aug 07, 2019 at 10:30:04AM -0700, Rob Clark wrote:
> So, we do end up using GFP_HIGHUSER, which appears to get passed thru
> when shmem gets to the point of actually allocating pages.. not sure
> if that just ends up being a hint, or if it guarantees that we don't
> get something in the lin
On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote:
> I'm fairly confident that the linear/direct map cacheable alias is not
> torn down when pages are allocated. The generic page allocation code
> doesn't do so, and I see nothing in the shmem code to do so.
It is not torn down anywhere.
>
On Wed, Aug 07, 2019 at 01:38:08PM +0100, Mark Rutland wrote:
> > I *believe* that there are not alias mappings (that I don't control
> > myself) for pages coming from
> > shmem_file_setup()/shmem_read_mapping_page()..
>
> AFAICT, that's regular anonymous memory, so there will be a cacheable
> a
On Tue, Aug 06, 2019 at 12:09:38PM -0700, Matthew Wilcox wrote:
> Has anyone looked at turning the interface inside-out? ie something like:
>
> struct mm_walk_state state = { .mm = mm, .start = start, .end = end, };
>
> for_each_page_range(&state, page) {
> ... do somet
On Tue, Aug 06, 2019 at 11:50:42AM -0700, Linus Torvalds wrote:
>
> In fact, I do note that a lot of the users don't actually use the
> "void *private" argument at all - they just want the walker - and just
> pass in a NULL private pointer. So we have things like this:
>
> > + if (walk_page
On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
> Though personally I'm averse to managing "f"objects through
> "m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works
> on the virtual address of a mapping, but the huge-or-not alignment of
> that mapping must have been d
On Tue, Aug 06, 2019 at 09:23:51AM -0700, Rob Clark wrote:
> On Tue, Aug 6, 2019 at 8:50 AM Christoph Hellwig wrote:
> >
> > On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> > > Agreed that drm_cflush_* isn't a great API. In this particular case
> &
On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> Agreed that drm_cflush_* isn't a great API. In this particular case
> (IIUC), I need wb+inv so that there aren't dirty cache lines that drop
> out to memory later, and so that I don't get a cache hit on
> uncached/wc mmap'ing.
So what i
On Tue, Aug 06, 2019 at 11:38:16AM +0200, Daniel Vetter wrote:
> I just read through all the arch_sync_dma_for_device/cpu functions and
> none seem to use the struct *dev argument. Iirc you've said that's on the
> way out?
Not actively on the way out yet, but now that we support all
architectures
This goes in the wrong direction. drm_cflush_* are a bad API we need to
get rid of, not add use of it. The reason for that is two-fold:
a) it doesn't address how cache maintenance actually works on most
platforms. When talking about a cache we have three fundamental operations:
1) write
[adding the real linux-mm list now]
On Tue, Aug 06, 2019 at 12:38:31AM -0700, Christoph Hellwig wrote:
> On Mon, Jul 15, 2019 at 03:17:42PM -0700, Linus Torvalds wrote:
> > The attached patch does add more lines than it removes, but in most
> > cases it's actually a clear imp
o the hmm model.
--
From 67c1c6b56322bdd2937008e7fb79fb6f6e345dab Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Mon, 5 Aug 2019 11:10:44 +0300
Subject: pagewalk: clean up the API
The mm_walk structure currently mixed data and code. Split out the
operations vectors into a new mm_walk_ops structure, and while we
are chan
On Tue, Jul 30, 2019 at 10:50:32AM +0300, Tomi Valkeinen wrote:
> On 30/07/2019 09:18, Christoph Hellwig wrote:
>> We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version,
>> so remove the third way of doing things.
>>
>> Signed-off-by: Christoph Hellwig
We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version,
so remove the third way of doing things.
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/omapdrm/dss/dispc.c | 11 +--
include/linux/dma-mapping.h | 9 -
2 files changed, 5 insertions(+), 15
On Thu, Jul 25, 2019 at 09:47:11AM -0400, Andrew F. Davis wrote:
> This is a central allocator, it is not tied to any one device. If we
> knew the one device ahead of time we would just use the existing dma_alloc.
>
> We might be able to solve some of that with late mapping after all the
> devices
On Thu, Jul 25, 2019 at 09:31:50AM -0400, Andrew F. Davis wrote:
> But that's just it, dma-buf does not assume buffers are backed by normal
> kernel managed memory, it is up to the buffer exporter where and when to
> allocate the memory. The memory backed by this SRAM buffer does not have
> the nor
> +struct system_heap {
> + struct dma_heap *heap;
> +} sys_heap;
It seems like this structure could be removed, and it would improve
the code flow.
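For illustration, the simplification being suggested, assuming nothing beyond the heap pointer is ever needed:

/* Sketch: collapse the one-member wrapper into a plain pointer. */
static struct dma_heap *sys_heap;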
> +static struct dma_heap_ops system_heap_ops = {
> + .allocate = system_heap_allocate,
> +};
> +
> +static int system_heap_create(void)
> +{
> +struct dma_buf *heap_helper_export_dmabuf(
> + struct heap_helper_buffer *helper_buffer,
> + int fd_flags)
Indentation seems odd here as it doesn't follow any of the usual schools
for multi-level prototypes. But maybe shortening some iden
On Wed, Jul 24, 2019 at 11:46:24AM -0700, John Stultz wrote:
> I'm still not understanding how this would work. Benjamin and Laura
> already commented on this point, but for a simple example, with the
> HiKey boards, the DRM driver requires contiguous memory for the
> framebuffer, but the GPU can h
On Wed, Jul 24, 2019 at 11:46:01AM -0400, Andrew F. Davis wrote:
> https://patchwork.kernel.org/patch/10863957/
>
> It's actually a more simple heap type IMHO, but the logic inside is
> incompatible with the system/CMA heaps, if you move any of their code
> into the core framework then this heap s
On Wed, Jul 24, 2019 at 07:38:07AM -0400, Laura Abbott wrote:
> It's not just an optimization for Ion though. Ion was designed to
> let the callers choose between system and multiple CMA heaps.
Who cares about ion? That's some out of tree android crap that should
not be relevant for upstream except
On Wed, Jul 24, 2019 at 10:08:54AM +0200, Benjamin Gaignard wrote:
> CMA has made possible to get large regions of memories and to give some
> priority on device allocating pages on it. I don't think that possible
> with system
> heap so I suggest to keep CMA heap if we want to be able to port a ma
On Wed, Jul 24, 2019 at 11:20:31AM -0400, Andrew F. Davis wrote:
> Well then lets think on this. A given buffer can have 3 owners states
> (CPU-owned, Device-owned, and Un-owned). These are based on the caching
> state from the CPU perspective.
>
> If a buffer is CPU-owned then we (Linux) can writ
On Mon, Jul 22, 2019 at 10:04:06PM -0700, John Stultz wrote:
> Apologies, I'm not sure I'm understanding your suggestion here.
> dma_alloc_contiguous() does have some interesting optimizations
> (avoiding allocating single page from cma), though its focus on
> default area vs specific device area d
On Mon, Jul 22, 2019 at 09:09:25PM -0700, John Stultz wrote:
> On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig wrote:
> >
> > > +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> > > + void (*free)(struct heap_helper_buffer *))
On Tue, Jul 23, 2019 at 01:09:55PM -0700, Rob Clark wrote:
> On Mon, Jul 22, 2019 at 9:09 PM John Stultz wrote:
> >
> > On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig
> > wrote:
> > >
> > > Is there any exlusion between mmap / vmap and the device acce
On Mon, Jul 22, 2019 at 11:33:32PM -0700, John Hubbard wrote:
> I'm seeing about 18 places where set_page_dirty() is used, in the call site
> conversions so far, and about 20 places where set_page_dirty_lock() is
> used. So without knowing how many of the former (if any) represent bugs,
> you can s
On Mon, Jul 22, 2019 at 03:34:13PM -0700, john.hubb...@gmail.com wrote:
> +enum pup_flags_t {
> + PUP_FLAGS_CLEAN = 0,
> + PUP_FLAGS_DIRTY = 1,
> + PUP_FLAGS_LOCK = 2,
> + PUP_FLAGS_DIRTY_LOCK= 3,
> +};
Well, the enum defeats the ease of just being able
> diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
> index 83de74ca729a..9cbbb96c2a32 100644
> --- a/net/xdp/xdp_umem.c
> +++ b/net/xdp/xdp_umem.c
> @@ -171,8 +171,7 @@ static void xdp_umem_unpin_pages(struct xdp_umem *umem)
> for (i = 0; i < umem->npgs; i++) {
> struct page
On Sun, Jul 21, 2019 at 09:30:10PM -0700, john.hubb...@gmail.com wrote:
> for (i = 0; i < vsg->num_pages; ++i) {
> if (NULL != (page = vsg->pages[i])) {
> if (!PageReserved(page) && (DMA_FROM_DEVICE ==
> vsg->direction))
> -
This and the previous one seem very much duplicated boilerplate
code. Why can't we just use normal branches for allocating and freeing
normal pages vs CMA? We even have an existing helper for that
with dma_alloc_contiguous().
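A rough sketch of that suggestion using the existing helpers; the function names here are made up for illustration:

static struct page *heap_alloc_pages(struct device *dev, size_t size, gfp_t gfp)
{
        /* dma_alloc_contiguous() already prefers the per-device/global CMA
         * area and falls back to regular page allocation, so the two nearly
         * identical heaps could share this single path. */
        return dma_alloc_contiguous(dev, size, gfp);
}

static void heap_free_pages(struct device *dev, struct page *page, size_t size)
{
        dma_free_contiguous(dev, page, size);
}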
> +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> + void (*free)(struct heap_helper_buffer *))
Please use a lower case naming following the naming scheme for the
rest of the file.
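For illustration, the same helper with a name matching the file's lower-case convention (signature copied from the quote above, body unchanged and therefore omitted):

static void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
                                    void (*free)(struct heap_helper_buffer *));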
> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
>
On Tue, Jul 02, 2019 at 11:48:44AM +0200, Arend Van Spriel wrote:
> You made me look ;-) Actually not touching my drivers so I'm off the hook.
> However, I was wondering if drivers could know so I decided to look into
> the DMA-API.txt documentation which currently states:
>
> """
> The flag para
On Fri, Jun 14, 2019 at 03:47:10PM +0200, Christoph Hellwig wrote:
> Switching to a slightly cleaned up alloc_pages_exact is pretty easy,
> but it turns out that because we didn't filter valid gfp_t flags
> on the DMA allocator, a bunch of drivers were passing __GFP_COMP
> to it
Don't we have a device tree problem here if there is a domain covering
them? I thought we should only pick up an IOMMU for a given device
if DT explicitly asked for that?
On Wed, Jun 19, 2019 at 01:29:03PM -0300, Jason Gunthorpe wrote:
> > Yes. This will blow up badly on many platforms, as sq->queue
> > might be vmapped, ioremapped, come from a pool without page backing.
>
> Gah, this addr gets fed into io_remap_pfn_range/remap_pfn_range too..
>
> Potnuri, you sh
> drivers/infiniband/hw/cxgb4/qp.c
>129 static int alloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq)
>130 {
>131 sq->queue = dma_alloc_coherent(&(rdev->lldi.pdev->dev),
> sq->memsize,
>132 &(sq->dma_addr), GFP_KERNEL);
>1
On Fri, Jun 14, 2019 at 05:30:32PM +0200, Greg KH wrote:
> On Fri, Jun 14, 2019 at 04:48:57PM +0200, Christoph Hellwig wrote:
> > On Fri, Jun 14, 2019 at 04:02:39PM +0200, Greg KH wrote:
> > > Perhaps a hint as to how we can fix this up? This is the first time
> > > I&
On Fri, Jun 14, 2019 at 04:05:33PM +0100, Robin Murphy wrote:
> That said, I don't believe this particular patch should make any
> appreciable difference - alloc_pages_exact() is still going to give back
> the same base address as the rounded up over-allocation would, and
> PAGE_ALIGN()ing the s
On Fri, Jun 14, 2019 at 03:01:22PM +, David Laight wrote:
> I'm pretty sure there is a lot of code out there that makes that assumption.
> Without it many drivers will have to allocate almost double the
> amount of memory they actually need in order to get the required alignment.
> So instead o
On Fri, Jun 14, 2019 at 02:15:44PM +, David Laight wrote:
> Does this still guarantee that requests for 16k will not cross a 16k boundary?
> It looks like you are losing the alignment parameter.
The DMA API never gave you alignment guarantees to start with,
and you can get not naturally aligne
On Fri, Jun 14, 2019 at 04:02:39PM +0200, Greg KH wrote:
> Perhaps a hint as to how we can fix this up? This is the first time
> I've heard of the comedi code not handling dma properly.
It can be fixed by:
a) never calling virt_to_page (or vmalloc_to_page for that matter)
on dma allocation
Remove usage of the legacy drm PCI DMA wrappers, and with that the
incorrect usage cocktail of __GFP_COMP, virt_to_page on DMA allocation
and SetPageReserved.
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/i915/i915_gem.c| 30 +-
drivers/gpu/drm/i915
We are not allowed to call virt_to_page on pages returned from
dma_alloc_coherent, as in many cases the virtual address returned
is actually a kernel direct mapping. Also there generally is no
need to mark dma memory as reserved.
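The broken pattern being removed, sketched with placeholder variables to make both problems explicit (do not copy this):

        void *cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
        struct page *page = virt_to_page(cpu_addr); /* invalid: cpu_addr is
                                                      * often not a direct-
                                                      * mapping address */
        SetPageReserved(page);                       /* unnecessary for DMA
                                                      * memory */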
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/drm_bufs.c
dma_alloc_coherent is not just the page allocator. The only valid
arguments to pass are either GFP_ATOMIC or GFP_KERNEL with possible
modifiers of __GFP_NORETRY or __GFP_NOWARN.
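A minimal example of the allowed combinations; the device, size and handle variables are placeholders:

        /* Valid: GFP_KERNEL (or GFP_ATOMIC in atomic context), optionally
         * combined with __GFP_NOWARN and/or __GFP_NORETRY. */
        buf = dma_alloc_coherent(dev, size, &dma_handle,
                                 GFP_KERNEL | __GFP_NOWARN);

        /* Invalid: page-allocator-only flags such as __GFP_COMP must not be
         * passed to dma_alloc_coherent(). */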
Signed-off-by: Christoph Hellwig
---
drivers/net/ethernet/broadcom/cnic.c | 4 ++--
1 file changed, 2 insertions
dma_alloc_coherent is not just the page allocator. The only valid
arguments to pass are either GFP_ATOMIC or GFP_KERNEL with possible
modifiers of __GFP_NORETRY or __GFP_NOWARN.
Signed-off-by: Christoph Hellwig
---
drivers/s390/net/ism_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion
This fits in with the naming scheme used by alloc_pages_node.
Signed-off-by: Christoph Hellwig
---
include/linux/gfp.h | 2 +-
mm/page_alloc.c | 4 ++--
mm/page_ext.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index
.
Signed-off-by: Christoph Hellwig
---
arch/arm/mm/dma-mapping.c | 17 -
kernel/dma/mapping.c | 9 +
2 files changed, 9 insertions(+), 17 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0a75058c11f3..86135feb2c05 100644
--- a/arch
No need to duplicate the logic over two functions that are almost the
same.
Signed-off-by: Christoph Hellwig
---
include/linux/gfp.h | 5 +++--
mm/page_alloc.c | 39 +++
2 files changed, 10 insertions(+), 34 deletions(-)
diff --git a/include/linux/gfp.h
dma_alloc_coherent is not just the page allocator. The only valid
arguments to pass are either GFP_ATOMIC or GFP_KERNEL with possible
modifiers of __GFP_NORETRY or __GFP_NOWARN.
Signed-off-by: Christoph Hellwig
---
drivers/infiniband/hw/hfi1/init.c | 22 +++---
1 file changed
dma_alloc_coherent is not just the page allocator. The only valid
arguments to pass are either GFP_ATOMIC or GFP_KERNEL with possible
modifiers of __GFP_NORETRY or __GFP_NOWARN.
Signed-off-by: Christoph Hellwig
---
drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 3 +--
drivers/net/wireless
dma_alloc_coherent is not just the page allocator. The only valid
arguments to pass are either GFP_ATOMIC or GFP_KERNEL with possible
modifiers of __GFP_NORETRY or __GFP_NOWARN.
Signed-off-by: Christoph Hellwig
---
drivers/infiniband/hw/qib/qib_iba6120.c | 2 +-
drivers/infiniband/hw/qib
as well.
Signed-off-by: Christoph Hellwig
---
include/linux/dma-contiguous.h | 8 +---
kernel/dma/contiguous.c| 17 +++--
2 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index c05d4e661489
comedi_buf.c abuses the DMA API in gravely broken ways, as it assumes it
can call virt_to_page on the result, and then just remaps it as uncached
using vmap. Disable the driver until this API abuse has been fixed.
Signed-off-by: Christoph Hellwig
---
drivers/staging/comedi/Kconfig | 1 +
1 file
Remove usage of the legacy drm PCI DMA wrappers, and with that the
incorrect usage cocktail of __GFP_COMP, virt_to_page on DMA allocation
and SetPageReserved.
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/ati_pcigart.c | 27 +++
include/drm/ati_pcigart.h | 5
The memory returned from dma_alloc_coherent is opaque to the user,
so the exact way its pages are refcounted should not matter either.
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/drm_bufs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_bufs.c b
Hi all,
various architectures have used exact memory allocations for dma
allocations for a long time, but x86 and thus the common code based
on it kept using our normal power of two allocator, which tends to
waste a lot of memory for certain allocations.
Switching to a slightly cleaned up alloc_p
These functions are rather broken in that they try to pass __GFP_COMP
to dma_alloc_coherent, call virt_to_page on the return value and
mess with PageReserved. And not actually used by any modern driver.
Signed-off-by: Christoph Hellwig
---
drivers/gpu/drm/drm_bufs.c | 85
management inside the DMA
allocator is hidden from the callers.
Fixes: a8f3c203e19b ("[media] videobuf-dma-contig: add cache support")
Signed-off-by: Christoph Hellwig
---
drivers/media/v4l2-core/videobuf-dma-contig.c | 23 +++
1 file changed, 8 insertions(+), 15
On Wed, Jun 12, 2019 at 08:42:36AM +0200, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom
>
> This is basically apply_to_page_range with added functionality:
> Allocating missing parts of the page table becomes optional, which
> means that the function can be guaranteed not to error if
On Wed, Jun 12, 2019 at 04:23:50AM -0700, Christoph Hellwig wrote:
> friends. Also in general new core functionality like this should go
> along with the actual user, we don't need to repeat the hmm disaster.
Ok, I see you actually did that, it just got hidden by the awful
selective
On Wed, Jun 12, 2019 at 08:42:37AM +0200, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom
>
> Add two utilities to a) write-protect and b) clean all ptes pointing into
> a range of an address space.
> The utilities are intended to aid in tracking dirty pages (either
> driver-allocated s
If you (and a few other actors in the thread) want people to actually
read what you wrote, please follow proper mailing list etiquette. I've
given up on reading all the recent mails after scrolling through two
pages of full quotes.
On Thu, May 23, 2019 at 10:37:19PM -0400, Qian Cai wrote:
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> index bf6c3500d363..5c567b81174f 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
> @@ -747,6 +747,13
On Mon, Jan 14, 2019 at 07:31:57PM +0300, Eugeniy Paltsev wrote:
> ARC HSDK SoC has Vivante GPU IP, so allow building etnaviv for ARC.
>
> Signed-off-by: Eugeniy Paltsev
> ---
> drivers/gpu/drm/etnaviv/Kconfig | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/
On Wed, Jan 16, 2019 at 07:30:02AM +0100, Gerd Hoffmann wrote:
> Hi,
>
> > + if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
> > + DMA_BIDIRECTIONAL)) {
> > + ret = -EFAULT;
> > + goto fail_free_sgt;
> > + }
>
> Hmm, so it seems the ar
Hmm, I wonder if we are not actually using swiotlb in the end,
can you check if your dmesg contains this line or not?
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
If not I guess we found a bug in swiotlb exit vs is_swiotlb_buffer,
and you can try this patch:
diff --git a/kernel/dma/