RE: [PATCH v3] staging: android: ion: Add implementation of dma_buf_vmap and dma_buf_vunmap

2019-01-04 Thread Skidanov, Alexey



> -Original Message-
> From: Liam Mark [mailto:lm...@codeaurora.org]
> Sent: Friday, January 04, 2019 19:42
> To: Skidanov, Alexey 
> Cc: Laura Abbott ; Greg KH ;
> de...@driverdev.osuosl.org; tk...@android.com; r...@android.com; linux-
> ker...@vger.kernel.org; m...@android.com; sumit.sem...@linaro.org
> Subject: Re: [PATCH v3] staging: android: ion: Add implementation of
> dma_buf_vmap and dma_buf_vunmap
> 
> On Tue, 18 Dec 2018, Alexey Skidanov wrote:
> 
> > >>> I was wondering if we could re-open the discussion on adding support to
> > >>> ION for dma_buf_vmap.
> > >>> It seems like the patch was not taken as the reviewers wanted more
> > >>> evidence of an upstream use case.
> > >>>
> > >>> Here would be my upstream usage argument for including dma_buf_vmap
> > >>> support in ION.
> > >>>
> > >>> Currently all calls to ion_dma_buf_begin_cpu_access result in the
> > >>> creation of a kernel mapping for the buffer; unfortunately the
> > >>> resulting call to alloc_vmap_area can be quite expensive, and this
> > >>> has caused a performance regression for certain clients when they
> > >>> have moved to the new version of ION.
> > >>>
> > >>> The kernel mapping is not actually needed in
> > >>> ion_dma_buf_begin_cpu_access, and generally isn't needed by clients.
> > >>> So if we remove the creation of the kernel mapping in
> > >>> ion_dma_buf_begin_cpu_access and only create it when needed, we can
> > >>> speed up the calls to ion_dma_buf_begin_cpu_access.
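As a rough illustration of the idea (only a sketch; it assumes the staging ION
structure of that era, with a per-attachment sg_table list, and the exact
field and helper names may differ), begin_cpu_access could be reduced to the
cache maintenance alone:

	/* Sketch: ion_dma_buf_begin_cpu_access() with the kernel-mapping step
	 * removed, leaving only the CPU cache maintenance on each attachment.
	 * Based loosely on drivers/staging/android/ion/ion.c of that period;
	 * treat field and helper names as assumptions. */
	static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
						enum dma_data_direction direction)
	{
		struct ion_buffer *buffer = dmabuf->priv;
		struct ion_dma_buf_attachment *a;

		mutex_lock(&buffer->lock);
		list_for_each_entry(a, &buffer->attachments, list)
			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
					    a->table->nents, direction);
		mutex_unlock(&buffer->lock);

		/* The kernel mapping would be created later, only if
		 * vmap/kmap is actually called. */
		return 0;
	}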
> > >>>
> > >>> An additional benefit of removing the creation of kernel mappings from
> > >>> ion_dma_buf_begin_cpu_access is that it makes the ION code more secure.
> > >>> Currently a malicious client could call the DMA_BUF_IOCTL_SYNC IOCTL
> > >>> with flags DMA_BUF_SYNC_END multiple times to cause the ION buffer
> > >>> kmap_cnt to go negative, which could lead to undesired behavior.
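Until then, the underflow itself could at least be made harmless with a guard
in the kmap put path (again only a sketch; ion_buffer_kmap_put() and the
kmap_cnt field are from the staging ION code, and the exact form here is an
assumption):

	/* Sketch: refuse to let kmap_cnt underflow when user space issues more
	 * DMA_BUF_SYNC_END calls than DMA_BUF_SYNC_START calls. */
	static void ion_buffer_kmap_put(struct ion_buffer *buffer)
	{
		if (WARN_ON(buffer->kmap_cnt == 0))
			return;		/* unbalanced put: ignore instead of going negative */

		buffer->kmap_cnt--;
		if (!buffer->kmap_cnt) {
			buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
			buffer->vaddr = NULL;
		}
	}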
> > >>>
> > >>> One disadvantage of the above change is that a kernel mapping is not
> > >>> already created when a client calls dma_buf_kmap. So the following
> > >>> dma_buf_kmap contract can't be satisfied.
> > >>>
> > >>> /**
> > >>>  * dma_buf_kmap - Map a page of the buffer object into kernel address
> > >>>  * space. The same restrictions as for kmap and friends apply.
> > >>>  * @dmabuf:    [in]    buffer to map page from.
> > >>>  * @page_num:  [in]    page in PAGE_SIZE units to map.
> > >>>  *
> > >>>  * This call must always succeed, any necessary preparations that
> > >>>  * might fail need to be done in begin_cpu_access.
> > >>>  */
> > >>>
> > >>> But hopefully we can work around this by moving clients to dma_buf_vmap.
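For clients, the switch would roughly look like the following (a sketch with a
made-up read_first_word() helper; the point is that, unlike dma_buf_kmap(),
dma_buf_vmap() is allowed to fail and callers must handle NULL):

	#include <linux/dma-buf.h>
	#include <linux/dma-direction.h>
	#include <linux/types.h>

	/* Hypothetical client helper reading the first word of an imported
	 * buffer through dma_buf_vmap() instead of dma_buf_kmap(). */
	static int read_first_word(struct dma_buf *dmabuf, u32 *out)
	{
		void *vaddr;
		int ret;

		ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
		if (ret)
			return ret;

		vaddr = dma_buf_vmap(dmabuf);	/* may return NULL, unlike kmap */
		if (!vaddr) {
			ret = -ENOMEM;
			goto out;
		}

		*out = *(u32 *)vaddr;
		dma_buf_vunmap(dmabuf, vaddr);
	out:
		dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
		return ret;
	}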
> > >> I think the problem is with the contract. We can't ensure that the call
> > >> always succeeds regardless of the implementation - any mapping might
> > >> fail. Probably this is why *all* clients of dma_buf_kmap() check the
> > >> return value (so it's safe to return NULL in case of failure).
> > >>
> > >
> > > I think currently the call to dma_buf_kmap will always succeed since the
> > > DMA-Buf contract requires that the client first successfully call
> > > dma_buf_begin_cpu_access(), and if dma_buf_begin_cpu_access() succeeds
> > > then dma_buf_kmap will succeed.
> > >
> > >> I would suggest to fix the contract and to keep the dma_buf_kmap()
> > >> support in ION.
> > >
> > > I will leave it to the DMA-Buf maintainers as to whether they want to
> > > change their contract.
> > >
> > > Liam
> > >
> > > Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> > > a Linux Foundation Collaborative Project
> > >
> >
> > Ok. We need the list of clients using ION in the mainline tree.
> >
> 
> Looks to me like the only functions which might be calling
> dma_buf_kmap/dma_buf_kunmap on ION buffers are
> tegra_bo_kmap/tegra_bo_kunmap. I assume Tegra is used in some Android
> automotive products.
> 
> Looks like these functions could be moved over to using
> dma_buf_vmap/dma_buf_vunmap but it wouldn't be very clean and would add a
> performance hit.
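Roughly, such a conversion might look like the sketch below (based on the
tegra_bo_kmap() in drivers/gpu/drm/tegra/gem.c, but the details here are
assumptions). The whole buffer has to be mapped just to reach one page, and
the matching kunmap would have to vunmap the whole buffer again, which is
where the "not clean" and the performance hit come from:

	/* Hypothetical rework of tegra_bo_kmap() on top of dma_buf_vmap() for
	 * imported buffers: the entire buffer is mapped and the requested page
	 * is reached by offsetting into that mapping. */
	static void *tegra_bo_kmap(struct host1x_bo *bo, unsigned int page)
	{
		struct tegra_bo *obj = host1x_to_tegra_bo(bo);
		void *vaddr;

		if (obj->vaddr)
			return obj->vaddr + page * PAGE_SIZE;

		if (obj->gem.import_attach) {
			vaddr = dma_buf_vmap(obj->gem.import_attach->dmabuf);
			return vaddr ? vaddr + page * PAGE_SIZE : NULL;
		}

		return vmap(obj->pages, obj->num_pages, VM_MAP,
			    pgprot_writecombine(PAGE_KERNEL));
	}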
> 
> Liam
> 
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project

I'm a little bit confused. Why are making the buffer accessible to the CPU
(mapping the buffer) and making the content of the buffer valid (coherent) so
tightly coupled in DMA-BUF?

Alexey


CMA allocation failure

2018-01-27 Thread Skidanov, Alexey
Hello,

I enabled the CMA global area to be 8 GB. Immediately after boot, I'm able to
allocate really big contiguous chunks of memory (about 8 GB). But several
minutes after boot, CMA's ability to allocate contiguous memory buffers
degrades:

[ 2333.037004] cma: cma_alloc(): memory range at ea000f0a is busy, retrying
[ 2333.037005] cma: cma_alloc: alloc failed, req-size: 50 pages, ret: -16
[ 2333.037006] cma: number of available pages: 6@122+9@151+9@183+2096848@304=> 2096872 free of 2097152 total pages
[ 2333.037034] cma: cma_alloc(): returned (null)

The request to allocate 50 pages (~2GB) failed even though a block of
2096848 contiguous pages is available.

One possible reason for the failure is that user space has pinned pages in the
middle of the CMA reserved range. Can I somehow verify this? Can I get the
details of such pinned pages (number of pages, source of the pinning, etc.)?
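One rough way to check for this from the kernel side could be a debug walk
over the CMA range that reports pages with an elevated reference count (only a
sketch under assumptions: it uses the default CMA area and treats a refcount
above mapcount + 1 as a hint of pinning; it cannot tell who took the extra
reference):

	#include <linux/cma.h>
	#include <linux/dma-contiguous.h>
	#include <linux/mm.h>
	#include <linux/pfn.h>

	/* Debug sketch: report pages in the default CMA area that look pinned. */
	static void cma_report_busy_pages(void)
	{
		struct cma *cma = dma_contiguous_default_area;
		unsigned long pfn = PFN_DOWN(cma_get_base(cma));
		unsigned long end = pfn + cma_get_size(cma) / PAGE_SIZE;

		for (; pfn < end; pfn++) {
			struct page *page = pfn_to_page(pfn);

			/* Extra references beyond the mappings hint at a pin
			 * (e.g. get_user_pages()), which makes migration fail. */
			if (page_count(page) > page_mapcount(page) + 1)
				pr_info("cma: pfn %#lx count=%d mapcount=%d\n",
					pfn, page_count(page),
					page_mapcount(page));
		}
	}

From user space, /proc/kpagecount and /proc/kpageflags over the same physical
range give similar information, but neither tells you the source of the pinning.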

Thanks,
Alexey



32 bit user space compatibility

2014-10-26 Thread Skidanov, Alexey
Hi,

Running a 32-bit user space needs some work to be done on the ioctls. I
understand that there are two options to implement:
1.   Use only fixed-size types and pad the ioctl params to a multiple of 64
bits - simple, but I don't know if it covers all compatibility issues.
2.   A 32-bit compatibility layer (through compat_ioctl, just like many
drivers in the kernel implement) - a little more code with some translations,
but it really covers all the issues (see the sketch below).
 
Which one is preferred by the kernel community?
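To make option 2 concrete, a compat handler for a hypothetical ioctl struct
that contains a user pointer might look like this (all names here - foo_args,
foo_do_ioctl() - are illustrative, not from any existing driver):

	#include <linux/compat.h>
	#include <linux/fs.h>
	#include <linux/types.h>
	#include <linux/uaccess.h>

	/* Native layout: contains a pointer, so its size differs between
	 * 32-bit and 64-bit user space. */
	struct foo_args {
		__u64 size;
		void __user *buf;
	};

	/* Layout a 32-bit process actually passes in. */
	struct foo_args32 {
		__u64 size;
		compat_uptr_t buf;		/* always 32 bits */
	};

	static long foo_compat_ioctl(struct file *file, unsigned int cmd,
				     unsigned long arg)
	{
		struct foo_args32 args32;
		struct foo_args args;

		if (copy_from_user(&args32, compat_ptr(arg), sizeof(args32)))
			return -EFAULT;

		args.size = args32.size;
		args.buf = compat_ptr(args32.buf);	/* widen the 32-bit pointer */

		return foo_do_ioctl(file, cmd, &args);	/* shared native handler */
	}

With option 1, a struct that uses only fixed-size __u32/__u64 fields and is
padded to a multiple of 64 bits has the same layout for both user spaces, so
no compat handler is needed at all.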

Thanks
Alexey


RE: CONFIG_PREEMPT_COUNT

2014-03-31 Thread Skidanov, Alexey
Paul,
Thank you for your answer. We have PREEMPT_VOLUNTARY defined, so that is why
CONFIG_PREEMPT_COUNT is undefined.
The second question (which was badly formatted in the original message) was:

In previous kernel versions, spin_lock() and spin_unlock() call
preempt_disable() and preempt_enable() respectively, ensuring that the current
task will not be preempted by any other task while the lock is held. Is this
still ensured with these macros defined as "do nothing"?
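For reference, the SMP lock path looks roughly like this (simplified from
include/linux/spinlock_api_smp.h):

	/* Simplified: spin_lock() ends up here, disabling preemption before the
	 * lock is taken.  Without CONFIG_PREEMPT_COUNT, preempt_disable() is
	 * only a compiler barrier, but such a kernel (PREEMPT_VOLUNTARY/none)
	 * never preempts a task involuntarily, so the holder still is not
	 * scheduled out as long as it does not sleep while holding the lock. */
	static inline void __raw_spin_lock(raw_spinlock_t *lock)
	{
		preempt_disable();
		spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
		LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
	}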

-Original Message-
From: Paul Bolle [mailto:pebo...@tiscali.nl] 
Sent: Monday, March 31, 2014 3:25 PM
To: Skidanov, Alexey
Cc: linux-kernel@vger.kernel.org; Gabbay, Oded
Subject: Re: CONFIG_PREEMPT_COUNT

Alexey,

On Mon, 2014-03-31 at 10:38 +, Skidanov, Alexey wrote:
> What is the purpose of the CONFIG_PREEMPT_COUNT define, and how can I
> define it?

CONFIG_PREEMPT_COUNT is a Kconfig macro, ie it's (in short) to be defined as a
result of one of the "*config" make targets. And doing
git grep -nw PREEMPT_COUNT

gives
kernel/Kconfig.preempt:38:  select PREEMPT_COUNT
kernel/Kconfig.preempt:57:config PREEMPT_COUNT
lib/Kconfig.debug:964:  select PREEMPT_COUNT

And looking at those hits it seems PREEMPT_COUNT will be set, and therefore
CONFIG_PREEMPT_COUNT defined, if either PREEMPT or DEBUG_ATOMIC_SLEEP is set
in a .config.

(I'm not familiar with PREEMPT_COUNT. I think I never used it.)

Hope this helps,


Paul Bolle




CONFIG_PREEMPT_COUNT

2014-03-31 Thread Skidanov, Alexey
Hello,

I found that the preempt_enable() and preempt_disable() macros are defined
differently depending on CONFIG_PREEMPT_COUNT:
- They decrement/increment the per-CPU preemption counter, if
  CONFIG_PREEMPT_COUNT is defined
- They do nothing (just a compiler barrier), if not.
I have it undefined in my source tree.
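Roughly, the two variants in include/linux/preempt.h look like this
(simplified; the exact definitions vary by kernel version and config):

	#ifdef CONFIG_PREEMPT_COUNT

	#define preempt_disable() \
	do { \
		preempt_count_inc();	/* bump the preemption counter */ \
		barrier(); \
	} while (0)

	#define preempt_enable() \
	do { \
		barrier(); \
		preempt_count_dec();	/* with CONFIG_PREEMPT this path also
					 * checks for a pending reschedule */ \
	} while (0)

	#else /* !CONFIG_PREEMPT_COUNT */

	#define preempt_disable()	barrier()
	#define preempt_enable()	barrier()

	#endif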

The questions are:
1. What is the purpose of the CONFIG_PREEMPT_COUNT define, and how can I define it?
2. In previous kernel versions, spin_lock() and spin_unlock() call
preempt_disable() and preempt_enable() respectively, ensuring that the current
task will not be preempted by any other task while the lock is held. Is this
still ensured with these macros defined as "do nothing"?

Thanks
Alexey
