On 02/21/2017 04:41 PM, Michal Hocko wrote:
> On Fri 17-02-17 17:11:57, Anshuman Khandual wrote:
> [...]
>> * User space using mbind() to get CDM memory is an additional benefit
>>   we get by making the CDM plug in as a node and be part of the buddy
>>   allocator. But the overall idea from the user space point of view
>>   is that the application can allocate any generic buffer and try to
>>   use the buffer either from the CPU side or from the device without
>>   knowing about where the buffer is really mapped physically. That
>>   gives a seamless and transparent view to the user space where CPU
>>   compute and possible device based compute can work together. This
>>   is not possible through a driver allocated buffer.
> 
> But how are you going to define any policy around that? Who is allowed

The user space application can define the policy for a VMA with an
mbind(MPOL_BIND) call that has CDM node(s) in the nodemask.
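
For example, a minimal sketch in C (mbind(2) and mmap(2) are the real
interfaces; CDM_NODE and alloc_cdm_buffer() are assumptions for
illustration, with the CDM plugged in as node 1; link with -lnuma):

#include <numaif.h>             /* mbind(), MPOL_BIND */
#include <sys/mman.h>           /* mmap(), munmap() */
#include <stddef.h>

#define CDM_NODE 1              /* assumption: CDM plugged in as node 1 */

static void *alloc_cdm_buffer(size_t size)
{
        unsigned long nodemask = 1UL << CDM_NODE;
        void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
                return NULL;

        /* Bind the VMA so page faults allocate from the CDM node */
        if (mbind(buf, size, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0)) {
                munmap(buf, size);
                return NULL;
        }
        return buf;
}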

> to allocate and how much of this "special memory". Is it possible that

Any user space application that makes an mbind(MPOL_BIND) call with
CDM node(s) in the nodemask can allocate from the CDM memory. "How much"
is controlled by how the CPU faults on the buffer and by the default
behavior of the buddy allocator.

> we will eventually need some access control mechanism? If yes then mbind

No access control mechanism is needed. If an application wants to use
CDM memory, it can specify that in the mbind() nodemask. Nothing
prevents it from using the CDM memory.

> is really not suitable interface to (ab)use. Also what should happen if
> the mbind mentions only CDM memory and that is depleted?

IIUC *only CDM* cannot be requested from user space, as there is no user
visible interface which translates to __GFP_THISNODE. MPOL_BIND with
CDM in the nodemask will eventually pick a FALLBACK zonelist, which
has the zones of the whole system including the CDM ones. If the
resultant CDM zones run out of memory, we fail the allocation request
as usual.
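
For reference, zonelist selection in the allocator looks roughly like
this (a paraphrased sketch of include/linux/gfp.h, not the exact code
in this series): only __GFP_THISNODE picks the no-fallback list, and
no mbind() flag can set it from user space.

static inline int gfp_zonelist(gfp_t flags)
{
        /* __GFP_THISNODE is kernel-internal; user policies fall back */
        if (IS_ENABLED(CONFIG_NUMA) && unlikely(flags & __GFP_THISNODE))
                return ZONELIST_NOFALLBACK;
        return ZONELIST_FALLBACK;
}

static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
{
        return NODE_DATA(nid)->node_zonelists + gfp_zonelist(flags);
}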

> 
> Could you also explain why the transparent view is really better than
> using a device specific mmap (aka CDM awareness)?

Okay. With a transparent view, we can achieve an application control
flow like the following.

(1) Allocate a buffer:          alloc_buffer(buf, size)
(2) CPU compute on buffer:      cpu_compute(buf, size)
(3) Device compute on buffer:   device_compute(buf, size)
(4) CPU compute on buffer:      cpu_compute(buf, size)
(5) Release the buffer:         release_buffer(buf, size)

With assistance from a device-specific driver, the actual page mapping
of the buffer can change between system RAM and device memory depending
on which side is accessing it at a given point. This is achieved
through driver-initiated migrations, sketched in code below.
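
In C, the flow could look like this sketch (CDM_DEV_COMPUTE, struct
cdm_args and dev_fd are hypothetical stand-ins for a real driver
interface; alloc_cdm_buffer() is from the earlier sketch):

#include <sys/ioctl.h>
#include <sys/mman.h>

#define CDM_DEV_COMPUTE 0xC0DE  /* hypothetical ioctl request code */

struct cdm_args {               /* hypothetical ioctl argument */
        void   *addr;
        size_t  size;
};

static void buffer_flow(int dev_fd, size_t size)
{
        size_t i, n = size / sizeof(int);
        int *buf = alloc_cdm_buffer(size);      /* (1) allocate buffer */
        struct cdm_args args = { buf, size };

        if (!buf)
                return;

        /* (2) CPU compute: first-touch faults place the pages */
        for (i = 0; i < n; i++)
                buf[i] = i;

        /* (3) device compute: the driver migrates pages to CDM */
        ioctl(dev_fd, CDM_DEV_COMPUTE, &args);

        /* (4) CPU compute again: faults can pull pages back to RAM */
        for (i = 0; i < n; i++)
                buf[i] += 1;

        /* (5) release the buffer */
        munmap(buf, size);
}

Note that buf stays one ordinary pointer throughout; the application
never needs to know where the pages physically live.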

>  
>> * The placement of the memory on the buffer can happen on system memory
>>   when the CPU faults while accessing it. But a driver can manage the
>>   migration between system RAM and CDM memory once the buffer is being
>>   used from CPU and the device interchangeably. As you have mentioned
>>   driver will have more information about where which part of the buffer
>>   should be placed at any point of time and it can make it happen with
>>   migration. So both allocation and placement are decided by the driver
>>   during runtime. CDM provides the framework for this kind of device
>>   assisted compute and driver managed memory placements.
>>
>> * If any application is not using the CDM memory placed on its buffer
>>   for a long time, and another application is forced to fall back on
>>   system RAM when what it really wanted was CDM, the driver can detect
>>   these kinds of situations through memory access patterns on the
>>   device HW and take the necessary migration decisions.
> 
> Is this implemented or at least designed?

Yeah, it's being designed.

> 
> Btw. I believe that sending new versions of the patchset with minor
> changes is not really helping the review process. I believe the
> high-level concerns about the API are not resolved yet and that is the
> number 1 thing to deal with currently.

Got it.
