All MPI implementations support using CMA to transfer data between local 
processes. The performance is fairly good (not as good as XPMEM), but the 
interface limits what we can do with remote process memory (no atomics). I 
have not heard about this new proposal. What is the benefit of the proposed 
calls over the existing calls?

-Nathan

> On Feb 26, 2018, at 2:02 AM, Pavel Emelyanov <xe...@virtuozzo.com> wrote:
> 
> On 02/21/2018 03:44 AM, Andrew Morton wrote:
>> On Tue,  9 Jan 2018 08:30:49 +0200 Mike Rapoport <r...@linux.vnet.ibm.com> 
>> wrote:
>> 
>>> This patch introduces a new process_vmsplice system call that combines
>>> the functionality of process_vm_readv and vmsplice.
>> 
>> All seems fairly straightforward.  The big question is: do we know that
>> people will actually use this, and get sufficient value from it to
>> justify its addition?
> 
> Yes, that's what bothers us a lot too :) I started by trying to find out
> whether anyone uses the process_vm_readv/writev() calls, but failed :(
> Does anybody know how popular these syscalls are? If their users operate
> on large amounts of memory, they could benefit from the proposed splice
> extension.
> 
> -- Pavel
> _______________________________________________
> devel mailing list
> devel@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/devel
