> On Apr 13, 2019, at 16:56, Iñaki Ucar <iu...@fedoraproject.org> wrote:
> 
> On Sat, 13 Apr 2019 at 18:41, Simon Urbanek <simon.urba...@r-project.org> 
> wrote:
>> 
>> Sure, but that is a completely bogus argument, because in that case it 
>> would fail even more spectacularly with any other method such as PSOCK: 
>> there you *have to* allocate n times as much memory, so unlike mclapply it 
>> is guaranteed to fail. With mclapply it is simply much more efficient, as 
>> it will share memory as long as possible. It is rather obvious that any new 
>> objects you create can no longer be shared, as they now exist separately in 
>> each process.
> 
> The point was that PSOCK fails and succeeds *consistently*,
> independently of what you do with the input in the function provided.
> I think that's a good property.
> 

So does parallel: it is consistent. If you do things that use too much memory, 
you will consistently fail. That's a pretty universal rule; there is nothing 
probabilistic about it. It makes no difference whether it's PSOCK, multicore, or 
anything else.
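
To make the copy-on-write point concrete, a minimal sketch (this assumes a 
POSIX system where mclapply() actually forks; x is just a toy vector standing 
in for real data):

library(parallel)

## Large object created in the parent: the forked mclapply() children
## see it via copy-on-write, so no extra copies are made while it is
## only read.
x <- rnorm(1e7)

## Read-only use: memory stays shared across the children.
res1 <- mclapply(1:4, function(i) sum(x) + i, mc.cores = 2)

## Creating (or modifying) large objects inside the function produces
## data that exists separately in each child, so memory use grows.
res2 <- mclapply(1:4, function(i) { y <- x + i; mean(y) }, mc.cores = 2)

## With a PSOCK cluster the data has to be serialized to every worker
## up front, so the n-fold memory cost is paid no matter what the
## function does with it.
cl <- makeCluster(2)
clusterExport(cl, "x")
res3 <- parLapply(cl, 1:4, function(i) sum(x) + i)
stopCluster(cl)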
