Sent: Saturday, November 5, 2022 1:24 AM
To: r-help@r-project.org; akshay kulkarni; R help Mailing list
Subject: Re: [R] on parallel processing...
You don't specify processors. Just invoke the worker functions with the
relevant packages and they will be allocated according to how you defined the
cluster object... typically automatically. Processors are usually specified (to
the cluster object) according to IP address. Cores within the
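A minimal sketch of that setup with the base parallel package. The two-worker local cluster stands in for IP-addressed machines; the commented-out line shows the remote form (those addresses are made up for illustration):

```r
library(parallel)

## Remote form: name the machines by IP and R spawns one worker per entry.
## (Hypothetical addresses -- replace with your own hosts.)
# cl <- makePSOCKcluster(c("192.168.1.10", "192.168.1.11"))

cl <- makePSOCKcluster(2)        # two local worker processes

## Load the relevant package on every worker; tasks then run wherever
## the cluster schedules them -- no manual processor assignment needed.
clusterEvalQ(cl, library(stats))

res <- parLapply(cl, 1:4, function(i) i^2)
unlist(res)                      # 1 4 9 16

stopCluster(cl)
```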
Dear members,
I want to send the same function with different arguments to different
processors. This solution was provided on Stack Overflow:
https://stackoverflow.com/questions/25045998/send-function-calls-with-different-arguments-to-different-processors-in-r-using
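One base-R way to get that behaviour is clusterApply(), which hands the i-th element of the argument list to the i-th worker (recycling workers round-robin), so each processor receives the same function with a different argument. A small self-contained sketch:

```r
library(parallel)

cl <- makeCluster(2)

f <- function(x, pow) x^pow      # the one function everyone runs

## Element 1 goes to worker 1, element 2 to worker 2, and so on.
args <- list(2, 3)
res <- clusterApply(cl, args, f, pow = 2)
unlist(res)                      # 4 9

stopCluster(cl)
```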
On 02.07.2011 19:32, ivo welch wrote:
dear R experts---
I am experimenting with multicore processing, so far with pretty
disappointing results. Here is my simple example:
A <- 10
randvalues <- abs(rnorm(A))
minfn <- function(x, i) { log(abs(x)) + x^3 + i/A + randvalues[i] }  ## an arbitrary function
ARGV <-
On 02.07.2011 20:04, ivo welch wrote:
thank you, uwe. this is a little disappointing. parallel processing
for embarrassingly simple parallel operations--those needing no
communication--should be feasible if the thread is not always created
and released, but held. is there light-weight parallel processing
that could facilitate
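That held-worker pattern is exactly what a persistent cluster gives you in the parallel package: pay the process-startup cost once, then reuse the same workers for every subsequent call. A sketch:

```r
library(parallel)

cl <- makeCluster(2)             # startup cost paid once, workers held open

## Each call reuses the existing workers -- no new processes are spawned.
run_batch <- function(xs) parLapply(cl, xs, function(x) x + 1)

r1 <- run_batch(1:3)
r2 <- run_batch(4:6)             # same workers again

stopCluster(cl)                  # release them only when fully done
```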
On 02.07.2011 20:42, ivo welch wrote:
hi uwe--I did not know what snow was. from my 1 minute reading, it
seems like a much more involved setup that is much more flexible after
the setup cost has been incurred (specifically, allowing use of many
machines).
the attractiveness of the doMC/foreach framework is its simplicity of
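For reference, the doMC/foreach idiom under discussion looks roughly like this (a sketch assuming the foreach and doMC packages are installed; doMC is Unix-alike only):

```r
library(foreach)
library(doMC)

registerDoMC(2)                  # back %dopar% with 2 cores

## Each iteration runs on whichever core is free; .combine = c collapses
## the per-iteration results into a single vector.
res <- foreach(i = 1:4, .combine = c) %dopar% sqrt(i)
res                              # same as sqrt(1:4)
```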
Here's another datapoint using the multicore package -- which is what
the foreach/doMC combo uses internally:
I halved your A value to 50,000 because I was getting impatient :-)
A = 50000
randvalues <- abs(rnorm(A))
minfn <- function( x, i ) { log(abs(x))+x^3+i/A+randvalues[i] }
system.time(a1 <-
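Reconstructed in modern form, the same benchmark can be run with parallel::mclapply(), which absorbed the multicore package's fork-based workers (mc.cores > 1 is not available on Windows):

```r
library(parallel)

A <- 50000
randvalues <- abs(rnorm(A))
minfn <- function(x, i) log(abs(x)) + x^3 + i/A + randvalues[i]

## Serial baseline vs. forked workers over the same 50,000 calls.
t_serial <- system.time(a1 <- lapply(seq_len(A), function(i) minfn(1.1, i)))
t_forked <- system.time(a2 <- mclapply(seq_len(A), function(i) minfn(1.1, i),
                                       mc.cores = 2))   # use 1 on Windows

stopifnot(all.equal(a1, a2))     # same answers either way
```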
Hi;
I have an R script that includes a call to genoud(); each genoud run lasts
about 4 seconds, which would be fine if I didn't have to call it about 2000
times. That yields about 2 hours of processing.
And I would like to use this script operationally, so it should be
run twice a day. It seems to
Hi Javier
The Rmpi or snow packages might help, e.g., mpi.parLapply; you need to
pay attention to what gets (explicitly or implicitly) shared with
other nodes.
Martin
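Martin's point about controlling what gets shared can be sketched with a snow-style cluster from the parallel package: export the common data to the workers once, then spread the ~2000 independent calls across them. Here slow_fit() is a hypothetical stand-in for one genoud() run:

```r
library(parallel)

cl <- makeCluster(2)

big_data <- rnorm(1000)          # stand-in for data every call needs
clusterExport(cl, "big_data")    # explicitly ship it to the workers once

## Hypothetical stand-in for a single ~4-second genoud() call.
slow_fit <- function(seed) {
  set.seed(seed)
  sum(big_data) + rnorm(1)
}

## The ~2000 calls (20 here for brevity), spread over the workers.
res <- parLapply(cl, 1:20, slow_fit)

stopCluster(cl)
```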