In addition to Michael's suggestions, you can also check out this
tutorial, which shows how to use lapply on EC2.
http://www.rinfinance.com/agenda/2012/workshop/WhitArmstrong.pdf
Unfortunately, rzmq is not available on Windows, so this may not be
the best solution for your setup.
-Whit
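[Editor's note: since the rzmq route is closed on Windows, the socket clusters in R's built-in parallel package are one portable alternative. A minimal sketch, assuming R >= 2.14 where the parallel package ships with base R:]

```r
library(parallel)

# PSOCK workers are plain R processes talking over sockets,
# so this works on Windows as well as Linux/macOS.
cl <- makeCluster(4)

# Run a function over a list of inputs, split across the workers.
res <- parLapply(cl, 1:8, function(i) i^2)

stopCluster(cl)
unlist(res)  # 1 4 9 16 25 36 49 64
```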
On Fri, Sep 14, 2012 at 7:22 PM, Bazman76 wrote:
Thanks for that, I hadn't realised parallel could run on Windows PCs.
The code is differential evolution, but it's not part of a package.
Still, I would like to be able to use cloud computing if possible; any
thoughts on the easiest way to achieve that using a Windows-based PC?
Found this blog whic
On Fri, Sep 14, 2012 at 6:00 PM, Bazman76 wrote:
Hi there,
I have a largish optimisation problem (10 years of daily observations).
I want to optimise between 4 and 6 parameters.
I'd like to utilise parallel computing if I can, as I will have to run it
with different starting values etc.
I have a quad-core PC with 16GB RAM running Windows 7.
For cross-validation, the caret package was designed to easily go
between sequential and parallel processing (using nws, mpi or anything
else).
See the last examples in ?train.
Max
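[Editor's note: caret's train() picks up whichever foreach backend is registered, so parallelizing the resampling is just a matter of registering one. A sketch using the doParallel backend; the iris data and rpart model are placeholders, and any foreach backend works the same way:]

```r
library(caret)
library(doParallel)

# Register a backend; train() then runs its resampling loops in parallel.
cl <- makeCluster(detectCores() - 1)
registerDoParallel(cl)

fit <- train(Species ~ ., data = iris, method = "rpart",
             trControl = trainControl(method = "cv", number = 10))

stopCluster(cl)
print(fit)
```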
On Fri, Jun 26, 2009 at 8:28 AM, Michael wrote:
> I guess when we move to Amazon AWS,
>
> do we have to rewrite all of our R programs?
Not necessarily. I use foreach (currently available in our REvolution
R Enterprise distribution and coming very soon to CRAN), and test out
the parallel code on my dual-core machine first.
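[Editor's note: the appeal of foreach is that the same loop body runs sequentially or in parallel depending on which backend is registered. A sketch using the doParallel backend; doSNOW or doMC would be registered the same way:]

```r
library(foreach)
library(doParallel)

cl <- makeCluster(2)
registerDoParallel(cl)

# %dopar% farms iterations out to the workers;
# swap it for %do% and the identical loop runs sequentially.
res <- foreach(i = 1:4, .combine = c) %dopar% sqrt(i)

stopCluster(cl)
res  # 1.000000 1.414214 1.732051 2.000000
```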
losemind wrote:
>
> Moreover, on my own PC, which has 4 cores, is there anything we
> could do in R to speed up my CV programs?
>
I have seen one very nice paper that compared parallelization options for R:
http://epub.ub.uni-muenchen.de/8991/
I guess when we move to Amazon AWS, do we have to rewrite all of our R
programs?
On Fri, Jun 26, 2009 at 8:05 AM, Dirk Eddelbuettel wrote:
On 26 June 2009 at 07:40, Michael wrote:
Hi all,
Lots of big IT companies are renting out their computing facilities.
Amazon has one such service. In my understanding, this will
dramatically improve the speed of my R program -- currently the
cross-validation and model selection part is the bottleneck; it takes
a few days just to finish.
pnmath currently uses up to 8 threads (i.e. 1, 2, 4, or 8).
getNumPnmathThreads() should tell you the maximum number used on your
system, which should be 8 if the number of processors is being
identified correctly. With the size of m this calculation should be
using 8 threads, but the exp calcula
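[Editor's note: to check what pnmath is doing on a given machine, something like the following works, assuming pnmath has been installed from Luke Tierney's site; the matrix size here is arbitrary:]

```r
library(pnmath)

# Maximum number of threads pnmath will use on this machine
# (should report 8 if the processors are detected correctly).
getNumPnmathThreads()

# Vectorized math functions such as exp() are now threaded;
# timing exp() before and after loading pnmath shows the difference.
m <- matrix(rnorm(4e6), nrow = 2000)
system.time(exp(m))
```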
"Juan Pablo Romero Méndez" <[EMAIL PROTECTED]> writes:
Just out of curiosity, what system do you have?
These are the results on my machine:
> system.time(exp(m), gcFirst=TRUE)
   user  system elapsed
   0.52    0.04    0.56
> library(pnmath)
> system.time(exp(m), gcFirst=TRUE)
   user  system elapsed
  0.660   0.016   0.175
Juan Pablo
On Mon, 30 Jun 2008, Juan Pablo Romero Méndez wrote:
> Thanks!
> It turned out that Rmpi was a good option for this problem after all.

To help with improving snow, I'd be interested to hear more about why
Rmpi works for you but snow did not.

> Nevertheless, pnmath seems very promising, although it
"Juan Pablo Romero Méndez" <[EMAIL PROTECTED]> writes:
Thanks!
It turned out that Rmpi was a good option for this problem after all.
Nevertheless, pnmath seems very promising, although it doesn't load on my system:
> library(pnmath)
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared library
'/home/jpablo/extra/R-271/lib/R/libr
"Juan Pablo Romero Méndez" <[EMAIL PROTECTED]> writes:
> Hello,
>
> The problem I'm working now requires to operate on big matrices.
>
> I've noticed that there are some packages that allows to run some
> commands in parallel. I've tried snow and NetWorkSpaces, without much
> success (they are far
Hello,
The problem I'm working on now requires operating on big matrices.
I've noticed that there are some packages that allow running some
commands in parallel. I've tried snow and NetWorkSpaces, without much
success (they are far slower than the normal functions).
My problem is very simple,
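[Editor's note: the slowdown reported here is typical when each task is cheap relative to the data that has to be shipped to the workers. A sketch using today's parallel package that makes the comparison explicit; the matrix sizes are arbitrary:]

```r
library(parallel)

mats <- replicate(8, matrix(rnorm(1e6), nrow = 1000), simplify = FALSE)
cl <- makeCluster(4)

# Serial: exp() is already fast C code, with no communication cost.
t_serial <- system.time(lapply(mats, exp))

# Parallel: every matrix is serialized to a worker and the result
# shipped back, which can dwarf the computation itself.
t_par <- system.time(parLapply(cl, mats, exp))

stopCluster(cl)
rbind(serial = t_serial, parallel = t_par)
```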
Hi Tim,
I think you should have a look at this Rmpi Tutorial
http://ace.acadiau.ca/math/ACMMaC/Rmpi/
and to Luke Tierney's webpage:
http://www.cs.uiowa.edu/~luke/R/cluster/uiowasnow.html
Best,
Markus
Tim Smith wrote:
Hi,
I had access to an HPC cluster and wanted to parallelize some of my R
code. I looked at the snow, nws, and rscalapack documentation but was
unable to make out how I should submit my job to the HPC system, or how
I should code a simple program. For example, if I had 10 matrices and
10 processors, how s
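[Editor's note: for the concrete 10-matrices-on-10-processors case, snow's clusterApply maps one list element to each worker. A sketch; type = "SOCK" avoids the Rmpi dependency, while type = "MPI" would suit an Rmpi-enabled cluster, and the eigenvalue computation is just a placeholder task:]

```r
library(snow)

# One worker per matrix; on an MPI-based HPC cluster use type = "MPI".
cl <- makeCluster(10, type = "SOCK")

mats <- replicate(10, matrix(rnorm(1e4), nrow = 100), simplify = FALSE)

# clusterApply sends mats[[i]] to worker i and collects the results.
res <- clusterApply(cl, mats, function(m) eigen(m)$values)

stopCluster(cl)
length(res)  # 10
```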