Hi,
Regarding "how to use 'top' inside the R prompt?":
you can use the system("top") command.
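A minimal sketch, assuming a Unix-like system with top on the PATH (the -b batch flag is GNU/Linux top; plain system("top") would try to draw the interactive screen inside the R console):

```r
# One-shot, non-interactive process snapshot from within R.
# Assumes GNU/Linux 'top' (the -b batch flag); guarded so the
# sketch is a no-op where 'top' is absent.
if (nzchar(Sys.which("top"))) {
  snapshot <- system("top -b -n 1", intern = TRUE)
  cat(head(snapshot, 12), sep = "\n")
}
```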
Thanks,
On Sun, Jul 1, 2018 at 9:53 PM Benoit Vaillant
wrote:
Hello,
On Sun, Jul 01, 2018 at 11:31:29AM +, akshay kulkarni wrote:
> I tried "top" at the bash prompt, but it provides a way to measure
> CPU performance of the existing processes. I want to check the CPU
> usage of the execution of an R function.
Try opening two bash prompts: run R in one, and watch its CPU usage
with top in the other.
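Alternatively, the CPU time of a single R call can be measured from inside R with system.time(), which complements watching the process in top:

```r
# system.time() reports user, system and elapsed (wall-clock)
# seconds for the evaluation of an expression.
tm <- system.time({
  x <- replicate(20, sum(rnorm(1e5)))
})
print(tm)
```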
Yours sincerely,
AKSHAY M KULKARNI
From: Jeff Newmiller
Sent: Saturday, June 30, 2018 11:46 PM
To: r-help@r-project.org; akshay kulkarni; R help Mailing list
Subject: Re: [R] parallel processing in r...
Use "top" at the bash prompt.
Read about the "mc.cores" parameter to mclapply.
If you use gkrellm, you'll get a plot of each core's activity so it's
easy to see how many are being used.
yum install gkrellm.
HTH
Use "top" at the bash prompt.
Read about the "mc.cores" parameter to mclapply.
Make a simplified example version of your analysis and post your question in
the context of that example [1][2][3]. You will learn about the issues you are
dealing with in the process of trimming your problem.
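A minimal mclapply sketch (the worker count here is illustrative; mclapply uses fork() and so runs serially with mc.cores = 1 on Windows):

```r
library(parallel)
# Fork-based parallel map; falls back to one worker on Windows,
# where fork() is unavailable.
cores <- if (.Platform$OS.type == "unix") 2L else 1L
res <- mclapply(1:8, function(i) i^2, mc.cores = cores)
unlist(res)   # 1 4 9 16 25 36 49 64
```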
The effectiveness of parallelizing code, be it with mclapply or otherwise,
depends in large part on the code, which you failed to show.
I cannot answer your other question.
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
Dear members,
I am using mclapply to parallelize my code, on Red Hat Linux in AWS.
When I use mclapply, I see no speed increase. I suspect that the Linux OS is
allowing mclapply fewer than the maximum number of cores (by default,
mclapply takes all the available cores).
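Two quick checks worth running before blaming the OS (note that mclapply's default is getOption("mc.cores", 2L), i.e. two workers, not all available cores):

```r
library(parallel)
detectCores()              # logical cores visible to this R session
getOption("mc.cores", 2L)  # the default worker count mclapply will use
```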
Hello All,
I need some help understanding parallel processing. I set up doParallel and it
worked perfectly. I tried to set up the parallel package following the book
Parallel R, but I get the following error:
Error in checkForRemoteErrors(val) :
4 nodes produced errors; first error: 'what'
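The failing setup code is not shown, but a minimal working parallel-package pattern looks like this (variable names illustrative); forgetting the clusterExport() step is a common source of "N nodes produced errors" messages:

```r
library(parallel)
cl <- makeCluster(2)        # PSOCK workers; portable across OSes
offset <- 10
# Workers start with empty global environments, so everything the
# task function uses must be exported (or passed as an argument).
clusterExport(cl, "offset", envir = environment())
res <- parLapply(cl, 1:4, function(i) i + offset)
stopCluster(cl)
unlist(res)   # 11 12 13 14
```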
Dear community,
Sorry for cross posting. Does anybody have an idea on how I can do parallel
processing in MATLAB?
thanks for your help
--
John Wasige
Sorry, I think you posted to the wrong group.
Ranjan
On Sat, 11 Apr 2015 19:01:04 +0200 John Wasige johnwas...@gmail.com wrote:
Wrong mailing list. This one is about R, not MATLAB.
On 11 Apr 2015 19:03, John Wasige johnwas...@gmail.com wrote:
Platform: Windows 7
Package: parallel
Function: parLapply
I am running a lengthy program with 8 parallel processes running in main
memory.
The processes save data using the 'save' function, to distinct files so
that no conflicts writing to the same file are possible.
I have been getting random errors from the 'save' calls.
Subject: [R] Parallel processing random 'save' error
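A sketch of the setup described (paths illustrative): each task save()s to its own file, so no two workers ever write the same path:

```r
library(parallel)
cl <- makeCluster(2)
out <- tempdir()
clusterExport(cl, "out", envir = environment())
res <- parLapply(cl, 1:4, function(i) {
  x <- i^2
  # One file per task index: distinct paths, no write conflicts.
  save(x, file = file.path(out, sprintf("result_%02d.RData", i)))
  TRUE
})
stopCluster(cl)
length(list.files(out, pattern = "^result_"))   # 4
```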
Hi,
I am trying to do parallel computing with the foreach function, but I am not
able to get the result. I know that in parallel processing all results are
collected in list format, but I am not able to get my input there.
Any help is really appreciated.
esf.m <- foreach (i = 1:n.s, .combine = rbind) %dopar% {
It seems you don't quite understand how foreach works. foreach (...)
%dopar% { ... } takes the last value from each evaluation of the { ... }
block and feeds them to the .combine function (in your case
rbind()). Since the last call in your %dopar% { ... } block is assign(),
you are not getting your results back.
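A minimal foreach sketch of the point above (%do% runs sequentially, so no backend registration is needed; with a registered backend, %dopar% collects results the same way): the value of the block's last expression, not an assign(), is what reaches .combine:

```r
library(foreach)
# Each iteration's last value (here a length-2 vector) is handed
# to .combine = rbind; assigning into an environment instead would
# leave nothing for .combine to collect.
m <- foreach(i = 1:3, .combine = rbind) %do% {
  c(i, i^2)
}
m   # 3 x 2 matrix
```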
On 17/04/12 20:06, David Schaefer wrote:
Hi Rainer,
Thanks for your suggestions. I should have been more specific: I am using
multiple cores on a Mac Pro running Snow Leopard. I can see where that makes a
difference.
--David
On 4/18/12 12:13 AM, Rainer M Krug r.m.k...@gmail.com wrote:
Hello,
I would like to run some code in parallel with each cluster reading/writing to
a different working directory. I've tried the following code without success.
The error I get is: Error in setwd(x) : cannot change working directory
library(parallel)
dirs <- list("out1", "out2", "out3") # these
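A sketch of one way to make this work (directory names illustrative): setwd() throws exactly "cannot change working directory" when the target does not exist, so create each directory before the workers try to enter it:

```r
library(parallel)
dirs <- file.path(tempdir(), c("out1", "out2", "out3"))
# Create the directories up front; a missing target is the usual
# cause of "Error in setwd(x): cannot change working directory".
for (d in dirs) dir.create(d, showWarnings = FALSE)
cl <- makeCluster(3)
res <- parLapply(cl, dirs, function(d) {
  setwd(d)                        # per-worker working directory
  writeLines("done", "log.txt")   # lands in this worker's own dir
  getwd()
})
stopCluster(cl)
basename(unlist(res))   # "out1" "out2" "out3"
```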
On Tue, Apr 17, 2012 at 11:06:05AM -0700, David Schaefer wrote:
On 04/17/2012 11:06 AM, David Schaefer wrote:
Hi Sandeep,
Still missing an answer? Perhaps cross-check your post against the rules
of the posting guide and see what is missing here.
Anyway, depending on your OS, the packages multicore and snow/snowfall
may fit your needs - but you have to re-formulate your loop using the
adequate multicore constructs.
On 10/11/2011 12:13 PM, Sandeep Patil wrote:
...I mostly use the foreach package...
I have an R script that consists of a for loop
that repeats a process for many different files.
I want to process this in parallel on a machine with
multiple cores; is there any package for it?
Thanks
--
Sandeep R Patil
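A sketch of converting such a per-file for loop with the base parallel package (the file names and the per-file function are illustrative stand-ins):

```r
library(parallel)
files <- c("a.csv", "b.csv", "c.csv")   # illustrative file list
process_one <- function(f) nchar(f)     # stand-in for the real work
cl <- makeCluster(2)
res <- parLapply(cl, files, process_one)
stopCluster(cl)
unlist(res)   # 5 5 5
```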
I'm running 10,000 iterations each for the bridge and blasso. 3,000
iterations roughly takes a week on a Core Duo processor with 16GB RAM. I'll
have access to a 6-core machine, and I came across the multicore package.
Can I use multicore with the bridge and blasso functions? That is,
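If the iterations can be split into independent runs (e.g. separate chains), they are embarrassingly parallel; a hedged sketch with mclapply, using a placeholder in place of the real bridge/blasso call:

```r
library(parallel)
run_chain <- function(seed) {   # stand-in for one bridge/blasso run
  set.seed(seed)                # independent, reproducible streams
  mean(rnorm(1e4))
}
cores <- if (.Platform$OS.type == "unix") 2L else 1L
chains <- mclapply(1:4, run_chain, mc.cores = cores)
length(chains)   # 4
```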
Hi David,
On Wed, Feb 9, 2011 at 10:11 AM, Robinson, David G dro...@sandia.gov wrote:
Steve,
Thanks for taking the time to look at the question. My apologies for the
confusing post. In an attempt to keep the post short, I seem to have confused
the issue.
The variable of interest in each iteration is the vector lambda and the goal
is to collect all the lambda vectors and
Hi David,
I'm CC-ing R-help in order to finish this one off ;-)
On Wed, Feb 9, 2011 at 10:59 AM, Robinson, David G dro...@sandia.gov wrote:
[snip]
One of your comments pointed me in the right direction and I found the
problem. I simply commented out the line if (j %% 100 == 0) { ... print(N) }
and
I am experimenting with parallel processing using foreach and seem to be
missing something fundamental. Cool stuff. I've gone through the list and
seen a couple of closely related issues, but nothing I've tried seems to
work.
I know that the results from foreach are combined, but what if there is
Hi,
On Tue, Feb 8, 2011 at 6:18 PM, Robinson, David G dro...@sandia.gov wrote:
1. What application should I install to speed up processing on a multicore
processor in a Windows environment?
2. How do I compute the time taken to execute a particular piece of code?
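On Windows, fork-based multicore is unavailable, but socket clusters (e.g. via the base parallel package, or snow on older R versions) work fine; a minimal sketch that also answers the timing question with system.time():

```r
library(parallel)
cl <- makeCluster(2)   # socket workers: works on Windows too
tm <- system.time({
  res <- parLapply(cl, 1:4, function(i) i^2)
})
stopCluster(cl)
unlist(res)        # 1 4 9 16
tm[["elapsed"]]    # wall-clock seconds for the timed block
```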
Hello Partha,
Both questions are answered here:
http://www.r-statistics.com/2010/04/parallel-multicore-processing-with-r-on-windows/
I would also recommend you to have a look here:
Does anybody have any suggestions regarding applying standard regression
packages lm(), hccm(), and others within a parallel environment? Most of
the packages I've found only deal with iterative processes (bootstrap) or
simple linear algebra. While the latter might help, I'd rather not program
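One common pattern that does parallelize cleanly (a sketch using the base parallel package and the built-in mtcars data): many independent lm() fits, e.g. one per data subset, dispatched with parLapply. Note that a single lm() call itself is not split across workers this way:

```r
library(parallel)
cl <- makeCluster(2)
# Independent fits, one per cylinder group, on separate workers.
sets <- split(mtcars, mtcars$cyl)
fits <- parLapply(cl, sets, function(d) lm(mpg ~ wt, data = d))
stopCluster(cl)
sapply(fits, function(f) coef(f)[["wt"]])   # slope per group
```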