On 24.07.2011 20:15, Madana_Babu wrote:

Hi Lei,

Thanks for your solution. It worked. Now I have another query.
After creating the multiple DF[[i]]'s, how do I aggregate them into one data
frame, say DF (I want to bind all the data frames into a single data frame)?
I have more than 1000 DF[[i]]'s; how can I bind them into one DF recursively?

Thanks in advance.
Regards,
Madana
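For combining that many pieces, a minimal sketch (assuming DF is a list whose
elements are data frames with identical columns) would be:

# bind every element of the list DF into one data frame in a single call
big_DF <- do.call(rbind, DF)
# Reduce(rbind, DF) gives the same result, but binds pairwise and is
# usually much slower for 1000+ pieces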
Madana,

The code below may work (untested though):

# above is the same as you wrote
require(multicore)
require(sqldf)

read.data.example <- function(f)
{
    # read one tab-delimited log file
    dat <- read.csv(f, header = FALSE, sep = "\t", na.strings = "", dec = ".",
                    strip.white = TRUE, fill = TRUE)
    # aggregate with sqldf; the rest of the query was cut off in the archive
    data_1 <- sqldf("SELECT V2, V14, MIN(V16) FROM dat")
    data_1
}

library(multicore)
options(cores = 10)    # let mclapply() use up to 10 of the cores
getOption("cores")     # check the setting
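To actually run this over all the log files in parallel, the usual multicore
pattern would be to mclapply() over the file list; the path below is only a
placeholder taken from the setwd() call later in the thread:

files <- list.files("/XXX////2011/07/20", full.names = TRUE)   # placeholder path
results <- mclapply(files, read.data.example)   # one worker per file, cores from options(cores = ...)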
On Fri, Jul 22, 2011 at 11:35 AM, Madana_Babu wrote:

Hi,

Can you please explain how I can perform this on a multicore processor? I have
a machine with 16 cores, and this could run much faster if I used all of them.

Thanks in advance...

Regards,
Madana
On Thu, Jul 21, 2011 at 3:20 PM, Madana_Babu wrote:

Hi all,

Currently I am trying to do this in R running on a multicore processor. I am
not sure how to use the mclapply() function for this task. Can anyone help me?

# Setting up the working directory
setwd("/XXX////2011/07/20")
library(sqldf)
# Data is available in the form of multiple structured logs ...
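As a rough sketch of how mclapply() could fit in here: it is essentially a
parallel drop-in for lapply(), so the per-file work is wrapped in a function
and the loop over files is swapped from lapply() to mclapply(). The worker
function and file list below are hypothetical stand-ins, not from the
original post:

library(multicore)

# hypothetical per-file worker; replace with the read.csv()/sqldf() steps
process_file <- function(f) read.csv(f, header = FALSE, sep = "\t")

files <- list.files(".", full.names = TRUE)                 # hypothetical file list
# results <- lapply(files, process_file)                    # serial version
results <- mclapply(files, process_file, mc.cores = 16)     # parallel, up to 16 workers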
I have had good experiences with the foreach package, available on CRAN. It
includes some tutorials which might help you.

cheers,
Paul
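Following Paul's suggestion, a minimal foreach sketch might look like the
following; the doMC backend, the file list, and the per-file read are my
assumptions, not part of the original posts:

library(foreach)
library(doMC)

registerDoMC(cores = 16)                        # register 16 parallel workers

files <- list.files(".", full.names = TRUE)     # hypothetical file list
DF <- foreach(f = files, .combine = rbind) %dopar% {
    read.csv(f, header = FALSE, sep = "\t")     # per-file work goes here
}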
Make this reproducible.
On Wed, Jul 20, 2011 at 6:44 PM, Madana_Babu wrote:

Hi all,

I have R installed on a box running Red Hat Linux on a machine with 16 cores.
I am handling a huge dataset (about 5 GB). Let's assume that my data is in the
form of multiple structured logs. I access the data by using all.files().
Since by default the basic version ...