Thanks
When using 'scan' I had no problem reading a string that had 1000 'columns':

> x <- scan('/temp/tempxx.txt', what = list(0L, ''))
Read 14 records
> str(x)
List of 2
 $ : int [1:14] 129876 129876 129876 129876 129876 129876 129876 129876 129876 129876 ...
 $ : chr [1:14] "10101011100000011
On Friday 13 April 2012 at 05:44 -0700, efulas wrote:
> Thank you very much for your help, guys. Both messages helped me to run
> the data in R. However, R is omitting many columns from my data. Am I
> missing something?
Please read the posting guide. If you don't provide the code you ran and the
Thank you very much for your help, guys. Both messages helped me to run the
data in R. However, R is omitting many columns from my data. Am I missing
something?
Many Thanks
On 13/04/12 14:20, Milan Bouchet-Valat wrote:
> On Friday 13 April 2012 at 04:32 -0700, efulas wrote:
>> Dear All,
>>
>> I have a problem with my data. First problem is that my data is really
>> large and R is omitting some columns from my data. Is there any way to
>> read the whole data without omitting? Second problem is that R shows my
>> rows as "Inf" because it reads them as one number. I want them to be
>> separated like "1 0 1 0". Is there any way to do this in R?
>> Many Thanks,
>> Efe
On Friday 13 April 2012 at 04:32 -0700, efulas wrote:
> Dear All,
>
> I have a problem with my data. First problem is that my data is really
> large and R is omitting some columns from my data. Is there any way to
> read the whole data without omitting?
How did you import it? Please be precise
a "Inf" because R read it as
a one number. I want them to be seperated like "1 0 1 0" . Is there
anyway to do this in R?
Many Thanks,
Efe
Thanks for the help! All the methods above worked well and cleared up some
misunderstandings.
Thanks!
Haakon =)
Haakon,
as replicates imply that they all have the same data type, you can put
them into a matrix, which is often faster and needs less memory (though
whether that really matters depends on the number of replicates you
have: for a small number of replicates you won't see much effect anyway).
But
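A minimal sketch of that suggestion (the "rep_" column names follow the
thread; the row-mean step is only an illustration of whole-matrix arithmetic):

# collect the replicate columns into one numeric matrix
rep_cols <- grep("^rep_", names(data), value = TRUE)
m <- as.matrix(data[, rep_cols])

# arithmetic now runs on the whole block at once, e.g.:
m_scaled <- m / rowMeans(m)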
To get the equivalent of what your loop does, you could use
    lapply(data[, 3:5], function(x) x / ave(x, data$plateNo, FUN = mean))
but you might find the output of
    sapply(data[, 3:5], function(x) x / ave(x, data$plateNo, FUN = mean))
to be more useful.
- Phil Spector
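To make the ave() idiom concrete, here is a small made-up example (plateNo and
the rep_ columns follow the thread; the values are invented):

data <- data.frame(
  plateNo = c(1, 1, 1, 2, 2, 2),
  well    = 1:6,
  rep_1   = c(2, 4, 6, 10, 20, 30),
  rep_2   = c(1, 1, 4,  5,  5,  5),
  rep_3   = c(3, 3, 3,  8,  8,  8)
)

# each value divided by the mean of its plate; sapply() returns a matrix
norm <- sapply(data[, 3:5], function(x) x / ave(x, data$plateNo, FUN = mean))

# bind the normalised values back on as new columns
data <- cbind(data, setNames(as.data.frame(norm),
                             paste0(names(data)[3:5], "_norm")))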
Hello all,
I'm new to R and trying to figure out how to perform calculations on a large
dataset (300,000 data points). I have already written some code to do this, but
it is awfully slow. What I want to do is add a new column for each "rep_" column,
where I take each value and divide it by the mean of its plate
strating why the 'bad practice' part of working with raw
text files is something more than mere prescription.
Sincerely,
KeithC.
-Original Message-
From: Thomas Lumley [mailto:tlum...@u.washington.edu]
Sent: Monday, March 29, 2010 2:56 PM
To: Gabor Grothendieck
Cc: kMan; r-help;
On Mon, 29 Mar 2010, Gabor Grothendieck wrote:
On Mon, Mar 29, 2010 at 4:12 PM, Thomas Lumley wrote:
On Sun, 28 Mar 2010, kMan wrote:
This was *very* useful for me when I dealt with a 1.5Gb text file
http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
Two hours is a *very* long time to transfer a csv file to a db.
On Sat, Mar 27, 2010 at 4:19 AM, n.via...@libero.it wrote:
> Hi, I have a question:
> as I'm not able to import a csv file which contains a big dataset (100,000
> records), does someone know how many records R can handle without giving
> problems? What I'm facing when I try to import the file is that R generates
> more than 100,000 records and is very slow...
On Mon, Mar 29, 2010 at 4:12 PM, Thomas Lumley wrote:
> On Sun, 28 Mar 2010, kMan wrote:
>
>> This was *very* useful for me when I dealt with a 1.5Gb text file
>> http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
>>
>> Two hours is a *very* long time
On Sun, 28 Mar 2010, kMan wrote:
This was *very* useful for me when I dealt with a 1.5Gb text file
http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
Two hours is a *very* long time to transfer a csv file to a db. The author
of the linked article has not documented
are now dealing with,
so there is plenty of help. I find myself smiling inside & wanting to say
"welcome!"
Sincerely,
KeithC.
-Original Message-
From: Khanh Nguyen [mailto:nguyen.h.kh...@gmail.com]
Sent: Saturday, March 27, 2010 8:59 AM
To: n.via...@libero.it
Cc: r-help
This was *very* useful for me when I dealt with a 1.5Gb text file
http://www.csc.fi/sivut/atcsc/arkisto/atcsc3_2007/ohjelmistot_html/R_and_large_data/
On Sat, Mar 27, 2010 at 5:19 AM, n.via...@libero.it wrote:
> Hi, I have a question:
> as I'm not able to import a csv file which contains a big dataset
Try using read.csv.sql in sqldf. See example 13 on the sqldf home page:
http://code.google.com/p/sqldf/#Example_13._read.csv.sql_and_read.csv2.sql
Also read ?read.csv.sql
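A sketch of that call (the file and the filter column are made-up names; in
read.csv.sql the file's contents are referred to as the table 'file'):

library(sqldf)

# reads the csv into a temporary SQLite database and returns only the
# rows selected by the sql= clause, so the full file never sits in R
df <- read.csv.sql("mydata.csv",
                   sql = "select * from file where price > 100")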
On Sat, Mar 27, 2010 at 5:19 AM, n.via...@libero.it wrote:
> Hi, I have a question:
> as I'm not able to import a csv file which
or of Statistics
Department of Statistics
Yale University
http://www.stat.yale.edu/~jay
(original message below)
--
Message: 128
Date: Sat, 27 Mar 2010 10:19:33 +0100
From: n.via...@libero.it
To: r-help
Subject: [R] large dataset
On 27.03.2010 10:19, n.via...@libero.it wrote:
> What I'm facing when I try to import the file is that R generates more than
> 100,000 records and is very slow...
> thanks a lot!!!
Maybe your physical memory is too limited. R uses it, and if your data
are too large, Linux and Windows start swapping
I do not know what the limit for R is. But for your problem you may try this:
- Install a MySQL server (download it from www.mysql.com)
- From inside MySQL you may import that CSV into a MySQL table
- Then, using RMySQL or RODBC, you choose the fields to use and
import them into R.
Good luck
Ca
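A sketch of that route (database name, table, and credentials are
placeholders; the CSV is assumed to have been loaded into MySQL beforehand,
e.g. with LOAD DATA INFILE):

library(RMySQL)

con <- dbConnect(MySQL(), dbname = "mydb", user = "me",
                 password = "secret", host = "localhost")

# choose fields and rows in SQL so only that subset is imported into R
dat <- dbGetQuery(con, "SELECT field1, field2 FROM big_table LIMIT 100000")

dbDisconnect(con)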
Hi, I have a question:
as I'm not able to import a csv file which contains a big dataset (100,000
records), does someone know how many records R can handle without giving
problems? What I'm facing when I try to import the file is that R generates
more than 100,000 records and is very slow...
thanks a lot!
If you need more aggregations on the stock (I assume that's what the
first column is), I'd use the data.table package. It allows fast
indexing and merge operations. That's handy if you have other features
of a stock (like company size or industry sector) that you'd like to
include in the aggregation.
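A sketch of that approach (the column names are invented, since the post
doesn't show them):

library(data.table)

DT <- data.table(sym    = c("CVX", "CVX", "XOM"),
                 sector = c("energy", "energy", "energy"),
                 price  = c(73.25, 73.30, 71.10),
                 vol    = c(81400, 100, 300))
setkey(DT, sym)   # fast indexing on the stock symbol

# aggregate volume by stock and a stock feature in one pass
DT[, list(total_vol = sum(vol)), by = list(sym, sector)]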
Try this using the development version of read.zoo in zoo (which we
source from R-Forge on the fly).
We use "NULL" in colClasses for those columns we don't need, but in
col.names we still have to include dummy names for them. Of what is
left, the index is the first three columns (1:3), which we
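The posted call is cut off above; the following is only a sketch of the
described pattern ("NULL" entries in colClasses, dummy names in col.names, a
multi-column index). It keeps the date column, so here the index spans the
first four remaining columns rather than the three the text mentions:

library(zoo)

# hypothetical file with rows like: CVX 20070201 9 30 51 73.25 81400 0
z <- read.zoo("trades.txt",
    colClasses   = c("NULL", "character", "character", "character",
                     "character", "numeric", "numeric", "NULL"),
    col.names    = c("sym", "date", "h", "m", "s", "price", "vol", "flag"),
    index.column = 1:4,   # of the surviving columns: date, h, m, s
    FUN = function(x) as.POSIXct(x, format = "%Y%m%d %H %M %S"))
# note: rows sharing a timestamp (as in the sample) would also need
# read.zoo's aggregate= argument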
Dear All,
I have a large data set that looks like this:
CVX 20070201 9 30 51 73.25 81400 0
CVX 20070201 9 30 51 73.25 100 0
CVX 20070201 9 30 51 73.25 100 0
CVX 20070201 9 30 51 73.25 300 0
First, I would like to import it by merging columns 3, 4 and 5, since together
they form the timestamp. Then, I would like to
e something like SAS
instead?
Hopefully someone else will chip in, Edwin. Best of luck.
Simon.
- Original Message - From: "Edwin Sendjaja"
To: "Simon Pickett"
Cc:
Sent: Tuesday, January 06, 2009 2:53 PM
Subject: Re: [R] Large Dataset
Hi Simon,
My RAM is only
Below, you can see the R code.
But it gets stuck even at the first line (read.table...).
I don't know how to calculate this and write the result into a new table.
Edwin

data <- read.table("test.data")
data <- subset(data, Zusatz != "60" & Zusatz != "0")
list(EndpointKeepAliveTim
Edwin Sendjaja"
To: "Simon Pickett"
Cc:
Sent: Tuesday, January 06, 2009 5:04 PM
Subject: Re: [R] Large Dataset
Hi Simon,
Is SAS more powerful than R?
Well, I think I cannot afford to buy SAS.
Actually, my computer isn't really slow. I think 4GB RAM is big enough
for
For the mean, min, max and standard deviance (deviation, I suppose) you
don't need to store all the data in memory; you can compute them
incrementally. Read the file line by line (if it is a text file).
G.
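A sketch of that incremental computation (the file name follows the thread's
test.data; the column number is an assumption, and the inner loop is written
for clarity, not speed: it is Welford's online algorithm):

con <- file("test.data", open = "r")
n <- 0; m <- 0; M2 <- 0; lo <- Inf; hi <- -Inf
while (length(lines <- readLines(con, n = 10000)) > 0) {
  x <- as.numeric(sapply(strsplit(lines, "[ \t]+"), `[`, 3))  # 3rd column
  x <- x[!is.na(x)]
  lo <- min(lo, x); hi <- max(hi, x)
  for (v in x) {            # running mean and sum of squared deviations
    n  <- n + 1
    d  <- v - m
    m  <- m + d / n
    M2 <- M2 + d * (v - m)
  }
}
close(con)
c(mean = m, min = lo, max = hi, sd = sqrt(M2 / (n - 1)))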
On Tue, Jan 6, 2009 at 6:10 PM, Edwin Sendjaja wrote:
> Hi Ben,
>
> Using colClasses doesn't
The sqldf R package can import a file into an SQLite database and
extract a portion of it. You basically need two statements:
one to specify the name and format of the file and one to specify what
you want to extract. See home page at:
http://sqldf.googlecode.com
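A sketch of those two statements (the file name is invented; the Zusatz
column is borrowed from Edwin's snippet above):

library(sqldf)

# statement 1: name the source file
f <- file("big.csv")

# statement 2: give the file's format and say what to extract;
# only the matching rows are ever loaded into R
res <- sqldf("select * from f where Zusatz <> 60",
             file.format = list(header = TRUE, sep = ","))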
On Tue, Jan 6, 2009 at 12:10 PM,
Hi Ben,
Using colClasses doesn't improve the performance much.
With the data, I will calculate the mean, min, max, and standard deviance.
I have also failed to import the data into a MySQL database. I don't have much
knowledge of MySQL.
Edwin
> Edwin Sendjaja web.de> writes:
> > Hi Simon,
> >
> Hopefully someone else will chip in, Edwin. Best of luck.
>
> Simon.
>
>
> - Original Message -
> From: "Edwin Sendjaja"
> To: "Simon Pickett"
> Cc:
> Sent: Tuesday, January 06, 2009 2:53 PM
> Subject: Re: [R] Large Dataset
>
> >
luck.
Simon.
- Original Message -
From: "Edwin Sendjaja"
To: "Simon Pickett"
Cc:
Sent: Tuesday, January 06, 2009 2:53 PM
Subject: Re: [R] Large Dataset
Hi Simon,
My RAM is only 3.2 GB (actually it should be 4 GB, but my motherboard
doesn't support it).
R uses
Edwin Sendjaja web.de> writes:
>
> Hi Simon,
>
> My RAM is only 3.2 GB (actually it should be 4 GB, but my motherboard
> doesn't support it).
>
> R uses almost all of my RAM and half of my swap. I think memory.limit will
> not solve my problem. It seems that I need more RAM.
>
> Unfortunately, I
dwin Sendjaja"
> To: "Simon Pickett"
> Cc:
> Sent: Tuesday, January 06, 2009 12:24 PM
> Subject: Re: [R] Large Dataset
>
> > Hi Simon,
> >
> > Thanks for your reply.
> > I have read ?Memory but I don't understand how to use it. I am not sure
> > if
Yup, it was a typo. But I always try capital if lower case doesn't work.
Sorry.
- Original Message -
From: "David Winsemius"
To: "Simon Pickett"
Cc: "Edwin Sendjaja" ;
Sent: Tuesday, January 06, 2009 12:40 PM
Subject: Re: [R] Large Dataset
When I
Sent: Tuesday, January 06, 2009 12:24 PM
Subject: Re: [R] Large Dataset
Hi Simon,
Thanks for your reply.
I have read ?Memory but I don't understand how to use it. I am not sure if
that can solve my problem. Can you give me more detail?
Thanks,
Edwin
type
?memory
into R and that will explain what to do.
> > type
> > ?memory
> >
> > into R and that will explain what to do...
> >
> > S
> > - Original Message - From: "Edwin Sendjaja"
> > To:
> > Sent: Tuesday, January 06, 2009 11:41 AM
> > Subject: [R] Large Dataset
> >
> >
loaded via a namespace (and not attached):
[1] tools_2.8.0
On Jan 6, 2009, at 6:43 AM, Simon Pickett wrote:
type
?memory
into R and that will explain what to do...
S
- Original Message - From: "Edwin Sendjaja"
To:
Sent: Tuesday, January 06, 2009 11:41 AM
Subject: [R] Large Dataset
- Original Message -
> From: "Edwin Sendjaja"
> To:
> Sent: Tuesday, January 06, 2009 11:41 AM
> Subject: [R] Large Dataset
>
> > Hi all,
> >
> > I have a 3.1 GB dataset (with 11 columns and lots of data in int and
> > string).
> > If I use read.table, it
type
?memory
into R and that will explain what to do...
S
- Original Message -
From: "Edwin Sendjaja"
To:
Sent: Tuesday, January 06, 2009 11:41 AM
Subject: [R] Large Dataset
Hi all,
I have a 3.1 GB dataset (with 11 columns and lots of data in int and
string).
Hi all,
I have a 3.1 GB dataset (with 11 columns and lots of data in int and string).
If I use read.table, it takes very long. It seems that my RAM is not big
enough (overloaded); I have 3.2 GB RAM and 7 GB swap on 64-bit Ubuntu.
Is there a best solution for reading large data into R? I have seen that people
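For the read.table route itself, the colClasses idea mentioned earlier in the
thread looks like this (the file name and the 11 column types are assumptions;
nrows and the comment.char/quote settings are further common tweaks):

# declaring column types up front spares read.table its type-guessing pass
cls <- c(rep("integer", 5), rep("character", 6))
data <- read.table("big.dat", colClasses = cls,
                   nrows = 3e6,          # a slight over-estimate also helps
                   comment.char = "", quote = "")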