Message: 22
Date: Sun, 11 Mar 2007 21:33:04 -0500
From: jim holtman [EMAIL PROTECTED]
Subject: Re: [R] read.table for a subset of data
To: Wensui Liu [EMAIL PROTECTED]
Cc: r-help r-help@stat.math.ethz.ch
Message-ID:
[EMAIL PROTECTED]
Content-Type: text/plain
Hi R-experts,
I have data from four conditions of an experiment. I tried to create four
subsets of the data with read.table, for example,
read.table("Experiment.csv", subset = (condition == 1)).
I found a similar post in the archive, but the answer to that post was
no. Any new ideas about reading subsets directly?
As far as I know, I don't think you can do so with read.table. But
I am also thinking about RODBC and wondering if you could assign a DSN
to your .csv file and then use SQL to fetch the subset.
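Wensui's filter-while-reading idea can be sketched with the sqldf package
(my substitution for the RODBC/DSN route he mentions, not something named in
the thread; sqldf's read.csv.sql runs an SQL query against a csv file through
SQLite, referring to the file as 'file' inside the statement):

```r
## Sketch of filtering while reading, via the sqldf package (an assumption;
## the thread itself suggests RODBC with a DSN).
library(sqldf)

## A small example file standing in for the real Experiment.csv,
## so the sketch is self-contained.
write.csv(data.frame(condition = rep(1:4, each = 2), y = 1:8),
          "Experiment.csv", row.names = FALSE)

## read.csv.sql refers to the csv as 'file' inside the SQL statement
cond1 <- read.csv.sql("Experiment.csv",
                      sql = "select * from file where condition = 1")
```

Only the matching rows ever reach R, which is the point of the DSN/SQL idea.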
On 3/11/07, gnv shqp [EMAIL PROTECTED] wrote:
Hi R-experts,
I have data from four conditions of
Why can't you read in the whole data set and then create the subsets? This
is easily done with 'split'. If the data is too large, then consider a
database.
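Jim's read-then-split suggestion might look like this; the file name and the
'condition' column come from the original question, and the small data frame
written first is only there to make the sketch self-contained:

```r
## Minimal sketch: read the whole file once, then split by condition.
write.csv(data.frame(condition = rep(1:4, each = 3), y = rnorm(12)),
          "Experiment.csv", row.names = FALSE)

dat     <- read.csv("Experiment.csv")
by.cond <- split(dat, dat$condition)   # named list, one data frame per condition

cond1 <- by.cond[["1"]]                # the rows where condition == 1
```

'split' returns a list whose names are the levels of the splitting variable,
so each condition's subset is one indexing step away.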
On 3/11/07, gnv shqp [EMAIL PROTECTED] wrote:
Hi R-experts,
I have data from four conditions of an experiment. I tried to create
Jim,
Glad to see your reply.
Referring to your email, what if I just want to read 10 rows from a csv
table with 100K rows? Do you think it is a waste of resources to read
the whole table in?
Any thoughts?
wensui
On 3/11/07, jim holtman [EMAIL PROTECTED] wrote:
Why can't you read in the
If you know what 10 rows to read, then you can 'skip' to them, but the
system still has to read each line at a time.
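A sketch of the 'skip' point: read.csv can jump past lines with skip= and
stop after nrows=, but the skipped lines are still scanned sequentially.
The example file below is made up for illustration:

```r
## Example file: header line plus 100 data rows.
write.csv(data.frame(condition = rep(1:4, each = 25), y = seq_len(100)),
          "Experiment.csv", row.names = FALSE)

hdr <- names(read.csv("Experiment.csv", nrows = 1))  # grab the header first

## skip = 51 drops the header plus the first 50 data rows,
## so this reads data rows 51-60; the file is still read line by line
## up to that point.
rows <- read.csv("Experiment.csv", skip = 51, nrows = 10,
                 header = FALSE, col.names = hdr)
```

Because header = FALSE after skipping, the column names are re-applied via
col.names.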
I have a 200,000 line csv file of numerics that takes me 4 seconds to read
in with 'read.csv' using 'colClasses', so I would guess your 100K line file
would take half of that.
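The 'colClasses' trick Jim mentions: declaring the column types up front lets
read.csv skip its type-guessing pass, which is where much of the time goes on
large files. The two-column layout below is an assumption for illustration:

```r
## Example file standing in for a large csv.
write.csv(data.frame(condition = rep(1:4, 25), y = rnorm(100)),
          "Experiment.csv", row.names = FALSE)

## Declaring the types avoids per-column type guessing; the nrows hint
## also lets read.csv pre-allocate instead of growing its buffers.
dat <- read.csv("Experiment.csv",
                colClasses = c("integer", "numeric"),
                comment.char = "", nrows = 100)
```

On a file of a few hundred thousand rows the difference is typically several-
fold, which is consistent with the 4-second figure above.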