Dear Jim,
It works great. I appreciate your help.
Sincerely,
Alex
On 6/7/07, jim holtman <[EMAIL PROTECTED]> wrote:
I took your data and duplicated the data line so that I had 100,000 rows;
it took about 40 seconds to read in when specifying colClasses:

> system.time(x <- read.table('/tempxx.txt', header=TRUE,
+     colClasses=c('factor', rep('numeric', 49))))
   user  system elapsed
  40.98    0.46   42.39
> str(x)
'data.frame':
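As a sketch of the approach Jim describes (the file below is a small generated stand-in, since the real data file is not available; the layout of one ID column plus 49 numeric columns is taken from this thread), passing colClasses lets read.table() skip per-column type detection:

```r
# Build a tiny demo file with the shape discussed in the thread:
# one ID column plus 49 numeric sample columns (an assumption here).
dat <- data.frame(ID = paste0("s", 1:100),
                  matrix(rnorm(100 * 49), nrow = 100))
tf <- tempfile(fileext = ".txt")
write.table(dat, tf, sep = "\t", row.names = FALSE, quote = FALSE)

# Declaring every column type up front avoids read.table()'s type
# guessing, which is what makes the big read fast in Jim's timing.
cc <- c("factor", rep("numeric", 49))
x <- read.table(tf, header = TRUE, sep = "\t", colClasses = cc)
str(x, list.len = 3)
```

On the real file, also setting nrows to roughly the known row count and comment.char = "" can save further time and memory.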
Erm... is that a typo? Are we really talking about 23800 rows and 49
columns? Because that doesn't seem like that many.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of ssls sddd
Sent: 07 June 2007 10:48
To: r-help@stat.math.ethz.ch
Subject: Re: [R] How to load a big txt file
Dear Jim,
Thanks a lot! The size of the text file is 189,588,541 bytes.
It consists of 238305 rows (including the header) and
50 columns (the first column is the ID and the rest are the 49 samples).
The first row looks like:
"ID"
AIRNS_p_Sty5_Mapping250K_Sty_A09_50156.cel
AIRNS_p_Sty5_Mapping250K_St...
Dear Chung-hong Chan,
Thanks! Can you recommend a text editor for splitting? I tried UltraEdit
and TextPad but could not find a way to split files with them.
Sincerely,
Alex
On 6/6/07, Chung-hong Chan <[EMAIL PROTECTED]> wrote:
On Wed, 6 Jun 2007, Charles C. Berry wrote:
Alex,
See
R Data Import/Export Version 2.5.0 (2007-04-23)
search for 'large' or 'scan'.
Usually, taking care with the arguments
nlines, what, quote, comment.char
should be enough to get scan() to cooperate.
You will need around 1GB of RAM to store the result, so if you are work...
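A minimal sketch of the scan() route Charles points to (again using a small generated stand-in file; the what template mirrors the 1 ID + 49 numeric column layout assumed from this thread):

```r
# Small stand-in file with the assumed layout (1 ID + 49 numerics).
dat <- data.frame(ID = paste0("s", 1:20),
                  matrix(rnorm(20 * 49), nrow = 20))
tf <- tempfile(fileext = ".txt")
write.table(dat, tf, sep = "\t", row.names = FALSE, quote = FALSE)

# 'what' tells scan() the type of each of the 50 columns up front;
# quote and comment.char are disabled, as the manual advises for speed.
tmpl <- c(list(character()), rep(list(numeric()), 49))
names(tmpl) <- c("ID", paste0("V", 1:49))
raw <- scan(tf, what = tmpl, sep = "\t", skip = 1,
            quote = "", comment.char = "", quiet = TRUE)
x <- as.data.frame(raw, stringsAsFactors = FALSE)
```

Reading in bounded passes with the nlines argument keeps peak memory down on machines near the 1GB limit Charles mentions.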
An easy solution would be to split your big txt file with a text editor,
e.g. into chunks of 5000 rows each,
and then combine the resulting data frames into one.
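The same split-and-combine idea can be done inside R itself with read.table()'s skip/nrows arguments instead of an external editor. A sketch under the thread's assumed layout (a header line, one factor ID column, 49 numeric columns); the chunk size is arbitrary:

```r
# Read a file in chunks of 'chunk' rows and rbind the pieces together.
# The column layout (1 factor + 49 numerics) is an assumption here.
read_in_chunks <- function(file, chunk = 5000) {
  cc <- c("factor", rep("numeric", 49))
  nms <- names(read.table(file, header = TRUE, nrows = 1, sep = "\t"))
  pieces <- list()
  skip <- 1                      # skip the header line
  repeat {
    piece <- tryCatch(
      read.table(file, header = FALSE, skip = skip, nrows = chunk,
                 sep = "\t", colClasses = cc, col.names = nms),
      error = function(e) NULL)  # read.table errors once no lines remain
    if (is.null(piece) || nrow(piece) == 0) break
    pieces[[length(pieces) + 1]] <- piece
    skip <- skip + nrow(piece)
  }
  do.call(rbind, pieces)
}

# Tiny demo: 20 rows read back in chunks of 7.
dat <- data.frame(ID = paste0("s", 1:20),
                  matrix(rnorm(20 * 49), nrow = 20))
tf <- tempfile(fileext = ".txt")
write.table(dat, tf, sep = "\t", row.names = FALSE, quote = FALSE)
x <- read_in_chunks(tf, chunk = 7)
```

Each chunk is bounded in size, so peak memory during parsing stays small; only the final rbind holds the full data frame.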
On 6/7/07, ssls sddd <[EMAIL PROTECTED]> wrote:
Dear list,
I need to read a big txt file (around 130 MB; 23800 rows and 49 columns)
for downstream clustering analysis.

I first used

Tumor <- read.table("Tumor.txt", header = TRUE, sep = "\t")

but it took a long time and failed. However, it had no problem if I
used only 3 columns of the data.

Is there ...