First problem: the "blank" field in the first position of the first line. Try
removing it, so that the file looks like this:
"1","2","3","4"
"1",484,43,67,54
"2",54,35,67,34
"3",69,76,78,55
"4",67,86,44,34
Second: your colnames and rownames are numeric; R recognizes them but puts an
X in front (but it recognize
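For example, with that blank field removed, something along these lines should
give a plain numeric matrix (a sketch, not tested against your file;
check.names = FALSE keeps the numeric column names instead of the X-prefixed
ones, and the first column is taken as row names because the header is one
field shorter than the data rows):
A <- as.matrix(read.csv("~/Desktop/Results/Cfile.csv",
                        header = TRUE, check.names = FALSE))
# str(A)   # should show a 4 x 4 numeric matrix with dimnames "1".."4"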
Good morning,
I'm trying to read the file into an array with the following code:
A <- as.matrix(read.csv("~/Desktop/Results/Cfile.csv", header = FALSE, sep = ","))
The content of the file is:
" ","1","2","3","4"
"1", 484,43,67,54
"2",54,35,67,34
"3",69,76,78,55
"4",67,86,44,34
Wh
On Thu, 16 Jun 2011, Joel wrote:
> Found what was wrong with my code
Please follow the posting guide, and give the context asked for there.
> changed:
> inFile <- file('gwas_data_chr10.gen')
> to:
> inFile <- file('gwas_data_chr10.gen','r')
> Dunno why it's important but it is :P
You didn't open the file
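With an unopened connection, readLines() opens the file, reads, and closes it
again on each call, so repeated calls keep starting over from the top; opening
it explicitly keeps your place between reads. A minimal sketch of reading the
file line by line over an open connection (file name taken from the thread,
the processing step is just a placeholder):
con <- file("gwas_data_chr10.gen", open = "r")
repeat {
  line <- readLines(con, n = 1)
  if (length(line) == 0) break        # end of file reached
  fields <- strsplit(line, " ")[[1]]  # split on single spaces; adjust as needed
  # ... do something with 'fields' for this row ...
}
close(con)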
Found what was wrong with my code
changed:
inFile <- file('gwas_data_chr10.gen')
to:
inFile <- file('gwas_data_chr10.gen','r')
Dunno why it's important but it is :P
Hi
I got a file that looks like this (I have shortened it a lot; the real file is
over 200k rows long):
chr10 rs7909677 101955 A G 0 1 0 1 0 0...
chr10 rs2436025 238506 C G 1 0 0 1 0 0...
chr10 rs11253562 148946 C T 0 1 0 0 1 ...
chr10 rs1105116 230788 G T 0 0 1 0 0 1...
chr10 rs4881551 149076 A G 0
On Sun, Mar 20, 2011 at 3:47 PM, algotr8der wrote:
> Hello folks - I have been trying to figure this out. I have a set of very
> large files that are of this format
>
> , , , ,
> 1/4/1999,9:31:00 AM,blah, blah, blah
> 1/4/1999,9:32:00 AM,blah, blah, blah
> 1/4/1999,9:33:00 AM,blah, blah, blah
>
>
Thanks Jim. Eventually I do want to store the records in a database... so
MySQL. But right now I want to run some analytics on the data, so I'm looking
for a quick and dirty solution that gives me the flexibility to extract
data for various time periods. I think a Perl script that uses Text::CSV_XS
It depends on what version of R you are using. If you are running a 32-bit
version and all the columns are numeric, then with about 20 columns I would
guess that might require 300MB for a single copy of the object, and for
reading in and then subsetting you might require 3-4X that space.
Thanks Jim for the reply. The file has 1,183,318 rows and there are 20 such
files.
Too big for R to handle?
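As a rough back-of-the-envelope check on the estimate above (assuming all 20
columns end up stored as doubles, 8 bytes each):
1183318 * 20 * 8 / 2^20   # roughly 180 MB for one in-memory copy
so even 3-4 copies during reading and subsetting should fit on a machine with
a couple of GB of memory, though a 32-bit R session will not leave much room
to spare.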
How big is the file? Why not read the entire file in and then use
'subset' to extract only the data that you want? If the file is too
large to be able to read in, then you could put it in a database and
use SQL to extract what you want. You could also create a 'perl'
script to filter the data be
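A minimal sketch of that read-then-subset approach (the file name and column
names here are made up; the sample in this thread suggests the first two
fields are a date and a time):
dat <- read.csv("prices.csv", header = FALSE, skip = 1,   # skip the blank header line
                stringsAsFactors = FALSE,
                col.names = c("date", "time", "v1", "v2", "v3"))
dat$date <- as.Date(dat$date, format = "%m/%d/%Y")
wanted <- subset(dat, date >= as.Date("1999-01-04") &
                      date <= as.Date("1999-03-31"))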
Hello folks - I have been trying to figure this out. I have a set of very
large files that are of this format
, , , ,
1/4/1999,9:31:00 AM,blah, blah, blah
1/4/1999,9:32:00 AM,blah, blah, blah
1/4/1999,9:33:00 AM,blah, blah, blah
I want to write R code that reads only that data between a start and end date
Dear Gabor and Jim
You both gave me amazing solutions.
I will use them.
Thanks!
Nilza
On Tue, Oct 5, 2010 at 2:11 AM, Gabor Grothendieck wrote:
> On Sat, Oct 2, 2010 at 11:31 PM, Nilza BARROS wrote:
> > Dear R-users,
> >
> > I would like to know how I could read a file with different line lengths
On Sat, Oct 2, 2010 at 11:31 PM, Nilza BARROS wrote:
> Dear R-users,
>
> I would like to know how I could read a file with different line lengths.
> I need to read this file and create an output to feed my database.
> So after reading I'll need to create an output like this
>
> "INSERT INTO TEMP (DATA,
Is this what you are looking for:
> input <- readLines(textConnection(" 2010 10 01 00
+ *82599 -35.25 -5.91 52 1*
+ 1008.0 -115 3.1 298.6 294.6 64
+ 2010 10 01 00
+ *83649 -40.28 -20.26 4 7
+ *1011.0 - 0 0.0 298.4 296.1 64
+ 1000.0 96 40
Sorry, guys
I couldn't explain what I really wanted.
I have a file with many stations and a lot of information for each one.
I need to identify the line where the station information starts. After that
I'd like to store that data (related to each station) so that it can be
worked with separately.
If I w
Hi Nilza,
Just to add to David's comments, if you are reading in your file with
read.table(..., fill=TRUE), and assuming that you haven't yet replaced
'-' with NA, you don't need grep. You can just use the number of NAs
in each line to locate data blocks.
Date records have 3 NAs
Location records
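A rough sketch of that idea (file name taken from Nilza's message below; the
NA count for the location/station records is cut off above, so the value used
for it here is only a placeholder):
indata <- read.table("d2010100100.txt", fill = TRUE, stringsAsFactors = FALSE)
indata[indata == ""] <- NA              # fill = TRUE pads character columns with "", not NA
na.count <- rowSums(is.na(indata))
date.rows  <- which(na.count == 3)      # date records have 3 NAs (per the post above)
block.rows <- which(na.count == 2)      # placeholder count for the station/location records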
On Oct 3, 2010, at 9:40 PM, Nilza BARROS wrote:
Hi, Michael
Thank you for your help. I have already done what you said.
But I am still having problems dealing with my data.
I need to split the data according to station.
I was able to identify where the station information starts using:
my.data <- file("d2010100100.txt", open = "rt")
indata <- read
Hello Nilza,
If your file is small you can read it into a character vector like this:
indata <- readLines("foo.dat")
If your file is very big you can read it in batches like this...
MAXRECS <- 1000 # for example
fcon <- file("foo.dat", open="r")
indata <- readLines(fcon, n=MAXRECS)
The number
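One way the batch loop might continue (a sketch, not the original poster's
code; MAXRECS is repeated so the snippet stands on its own):
MAXRECS <- 1000
fcon <- file("foo.dat", open = "r")
repeat {
  indata <- readLines(fcon, n = MAXRECS)
  if (length(indata) == 0) break   # nothing left to read
  # ... process this batch of up to MAXRECS lines ...
}
close(fcon)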
Dear R-users,
I would like to know how I could read a file with different line lengths.
I need to read this file and create an output to feed my database.
So after reading I'll need to create an output like this:
"INSERT INTO TEMP (DATA,STATION,VAR1,VAR2) VALUES (20100910,837460,39,390)"
I mean, eac
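A small sketch of building one such INSERT statement with sprintf(), assuming
the four values have already been pulled out of a record (the variable names
are made up):
obs.date <- 20100910L
station  <- 837460L
var1     <- 39L
var2     <- 390L
sql <- sprintf("INSERT INTO TEMP (DATA,STATION,VAR1,VAR2) VALUES (%d,%d,%d,%d)",
               obs.date, station, var1, var2)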