Great,
works perfectly!
Thanks a lot
Maxim
2010/1/30 baptiste auguie
Hi again,
Below are two versions, depending on whether you want to use scan or read.table:

## with scan
library(reshape)  # for melt()
library(plyr)     # for llply() and ldply()
listOfFiles <- list.files()
d <- llply(listOfFiles, scan)
names(d) <- basename(listOfFiles)
melt(d)

## with read.table
listOfFiles <- list.files()
names(listOfFiles) <- basename(listOfFiles)
d <- ldply(listOfFiles, read.table)  # file names end up in the .id column
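To illustrate what the scan version produces, here is a small self-contained sketch (the file names and values are made up, not the poster's data): melt() on a named list of numeric vectors returns a long data frame with a value column and an L1 column holding the name of the originating list element.

```r
library(reshape)

# hypothetical stand-in for two scanned files
d <- list("a.txt" = c(0.92, -0.31), "b.txt" = c(0.06))
m <- melt(d)
# m$value holds the numbers, m$L1 the file each number came from
m
```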
Hi,
my data is really not spectacular: each of the 6 files (later several
hundred) contains correlation coefficients in plain-text format, like:
0.923960073
0.923960073
0.612571344
0.064183275
0.007733399
-0.315444372
-0.064591277
-0.268336142
...
with between 1,000 and 13,000 rows.
Scanning f
Hi,
Hadley recently proposed a strategy using plyr for a very similar problem:
listOfFiles <- list.files()
names(listOfFiles) <- basename(listOfFiles)
library(plyr)
d <- ldply(listOfFiles, scan)
Even if you don't want to use plyr, it's always better to group things
in a list rather than clutter your workspace with many separate objects.
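A self-contained sketch of that plyr pattern (temporary files stand in for the real data; the file names and values are invented): ldply() applies the reading function to each element of a named vector and row-binds the results, carrying each element's name in an .id column.

```r
library(plyr)

# sandbox: two fake one-column numeric files
dir <- tempfile(); dir.create(dir)
writeLines(c("0.92", "-0.31"), file.path(dir, "a.txt"))
writeLines("0.06", file.path(dir, "b.txt"))

listOfFiles <- list.files(dir, full.names = TRUE)
names(listOfFiles) <- basename(listOfFiles)
d <- ldply(listOfFiles, read.table)  # .id = file name, V1 = the values
d
```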
Hi,
I have many files, each containing one column of data. I'd like to use the scan
function to parse the data, then bind the results into one large vector.
I tried this:
count <- 1
files <- list.files() # all files in the working directory
for(i in files) {
  tmp <- scan(i)
  assign(files[count], tmp)
  count <- count + 1
}
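For comparison, the collection step above can be written without assign(), keeping everything in one list and flattening it at the end (a sketch using temporary files; assumes plain one-column numeric text files like the poster's):

```r
# sandbox: two fake one-column data files
dir <- tempfile(); dir.create(dir)
writeLines(c("0.92", "-0.31"), file.path(dir, "a.txt"))
writeLines("0.06", file.path(dir, "b.txt"))

files <- list.files(dir, full.names = TRUE)
d <- lapply(files, scan)              # one numeric vector per file
names(d) <- basename(files)           # label each vector by its file name
big <- unlist(d, use.names = FALSE)   # bind everything into one large vector
```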