One correction: I had hard-coded the last statement for testing
with the data provided. Change it to this for generality:
result <- array(nums, c(nr, nc, n), dimnames = list(NULL, NULL, L[breaks]))
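As a side note, array() expects its dimnames as a list; c(NULL, NULL, x) would collapse to a plain vector and drop the first two entries. A tiny self-contained illustration, with made-up numbers and names standing in for nums and L[breaks]:

```r
## 2 x 3 x 2 array; only the third dimension (species) gets names
nums   <- 1:12
spcnms <- c("SPECIES1", "SPECIES2")
result <- array(nums, c(2, 3, 2), dimnames = list(NULL, NULL, spcnms))
## a single species' grid can then be extracted by name
result[, , "SPECIES2"]
```

Because array() fills column-major, the second slice holds 7:12, so result[1, 1, "SPECIES2"] is 7.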
On 12/22/05, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
One way to do this is to use read.fwf. I have borrowed Jim's
use of scan, and a similar calculation gives the indexes
of the species-name lines, breaks. We then determine the common
number of rows and columns in each species' grid.
The second group of statements replaces all 9's with spaces
so that, upon parsing, those cells come through as NA.
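The steps just described can be sketched as follows on a tiny made-up file (the layout is assumed from the excerpt: one species-name line followed by equal-sized rows of digits; the exact calculation of breaks in the full message isn't shown, so grep stands in for it here):

```r
## read the file as one character vector, one line per element
L      <- scan(textConnection("SPECIES1\n901\n011\nSPECIES2\n990\n100"),
               what = "", sep = "\n")
breaks <- grep("^SPECIES", L)            # indexes of the name lines
nr     <- breaks[2] - breaks[1] - 1      # common number of rows
nc     <- nchar(L[breaks[1] + 1])        # common number of columns
body   <- gsub("9", " ", L[-breaks])     # 9 -> blank so read.fwf yields NA
maps   <- read.fwf(textConnection(body), widths = rep(1, nc))
```

maps is then a data frame of nr * nspecies rows and nc single-digit columns, with NA where the raw data had 9, ready to be reshaped into an array as in the corrected statement above.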
Here's one possibility, if you know the number of species and the numbers of
rows and columns beforehand, and the dimensions for all species are the
same.
readSpeciesMap <- function(fname, nspecies, nr, nc) {
    spcnames <- character(nspecies)
    spcdata <- array(0, c(nc, nr, nspecies))
    ## one name line, then nr digit-string rows per species (this body
    ## completes the truncated original; the layout is assumed)
    txt <- readLines(fname)
    for (i in seq_len(nspecies)) {
        blk <- txt[(i - 1) * (nr + 1) + seq_len(nr + 1)]
        spcnames[i] <- sub("^ +", "", blk[1])
        spcdata[, , i] <- as.integer(unlist(strsplit(blk[-1], "")))
    }
    dimnames(spcdata) <- list(NULL, NULL, spcnames)
    spcdata
}
Here is a way of reading the data into a 'list'. You can convert the list
to any array of the proper dimensions.
> input <- scan('/tempxx.txt.r', what='')
Read 21 items
> input
[1] "SPECIES1" "999001099" "900110109" "011101000" "901100101" "110100019"
[7] "901110019" "SPECIES2" "99999" "9
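Converting such a scanned vector into per-species pieces can be sketched like this (the input values here are made up from the excerpt above; tokens starting with "SPECIES" are taken to be names, the rest digit-string rows, with 9 meaning missing):

```r
input  <- c("SPECIES1", "999001099", "900110109",
            "SPECIES2", "999999999", "011101000")
isname <- grepl("^SPECIES", input)
## group the data rows by which name line preceded them
rows   <- split(input[!isname], cumsum(isname)[!isname])
names(rows) <- input[isname]
## one species' rows as an integer matrix
m <- do.call(rbind, lapply(strsplit(rows$SPECIES1, ""), as.integer))
```

From here, rows can be flattened into an array of dimensions c(nr, nc, nspecies) once the common grid size is known.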
Hi,
I need some help finding a function to read a large text file into an
array in R. The data are essentially presence/absence/NA data for many
species and come as a grid, with each species name (after two spaces) at the
beginning of the matrix defining the map for that species. An exc