Hello,
Our statistics group is evaluating the use of R for computing some statistical indices.
We have some SAS datasets (about 120 Mb) and we would like to evaluate R's performance in computing the mean, percentiles, and the Gini index, both for a population and for a survey sample.
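For reference, this is the kind of computation I mean. A minimal sketch follows; gini() is a helper of my own (it is not a base R function, and contributed packages offer tested implementations):

gini <- function(x) {
    ## G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n, with x sorted
    x <- sort(x)
    n <- length(x)
    2 * sum(seq_len(n) * x) / (n * sum(x)) - (n + 1) / n
}
x <- rlnorm(1000)                               # example data
mean(x)                                         # mean
quantile(x, probs = c(0.25, 0.50, 0.75, 0.90))  # percentiles
gini(x)                                         # Gini index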
I need to open a dataset. As far as I understand, I have to follow a code sequence like this:


## memory.limit() is Windows-only; the new limit is given in Mb
alfa <- 2                      # a multiplier for the memory limit
memory.limit(size = alfa * round(memory.limit() / 1048576, 2))
library(foreign)
hereis <- read.xport("C:/R/{my exported SAS file}")

The size of {my exported SAS file} is 120 Mb.
Is it correct to load the whole file into memory in one variable (hereis)?
With an alfa (the multiplier) of 2, I get the following errors:
   Error: cannot allocate vector of size 214 Kb
   In addition: Warning message:
   Reached total allocation of 446Mb: see help(memory.size)
How can I solve this problem?
Is the R language able to manage datasets of 100-150 Mb? And under which conditions?
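For what it's worth, this is how I would check the actual footprint once the data is loaded (a minimal sketch, assuming hereis from the code above was read successfully; memory.size() is Windows-only):

as.numeric(object.size(hereis)) / 1048576  # size of the data frame, in Mb
gc()                                       # report and reclaim unused memory
memory.size(max = TRUE)                    # maximum memory used so far (see help(memory.size))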

I'm looking for information about the use of R on some specific problems.
I'm also looking for example code of complex programs in R.
The program I must build could be described by the following steps (a rough sketch follows the list):
1) open a 120 Mb SAS dataset
2) merge it with a small dataset of universe weights
3) calculate a survey index
4) store the result in a file
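A minimal sketch of these steps is below; the file names, the merge key (stratum) and the variable names (income, weight) are placeholders of mine, and the "survey index" is shown as a simple weighted mean:

library(foreign)
## 1) open the 120 Mb SAS transport dataset
big <- read.xport("C:/R/survey.xpt")
## 2) merge it with the small dataset of universe weights
wts <- read.xport("C:/R/weights.xpt")
merged <- merge(big, wts, by = "stratum")
## 3) calculate a survey index (here a weighted mean)
index <- weighted.mean(merged$income, w = merged$weight)
## 4) store the result in a file
write.table(data.frame(index = index), file = "C:/R/index.txt", row.names = FALSE)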
I'm also looking for developers/users of the R language.

Thank you for any advice.
Yours faithfully

Diego Moretti

--
============================================================
Diego Moretti                           ([EMAIL PROTECTED])
Italian National Statistical Institute  (ISTAT)
