Dear R helpers,

I created a fairly large table (206,700+ rows) in MySQL and exported it
to a csv file, but I can't read the whole thing into R. I am using:

> base <- read.csv("/path/to/file.csv", header=FALSE, sep=",", nrows=206720)

R doesn't complain, but it only reads 128,328 observations (the number of
columns does match the original table):

> dim(base)
[1] 128328    134
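
If it helps with the diagnosis, I can also run a quick check on the raw
file from within R; a rough sketch of what I have in mind (same path and
separator as above, 134 expected fields per row):

# count physical lines and per-row field counts; rows whose field count
# differs from 134 (e.g. because of unbalanced quotes or stray separators)
# would explain a silently short read
length(readLines("/path/to/file.csv"))
fields <- count.fields("/path/to/file.csv", sep = ",", quote = "\"")
table(fields)                 # distribution of field counts per row
head(which(fields != 134))    # first few suspicious rows, if any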

In case it's useful, here is my system's profile:

[~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 30
file size               (blocks, -f) unlimited
pending signals                 (-i) 31547
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 100
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

I also tried starting R with a larger vector heap:
[~]$ R --max-vsize=500M
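
If it turns out to be a memory limit, I was also thinking of reading the
file in chunks instead of a single call; a rough sketch of the idea (the
50,000-row chunk size is arbitrary, and colClasses would probably need to
be set so the chunks bind cleanly):

# read the csv in fixed-size chunks until the end of the file
chunks <- list()
skip <- 0
repeat {
  chunk <- tryCatch(
    read.csv("/path/to/file.csv", header = FALSE, sep = ",",
             skip = skip, nrows = 50000),
    error = function(e) NULL)   # read.csv errors once skip passes the last line
  if (is.null(chunk) || nrow(chunk) == 0) break
  chunks[[length(chunks) + 1]] <- chunk
  skip <- skip + nrow(chunk)
}
base <- do.call(rbind, chunks)
dim(base)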


I haven't found any obvious way to get the remaining rows in. Is this a
system-wide limit, or something in R? Where should I look for a solution?
Any pointers would be greatly appreciated.

I'm using Fedora 12 (x86_64, 64-bit) on a laptop with 4 GB of RAM.
 
Thank you,
Alex
