Hello,

I am experiencing a problem loading 30 GB BED files into memory. My
call to read.table raises the error:

    Error in unique(x) : length xxxxxx is too large for hashing

The error is generated by the function MKsetup in R's unique.c. Even
after increasing the limit there by a factor of 10,000, the error
persists. I suspect that change only lets the function pull more data
into RAM, and I am not sure this is the right place to focus.
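
One workaround I have been considering (not yet sure it is correct) is
to pass colClasses so that read.table never converts the chromosome
column to a factor, which is the step that calls unique(), and to read
the file in chunks. In the sketch below the file name is invented and I
am assuming only the three standard BED columns (chrom, start, end):

    ## "reads.bed" is a hypothetical file name; colClasses assumes the
    ## three standard BED columns, which skips factor conversion and
    ## therefore the unique() hashing step.
    classes <- c("character", "integer", "integer")

    con <- file("reads.bed", open = "r")
    repeat {
        chunk <- tryCatch(
            read.table(con, nrows = 1e6, sep = "\t",
                       colClasses = classes,
                       comment.char = "", quote = ""),
            error = function(e) NULL)   # read.table errors at end of file
        if (is.null(chunk)) break
        ## ... process or accumulate each 1e6-row chunk here ...
        if (nrow(chunk) < 1e6) break
    }
    close(con)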

Ultimately, I would like to produce a GenomeData object from either a
BAM file or a BED file.
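
On the BAM side, I have been looking at something along these lines
with Rsamtools, restricting the fields that are read to keep memory
down, though I am not sure it is the right route to a GenomeData
object (the file name is again just an example):

    library(Rsamtools)

    ## Pull only the fields needed, to limit memory use;
    ## "reads.bam" is a hypothetical file name.
    param <- ScanBamParam(what = c("rname", "strand", "pos", "qwidth"))
    bam <- scanBam("reads.bam", param = param)[[1]]

    ## Read start positions split by chromosome, as a step toward a
    ## per-chromosome structure like GenomeData.
    starts <- split(bam[["pos"]], bam[["rname"]])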

Has anyone ever worked with very large BAM files (about 30 GB)?

Thanks,

Rene Paradis
