Hi Tim,

There are a few choices for files approaching or exceeding 100MB:

1) Break the large files into smaller chunks (per chromosome, or smaller 
if needed). Good for annotation data that is wide (for example, an 
attribute for every base) but not deep. If you are using .wig files, 
this is the choice to try first.
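The per-chromosome split can be scripted. Below is a minimal sketch for wiggle-format tracks: it assumes the chromosome is named in the fixedStep/variableStep declaration lines (chrom=chrN) and copies any "track" header lines into every output file. The output file naming is illustrative, not a UCSC convention.

```python
import re

def split_wig_by_chrom(path):
    """Split a wiggle custom track into one file per chromosome."""
    track_header = []   # "track type=wiggle_0 ..." lines, repeated in each output
    outputs = {}        # chrom name -> open file handle
    current = None
    with open(path) as wig:
        for line in wig:
            if line.startswith("track"):
                track_header.append(line)
                continue
            # fixedStep/variableStep declarations carry chrom=chrN
            m = re.search(r"chrom=(\S+)", line)
            if m:
                chrom = m.group(1)
                if chrom not in outputs:
                    outputs[chrom] = open(f"{path}.{chrom}.wig", "w")
                    outputs[chrom].writelines(track_header)
                current = outputs[chrom]
            if current is not None:
                current.write(line)
    for fh in outputs.values():
        fh.close()
    return sorted(outputs)
```

Each resulting per-chromosome file can then be uploaded as its own custom track, keeping every upload well under the size limit.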

2) Compress the individual reads into consensus sequences, then load the 
results. Good for very deeply sequenced regions, since these will be 
unviewable in the browser past a certain depth of coverage anyway. 
Compression does not have to reduce the data to a depth of 1: depending 
on the experiment, the goal could be to remove redundancy to make 
variation more visible, or to create summaries that weed out variation.
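The simplest form of this compression is a per-position majority vote over the aligned reads. The sketch below assumes reads have already been placed as (start_offset, sequence) pairs, which is an illustrative input format; a real pipeline would typically start from pileup output rather than build the columns itself.

```python
from collections import Counter

def consensus(reads, length):
    """Collapse aligned reads into one consensus string by majority vote.

    reads  -- iterable of (start_offset, sequence) pairs (0-based offsets)
    length -- length of the region being summarized
    """
    columns = [Counter() for _ in range(length)]
    for start, seq in reads:
        for i, base in enumerate(seq):
            pos = start + i
            if 0 <= pos < length:
                columns[pos][base] += 1
    # majority base per column; 'N' where no read covers the position
    return "".join(col.most_common(1)[0][0] if col else "N"
                   for col in columns)
```

To preserve variation instead of hiding it, the same column counts could be kept wherever the majority base falls below some frequency threshold, emitting an ambiguity code or a second track at those positions.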

3) Consider creating a minimal browser installation locally. You will 
still need to do 1 or 2, or both, for large datasets (we do here at 
UCSC as well), but the data will not expire the way that custom tracks 
do on the main UCSC Browser. If your data covers the entire genome, 
reloading it per-chromosome every few days could become a bother. The 
Sessions feature can also help.

Links:
http://genomewiki.ucsc.edu/index.php/Category:Mirror_Site_FAQ
http://genome.ucsc.edu/goldenPath/help/hgSessionHelp.html

We hope one of these works out for you,
Jennifer Jackson
UCSC Genome Bioinformatics Group

Tim Reddy wrote:
> Hello,
>
> I've been having a recurring problem which in the past had been manageable,
> but is getting worse with larger files arising from sequencing experiments.
> When uploading large files to the genome browser as custom tracks (i.e. wig
> files), I end up staring at a blank white screen, and the track appears to
> not be loaded at all. I'm assuming the browser times out on long
> uploads (~100MB, compressed). Do you know if there is an easy
> workaround for this?
>
> Thanks,
> Tim
>
_______________________________________________
Genome maillist  -  [email protected]
https://lists.soe.ucsc.edu/mailman/listinfo/genome