Hi Angie. Thank you very much for the pointer to BAM files! I was able to successfully view my data, and it is exactly what I'm looking for.
Lee

Received from Angie Hinrichs on 3/14/12 3:27 PM:
> Hi Lee,
>
> If you have BAM from your Illumina sequencing run, and can put .bam and .bai
> (index) files on a web server, then we can display the BAM as a custom track
> by fetching only the parts of the file that are needed for display.
> Instructions are here:
>
> http://genome.ucsc.edu/goldenPath/help/bam.html
>
> Since you have paired-end data, I recommend adding the "pairEndsByName=."
> setting described there.
>
> If you have more questions, please send them to us at [email protected] .
>
> Angie
>
> ----- Original Message -----
>> From: "Lee Edsall" <[email protected]>
>> To: "Hiram Clawson" <[email protected]>
>> Cc: [email protected]
>> Sent: Wednesday, March 14, 2012 2:38:36 PM
>> Subject: Re: [Genome] Question about GFF file sizes
>>
>> Hiram,
>>
>> Thank you for the quick reply.
>>
>> The GFF file does have linked items. It's data from a paired-end
>> Illumina sequencing run, so ideally I'd like to see read1 associated with
>> read2 (which will show me the whole DNA fragment). If I convert the
>> data to a bed file, I lose that association.
>>
>> Alternatively, if it is a size issue, I can split the file into subfiles
>> and load them separately. What's the maximum size the files can be?
>>
>> Thank you,
>> Lee
>>
>> Received from Hiram Clawson on 3/14/12 2:19 PM:
>>> Good Afternoon Lee:
>>>
>>> Is your GFF file actually linked items, where multiple lines in the GFF file
>>> have a common identifier to indicate the separate lines are part of the same
>>> feature? Or is your GFF file merely a listing of separate items?
>>>
>>> If your GFF file is simply separate items, you can turn it into a bed file
>>> by selecting out a couple of the columns. For example:
>>>
>>> $ awk '{print $1,$4,$5,$2,0,$7}' yourData.gff > yourData.bed
>>>
>>> assuming column 2 is a meaningful name. If this is a large bed
>>> file, use the bedToBigBed converter and use the resulting big bed
>>> file at a URL for your custom track.
>>>
>>> --Hiram
>>>
>>> Lee Edsall wrote:
>>>> I have been trying to upload a GFF as a custom track and have been
>>>> encountering errors. The errors are:
>>>>
>>>> Example 1:
>>>> Can't start query:
>>>> select genome from dbDb where name = 'hg19'
>>>> mySQL error 2008: MySQL client ran out of memory
>>>>
>>>> Example 2:
>>>> Couldn't connect to database hgcentral on genome-centdb as hgcentuser.
>>>> MySQL client ran out of memory
>>>>
>>>> I get the errors regardless of whether I link to a website (I've
>>>> tried 2 different ones) or upload from my computer.
>>>>
>>>> Perhaps the file I am trying to upload is too large? It is 1.1 MB
>>>> gzipped.
>>>>
>>>> Any suggestions would be appreciated.
>>>>
>>>> Thank you,
>>>> Lee Edsall

_______________________________________________
Genome maillist - [email protected]
https://lists.soe.ucsc.edu/mailman/listinfo/genome
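
For reference, a minimal sketch of the BAM custom-track setup Angie describes above, assuming samtools is installed and that http://yourserver.example.com/ and the file names are placeholders rather than anything from this thread:

    # Sort the alignments by coordinate and build the .bai index the browser
    # needs for random access (samtools >= 1.3 command syntax).
    $ samtools sort -o sample.sorted.bam sample.bam
    $ samtools index sample.sorted.bam    # writes sample.sorted.bam.bai

    # Copy both files to a web server, then paste one track line into the
    # custom-track box; pairEndsByName=. pairs reads that share a read name,
    # as described on the bam.html page.
    track type=bam name="PE reads" pairEndsByName=. bigDataUrl=http://yourserver.example.com/sample.sorted.bam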
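And a sketch of the BED/bigBed route Hiram outlines, for data without paired reads. One detail worth adding: GFF coordinates are 1-based while BED starts are 0-based, so the start column needs a -1. The fetchChromSizes step and file names below are the usual bedToBigBed workflow, not something stated in this thread:

    # Pick chrom, start-1, end, name, score, strand from the GFF; bedToBigBed
    # expects tab-separated output sorted by chrom, then start.
    $ awk 'BEGIN{OFS="\t"} {print $1, $4-1, $5, $2, 0, $7}' yourData.gff \
        | sort -k1,1 -k2,2n > yourData.bed

    # Chromosome sizes for hg19, then convert; put the .bb file on a web
    # server and point a bigDataUrl= track line at it, as with BAM.
    $ fetchChromSizes hg19 > hg19.chrom.sizes
    $ bedToBigBed yourData.bed hg19.chrom.sizes yourData.bb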
