Precursor charge is already in mzXML. If you look at the mzXML files created
by ReAdW (msconvert should also have it), you should see lines such as:
<precursorMz precursorCharge="...">1390.82</precursorMz>
Those lines are part of the msms scan information (ms level = 2). If the
line isn't there, you might want to check the method on the LTQ to make sure...
David, just a quick reply to part of your message. Normally, I make a
directory for an experiment and process the Mascot, Sequest, and possibly
X!Tandem data from each mzXML file in that same directory. I do append the
name of the search to the TPP files, so I can determine which
search engine...
> NTFS supports symbolic links starting with Vista:
> http://en.wikipedia.org/wiki/NTFS_symbolic_link
>
> -Matt
>
> Greg Bowersock wrote:
> > if you are using windows, you won't be able to use the soft-links, so
> > that will require some changes also.
This is the perl script that I use to process multiple sequest files. I have
a different one that uses iProphet (I'm still working out some of its
parameters), but this one I've been using for years. It takes the following
command-line arguments: a text file listing the mzXML files to process (one
per line)...
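A minimal sketch of the driver loop such a script might use, assuming the list file simply names one mzXML file per line; `run_search` is a placeholder, not a real TPP tool:

```shell
# Build a sample list file: one mzXML filename per line.
list=$(mktemp)
printf '%s\n' exp1.mzXML exp2.mzXML > "$list"

# Read the list and print the per-file command that would be run;
# "run_search" stands in for the actual search/TPP steps.
cmds=""
while IFS= read -r f; do
  cmd="run_search $f"
  echo "$cmd"
  cmds="$cmds$cmd
"
done < "$list"
rm -f "$list"
```

The same loop works for any per-file pipeline step; only the command inside the loop changes.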
Not really. Scaffold has nothing to do with the TPP, and there isn't a need
to convert data for Scaffold.
Greg
On Sat, Oct 3, 2009 at 9:01 PM, dick wrote:
>
> Hi,
>
> I hope to be able to convert data from my LCQ deca that I can look at
> using Scaffold. I got TPP to run on my laptop with the
It looks like you aren't giving Out2XML a directory to work on. Try
something like:
tpp/bin/Out2XML -pI
-P/IMSB/results/workflow/324/sorcerer/output/10300/original/sequest.params
/IMSB/results/workflow/324/sorcerer/output/10300/original/O08-10093_c
You shouldn't need the -E option, since Out2XML will...
through the sequest pipeline... and I
> haven't done that enough to really assess how much extra we get vs
> Mascot/Tandem/OMSSA.
>
> DT
>
> Greg Bowersock wrote:
> > I've pretty much done the same thing also, but I wasn't planning on
> > releasing the
I've pretty much done the same thing also, but I wasn't planning on
releasing the code, as I've been lazy and hardcoded most of the information.
That and I have built my scripts around an Oracle database, which isn't
freely available, so it would take considerable coding to make it more
generic. I
Sequest is not very fast. The only way to really speed up Sequest is to give
it more processors, provided you are running it in a way that lets you use
the extra processors. There are many factors that affect processing speed,
though, with the two largest being the type of digestion...
InterProphetParser exp1/sequest/interact.pep.xml
> exp1/xtandem/interact.pep.xml exp1/mascot/interact.pep.xml
> exp2/sequest/interact.pep.xml exp2/xtandem/interact.pep.xml
> exp2/mascot/interact.pep.xml interact.iproph.pep.xml
> >
> > To run ProteinProphet on the iProphet results you can use...
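Based on the pattern quoted above, a minimal invocation might be sketched as follows; the file names are placeholders, and the commands are only printed here rather than run:

```shell
# InterProphetParser takes the per-search pepXML files followed by the
# combined output name; ProteinProphet is then run on that output with
# the IPROPHET flag so it uses the iProphet probabilities.
iproph="InterProphetParser exp1/sequest/interact.pep.xml exp1/xtandem/interact.pep.xml interact.iproph.pep.xml"
prophet="ProteinProphet interact.iproph.pep.xml interact.iproph.prot.xml IPROPHET"
echo "$iproph"
echo "$prophet"
```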
Just bumping this one up, since it looks like it got missed.
On Jun 10, 12:08 pm, "bowers...@gmail.com"
wrote:
> I was wondering if someone could post some more detail on iProphet.
> One thing that I would like to know is what all command line arguments
> it takes, maybe some sample command line
I haven't tried it in a long time, but if you call ProteinProphet manually,
you can add the flags EXCELPEPS and EXCEL0. That Excel file used to be a
little different from the one generated by the web pages, but it might do
what you need.
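For reference, a manual invocation with those flags might look like this; the input and output names are placeholders, and the sketch only prints the command rather than running it:

```shell
# Placeholder file names; EXCELPEPS/EXCEL0 request the Excel export.
pepxml=interact.pep.xml
protxml=interact.prot.xml
cmd="ProteinProphet $pepxml $protxml EXCELPEPS EXCEL0"
echo "$cmd"   # run this on a machine with the TPP installed
```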
Greg
On Thu, Jun 18, 2009 at 8:42 PM, Luis Mendoza
wrote:
> He
Ok, that's what I thought the problem may be. I forgot to check that when I
switched servers a couple of months ago. My old server has 4.2, but the new
one only has 4.0.
Thanks. I didn't even think about that since I remembered that it used to
work.
Greg
On Mon, May 18, 2009 at 1:07 PM, Jimmy E
You'll need to modify the scripts a little, since my TPP directory
(/home/TPP/tpp) is hardcoded in them, along with a couple of other minor
things. For the protein script, depending on what you want displayed, you'll
need to modify things there as well. Both of the scripts below
will print out...
Are you looking for the output from PeptideProphet or ProteinProphet? They
can both be produced via the command line, but in different ways.
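As a rough sketch of the two routes, assuming a standard TPP install: xinteract runs PeptideProphet on a pepXML file, and ProteinProphet is then run on the result. File names are placeholders, and the commands are only printed here:

```shell
# PeptideProphet via xinteract (-N names the output file).
pp="xinteract -Ninteract.pep.xml raw_search.pep.xml"
# ProteinProphet run directly on the PeptideProphet output.
prophet="ProteinProphet interact.pep.xml interact.prot.xml"
echo "$pp"
echo "$prophet"
```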
Greg
On Wed, May 13, 2009 at 1:43 PM, Andreas Quandt wrote:
>
> dear list,
>
> does anyone know how to convert a pepXML file to a comma-delimited text
> file...
m::error_code&)'
> collect2: ld returned 1 exit status
> make[3]: *** [/usr/local/src/trans_proteomic_pipeline/src/../build/
> linux/tandem.exe] Error 1
> make[3]: Leaving directory `/usr/local/src/trans_proteomic_pipeline/
> extern/xtandem/src'
> make[2]: *** [defau
The error is actually related to Boost. What you need to do is add the lib
directory containing the Boost shared object files to ld.so.conf (or place a
configuration file in /etc/ld.so.conf.d/), then rerun ldconfig, and it
should work fine. I had the same issue, since I recently did a fresh
install...
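A sketch of that fix, assuming the Boost shared objects landed in /usr/local/lib; a scratch directory stands in for /etc/ld.so.conf.d so the example is runnable without root:

```shell
boost_lib_dir=/usr/local/lib   # assumption: where the libboost_*.so files live
confdir=$(mktemp -d)           # stands in for /etc/ld.so.conf.d
echo "$boost_lib_dir" > "$confdir/boost.conf"
content=$(cat "$confdir/boost.conf")
echo "$content"
rm -rf "$confdir"
# As root, the real steps would be:
#   echo /usr/local/lib > /etc/ld.so.conf.d/boost.conf
#   ldconfig
```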
You can do it with lwp-request; that's the easiest way I have found. Set all
of the options you want for your file and then export the file. Copy the
link that generated the file, put it in a script, and then just put in a
variable for your filename...
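A sketch of such a script, with a made-up export URL (the real one is whatever link your export produced); the filename is the variable part, and the command is only printed here:

```shell
file="interact.pep.xml"   # the parameterized filename
# Placeholder URL; substitute the link captured from your own export.
url="http://tpp.example.org/tpp-bin/export.cgi?file=${file}&format=xls"
cmd="lwp-request '$url'"
echo "$cmd"   # lwp-request fetches the URL and writes the export to stdout
```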
Thanks David, that is exactly the information I needed to get started. I
hadn't dug into the protXML file, since I didn't think that information
would be in there. Now I can crank out the script and be good to go.
Thanks,
Greg
On Mon, Mar 2, 2009 at 4:13 PM, David Shteynberg <
dshteynb...@systems
some differences specific
> to your system. Does this make sense for what you're seeing?
>
> -Natalie
>
>
> On Fri, Feb 20, 2009 at 12:12 PM, Greg Bowersock
> wrote:
> > I found a few more issues, since I didn't do the make install this
> morning.
CGI/show_tmp_pngfile.pl
Greg
On Fri, Feb 20, 2009 at 9:24 AM, Greg Bowersock wrote:
> Thanks, that fixed the fPIC issue, but then it hit another issue:
> Can't load
> '/home/TPP/tpp/trans_proteomic_pipeline/build/linux/tpplib_perl.so' for
> module tpplib_perl: libboost_fi
Actually, your Sequest license is for 1 PC only. If you are using the
web-based Sequest, you can modify the configuration to use multiple
processors to speed up your processing. Sequest is quite a bit slower than
X!Tandem and Mascot though, so don't expect it to be fast. If you are doing
your search...
Thanks, that fixed the fPIC issue, but then it hit another issue:
Can't load
'/home/TPP/tpp/trans_proteomic_pipeline/build/linux/tpplib_perl.so' for
module tpplib_perl: libboost_filesystem-gcc41-mt-1_37.so.1.37.0: cannot open
shared object file: No such file or directory at
/usr/lib64/perl5/5.8.8/x
We use Genedata, but we don't use it instead of the TPP, since they aren't
the same thing. I have a few scripts that I use to merge the data from the
two programs, which helps with some analyses. However, programs like
Expressionist, DeCyder MS, and other spectral processing software will
always...
If you download the Insilicos Viewer from:
http://www.insilicos.com/viewer_download.html you can look through your
mzXML files.
On Mon, Jan 12, 2009 at 9:10 PM, Jason Held wrote:
>
> MSight is only seeing level 2. Is there a way I can look at the mzXML
> file to determine if it contains both levels...
You can find out whether that is the case by running
C:\Inetpub\cgi-bin\fastaidx.pl to create the .idx files, assuming you have
the web interface to Sequest installed (you can also generate the .idx files
from the web interface itself). That script calls some functions in
C:\Inetpub\etc\fastaidx_lib.pl, which...
What you can do is convert the raw file to mzXML, and then use MzXML2Search
to generate the dta files. It defaults to only exporting MSMS scans, which
is what you are trying to do. I've done this to search only MS3 data with
some of the methods that we use.
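A sketch of that conversion; the filename is a placeholder, -dta selects dta output, and the command is only printed here (check MzXML2Search's usage message for the scan-level options on your version):

```shell
# MzXML2Search with -dta writes one .dta file per MS/MS scan in the input.
cmd="MzXML2Search -dta sample.mzXML"
echo "$cmd"
```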
Greg
On Thu, Sep 25, 2008 at 5:46 AM, S
committed my code to SVN. Please try it out and let me know of any other
> problems you encounter.
>
> Thanks,
> -David
>
>
> On Wed, Sep 24, 2008 at 2:18 PM, Greg Bowersock <[EMAIL PROTECTED]>wrote:
>
>> I already posted the pep.xml file after InteractParser.
post the problem datasets to our FTP
> site and send me the filename?
>
> -David
>
>
> On Wed, Sep 24, 2008 at 1:00 PM, Greg Bowersock <[EMAIL PROTECTED]>wrote:
>
>> Sorry it was PeptideProphetParser that had the loop. InteractParser was
>> the last step that worked...
Sorry it was PeptideProphetParser that had the loop. InteractParser was the
last step that worked before the failure, so it got stuck in my head. Sorry
about the confusion.
Greg
On Wed, Sep 24, 2008 at 2:29 PM, David Shteynberg <
[EMAIL PROTECTED]> wrote:
> Hi Greg,
>
> InteractParser is not an it
I haven't seen that behavior either. I'd suggest removing all of the
binaries that you created, wiping out your install directory, and trying to
compile again. It almost seems like the path was wrong on an initial attempt
and the files aren't being overwritten (or recreated) by the new
compile. You...
>
> A quick fix, of course, is to simply edit the offending fasta file to
> remove the
> tag-like characters from the annotation. Just today I had to do that due
> to an
> unrelated bug in a different tool.
>
> Hope this helps,
> --Luis
>
>
>
> On Fri, Sep 1
3:46 PM, Jimmy Eng <[EMAIL PROTECTED]> wrote:
>
> Interesting ... looks like the tag-like text in the protein descriptions
> is being replicated as individual elements in the pep.xml. Hopefully a
> developer who knows Mascot2XML can implement a quick fix.
>
> Greg Bowers
I usually check out the svn version, so I'm not sure offhand whether the
production version had the bug, but I can check that in a few minutes once I
grab that version. The error looks like it is caused by Mascot2XML. I almost
missed it the first time, but there are beta and psi fields in the protein
descriptions...