Re: [spctools-discuss] why I got this error information?

2013-05-03 Thread Jimmy Eng
Eileen,

Any chance you have a '+' in your file name?  If yes, that is your problem.
 And based on the error message you pasted, this looks like the case.  If
you re-named your pep.xml and mzML files, you will also have to change all
references *inside* the pep.xml file to the updated name, as I mentioned
in my previous reply.

To avoid all these issues with mis-named files and not correctly updating
file names and content inside the pep.xml files, consider starting from the
beginning with a re-named mzML file: do the search to generate a pep.xml
file and then run the TPP tools.
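A minimal sketch of that rename-and-fix workflow (the run name and the pep.xml content below are hypothetical stand-ins, not the actual files from this thread):

```shell
old='MAY1335_Yeast_2+3-D'
new='MAY1335_Yeast_2_3-D'
# stand-in pep.xml carrying a base_name attribute that references the old run name
printf '<search_summary base_name="c:/data/%s"/>\n' "$old" > interact.pep.xml
# after renaming the mzML file the same way, rewrite every reference inside
# the pep.xml; the '|' delimiter avoids clashing with '/' in the path
sed -i "s|$old|$new|g" interact.pep.xml
cat interact.pep.xml
# prints: <search_summary base_name="c:/data/MAY1335_Yeast_2_3-D"/>
```

The same substitution has to hit every spectrum reference, not just base_name, which is why a global replace over the whole file is the safer move.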

- Jimmy


On Fri, May 3, 2013 at 3:49 PM, Eileen Yue y...@ohsu.edu wrote:

 Hi Jimmy,
 thank you for your advice. I followed your advice and tested the files one
 by one, and figured out that one raw file name was different from the
 sequest search file name (my colleague had renamed the sequest search
 result file). After I renamed the raw file to match the sequest results
 name, the peptide analysis went through without any problem.

 The new problem is: when I click a peptide to check the XPRESS ratio
 information or MS/MS information, the error shown below appears.
 Could you give me some advice about this error?

 Thank you so much and have a nice weekend
 Eileen
 
 For the MS/MS information error:

 command c:\Inetpub\wwwroot\..\tpp-bin\tar.exe -tzf
 c:/Inetpub/wwwroot/ISB/data/MAY1335_Phosphopeptides/MAY1335_Yeast_1+4-D_100ug_PierceIMAC_75um_140_20130423_VE2.tgz
 *MAY1335_Yeast_1+4-D_100ug_PierceIMAC_75um_140_20130423_VE2.22310.22310.3.dta
 > /dev/null failed: Operation not permitted. Error - cannot read spectrum;
 tried direct .dta, from mzXML/mzData, and from .tgz
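The viewer produces this error after shelling out to tar to list one .dta member inside the .tgz archive. The same check can be reproduced by hand with a throwaway archive (names below are made up for the demonstration):

```shell
# build a tiny archive containing one fake .dta spectrum file
echo '100.0 10' > demo.0001.0001.2.dta
tar -czf demo.tgz demo.0001.0001.2.dta
# the viewer runs essentially this to confirm the spectrum member exists;
# a non-zero exit status is what surfaces as "cannot read spectrum"
tar -tzf demo.tgz 'demo.0001.0001.2.dta' > /dev/null && echo member-found
```

If the member name inside the archive no longer matches the renamed run (or the shell mangles a '+' in it), the listing fails exactly as in the error above.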


 For the XPRESS information error:

 Light scans:   mass:  tol:
 Heavy scans: mass:   Z:

  Raw file:
 .out file:   Norm:

 Error - cannot open file
 c:/Inetpub/wwwroot/ISB/data/MAY1335_Phosphopeptides/MAY1335_Yeast_1.




 


 From: spctools-discuss@googlegroups.com [spctools-discuss@googlegroups.com]
 On Behalf Of Jimmy Eng [jke...@gmail.com]
 Sent: Thursday, May 02, 2013 9:16 AM
 To: spctools-discuss@googlegroups.com
 Subject: Re: [spctools-discuss] why I got this error information?

 Eileen,

 The '+' in the file names is not a problem at this stage of processing,
 although it is a reserved character in a URL, so you will have issues
 visualizing results in the web interface.  The '-' character is OK.  For
 future reference, it's not enough to simply change the pep.xml and mzML
 file names.  You will also have to make the corresponding changes to the
 content within the pep.xml file, specifically the base_name attributes.
  I always suggest that researchers stick to numbers, letters, dash '-', and
 underscore '_' in naming files.

 For your specific error, does the following file exist?

  
 c:/Inetpub/wwwroot/ISB/data/MAY1335_Phosphopeptides/MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2.mzML
 If yes, I couldn't tell you why XPRESS is reporting that error.  I would
 suggest you try this analysis with just a single input file first to
 confirm you can run the tools to completion.  If that works, try processing
 this one MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2.mzML file
 by itself.

 - Jimmy


 On Wed, May 1, 2013 at 1:27 PM, Eileen Yue y...@ohsu.edu wrote:

 Hello everyone.
 I got an error message when I tried to run peptide analysis on SILAC
 data. I do not know whether this is possibly caused by a file name which
 includes '+' and '-'. I did manually change the pep.xml and mzML file
 names (deleting the '+' and '-') but I still get the error.
 Could anyone give me some advice about this issue?

 Thanks
 Eileen



 XPRESS error - cannot open file from basename
 c:/Inetpub/wwwroot/ISB/data/MAY1335_Phosphopeptides/MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2,
 will try to derive from scan names

 command C:/Inetpub/tpp-bin/XPressPeptideParser interact.pep.xml -m1
 -nK,8 -nR,10 -c6 -p1 failed: Unknown error

 command C:/Inetpub/tpp-bin/XPressPeptideParser interact.pep.xml -m1
 -nK,8 -nR,10 -c6 -p1 exited with non-zero exit code: -1073741819
 QUIT - the job is incomplete

 command c:\Inetpub\tpp-bin\xinteract -Ninteract.pep.xml -p0.75 -l6 -Op
 -dREV -X-m1-nK,8-nR,10-c6-p1 -A-lKR-r0.5
 MAY1335_Yeast_1+4-D_100ug_IMAC_75um_140_20130410_VE2.pep.xml
 MAY1335_Yeast_1+4-D_100ug_POLY-Ti_75um_140_20130410_VE2.pep.xml
 MAY1335_Yeast_1+4-D_100ug_PierceIMAC_75um_140_20130423_VE2.pep.xml
 MAY1335_Yeast_2+3-D_100ug_IMAC_75um_140_20130410_VE2.pep.xml
 MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2.pep.xml
 MAY1335_Yeast_2+3-D_100ug_POLY-Ti_75um_140_20130410_VE2.pep.xml failed:
 Unknown error

 Command FAILED



Re: [spctools-discuss] why I got this error information?

2013-05-02 Thread Jimmy Eng
Eileen,

The '+' in the file names is not a problem at this stage of processing,
although it is a reserved character in a URL, so you will have issues
visualizing results in the web interface.  The '-' character is OK.  For
future reference, it's not enough to simply change the pep.xml and mzML
file names.  You will also have to make the corresponding changes to the
content within the pep.xml file, specifically the base_name attributes.
 I always suggest that researchers stick to numbers, letters, dash '-', and
underscore '_' in naming files.
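To see why '+' specifically is the troublemaker: URL decoders turn an unescaped '+' into a space, so a file name containing one has to be percent-encoded as %2B to survive a web link. A quick illustration (the file name is hypothetical):

```shell
# show both directions of the problem: how the name must be encoded for a
# URL, and what a naive link that leaves '+' bare decodes it into
python3 -c "from urllib.parse import quote, unquote_plus
name = 'MAY1335_Yeast_2+3-D.mzML'
print(quote(name))          # '+' becomes %2B; '-' and '_' pass through unchanged
print(unquote_plus(name))   # an unescaped '+' decodes to a space, breaking the path"
```

This is also consistent with '-' and '_' being safe: they are unreserved characters and never need escaping.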

For your specific error, does the following file exist?
 c:/Inetpub/wwwroot/ISB/data/MAY1335_Phosphopeptides/
MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2.mzML
If yes, I couldn't tell you why XPRESS is reporting that error.  I would
suggest you try this analysis with just a single input file first to
confirm you can run the tools to completion.  If that works, try processing
this one MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2.mzML
file by itself.

- Jimmy


On Wed, May 1, 2013 at 1:27 PM, Eileen Yue y...@ohsu.edu wrote:


 Hello everyone.
 I got an error message when I tried to run peptide analysis on SILAC
 data. I do not know whether this is possibly caused by a file name which
 includes '+' and '-'. I did manually change the pep.xml and mzML file
 names (deleting the '+' and '-') but I still get the error.
 Could anyone give me some advice about this issue?

 Thanks
 Eileen



 XPRESS error - cannot open file from basename
 c:/Inetpub/wwwroot/ISB/data/MAY1335_Phosphopeptides/MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2,
 will try to derive from scan names

 command C:/Inetpub/tpp-bin/XPressPeptideParser interact.pep.xml -m1
 -nK,8 -nR,10 -c6 -p1 failed: Unknown error

 command C:/Inetpub/tpp-bin/XPressPeptideParser interact.pep.xml -m1
 -nK,8 -nR,10 -c6 -p1 exited with non-zero exit code: -1073741819
 QUIT - the job is incomplete

 command c:\Inetpub\tpp-bin\xinteract -Ninteract.pep.xml -p0.75 -l6 -Op
 -dREV -X-m1-nK,8-nR,10-c6-p1 -A-lKR-r0.5
 MAY1335_Yeast_1+4-D_100ug_IMAC_75um_140_20130410_VE2.pep.xml
 MAY1335_Yeast_1+4-D_100ug_POLY-Ti_75um_140_20130410_VE2.pep.xml
 MAY1335_Yeast_1+4-D_100ug_PierceIMAC_75um_140_20130423_VE2.pep.xml
 MAY1335_Yeast_2+3-D_100ug_IMAC_75um_140_20130410_VE2.pep.xml
 MAY1335_Yeast_2+3-D_100ug_PIMAC_75um_140_20130423_VE2.pep.xml
 MAY1335_Yeast_2+3-D_100ug_POLY-Ti_75um_140_20130410_VE2.pep.xml failed:
 Unknown error

 Command FAILED


 --
 You received this message because you are subscribed to the Google Groups
 spctools-discuss group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to spctools-discuss+unsubscr...@googlegroups.com.
 To post to this group, send email to spctools-discuss@googlegroups.com.
 Visit this group at http://groups.google.com/group/spctools-discuss?hl=en.
 For more options, visit https://groups.google.com/groups/opt_out.








Re: [spctools-discuss] Re: TPP 4.6.2 installation problem Ubuntu 12.04 LTS virtual box

2013-04-22 Thread Jimmy Eng
Liam,

I have to ask this basic question ... can you confirm you have
/etc/apache2/sites-enabled/tpp-4.6.2?  Either as a direct copy of
/etc/apache2/sites-available/tpp-4.6.2 or preferably as a symlink to
/etc/apache2/sites-available/tpp-4.6.2?

What does your apache error log indicate when you try to open
http://localhost/tpp/cgi-bin/tpp_gui.pl?
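For reference, a correctly enabled Debian/Ubuntu site is a symlink from sites-enabled into sites-available; on a real system "sudo a2ensite tpp-4.6.2" creates the link and "service apache2 reload" picks it up. A stub tree showing the expected shape (directory names mimic the Ubuntu layout):

```shell
# simulate /etc/apache2 with a local stub so the shape is easy to inspect
mkdir -p demo-apache/sites-available demo-apache/sites-enabled
touch demo-apache/sites-available/tpp-4.6.2
# this relative symlink is exactly what a2ensite would create
ln -s ../sites-available/tpp-4.6.2 demo-apache/sites-enabled/tpp-4.6.2
ls -l demo-apache/sites-enabled/
```

If "ls -l /etc/apache2/sites-enabled/" shows no tpp-4.6.2 entry, apache never loads the TPP config at all, which would explain the 404 on tpp_gui.pl.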



On Mon, Apr 22, 2013 at 2:57 AM, Liam Bell bell.l...@gmail.com wrote:

 Found the solution at http://forum.linode.com/viewtopic.php?t=7873; the
 second post by Piki sorted out the error when trying to run the
 a2ensite command. However, the site still does not load.

 As far as I can tell my /etc/apache2/apache2.conf file is set up correctly,
 my /etc/apache2/sites-available/tpp-4.6.2 file is set up, and my /var/www/
 directory and my /usr/local/tpp/ directory are set up... I'm just not
 following what has gone wrong...?








Re: [spctools-discuss] Re: TPP 4.6.2 installation problem Ubuntu 12.04 LTS virtual box

2013-04-21 Thread Jimmy Eng
Liam,

At this point, perl is not the issue for you; the problem is definitely
with your apache config.  The WEBSERVER_ROOT environment variable should
point to your webserver's document root, i.e. the directory that contains
the index.html file that says "It works!".  Is it possibly /var/www/html?
 I can't imagine it really being /usr/local/ as you have in your apache2
config file for TPP below.  One step at a time, so fix that first, restart
apache, and try to access http://localhost/tpp/cgi-bin/tpp_gui.pl.  If
the page doesn't open, what is the error message in your apache error log
file?

# directory for tpp's executable files
ScriptAlias /tpp/cgi-bin /usr/local/tpp/cgi-bin
<Directory /usr/local/tpp/cgi-bin>
#AllowOverride AuthConfig Limit
AllowOverride All
Options Indexes +FollowSymLinks MultiViews ExecCGI +Includes
AddHandler default-handler .jpg .png .css .ico .gif
AddHandler cgi-script .cgi .pl
Order allow,deny
Allow from all
SetEnv WEBSERVER_ROOT /usr/local/
*Is it possible the above line should read /usr/local/tpp/ ???*
#SetEnv WEBSERVER_ROOT /var/www
</Directory>

- Jimmy


On Sun, Apr 21, 2013 at 10:44 AM, Liam Bell bell.l...@gmail.com wrote:

 Nope - no dice...

 Changed the PERL_LIB_CORE= /usr/lib/perl/15.4.2/CORE in the
 makefile.conf.incl file before re-running the compiling/building.

 Ran through the rest of the steps from
 http://tools.proteomecenter.org/wiki/index.php?title=TPP_4.5.2:_Installing_on_Ubuntu_10.04.3
 as I have done successfully before.

 The server is still working (localhost shows "It works!") but I can't load
 http://localhost/tpp/cgi-bin/tpp_gui.pl - still getting "The requested
 URL /tpp/cgi-bin/tpp_gui.pl was not found on this server."

 Any other ideas?






Re: [spctools-discuss] Re: TPP 4.6.2 installation problem Ubuntu 12.04 LTS virtual box

2013-04-21 Thread Jimmy Eng
Liam,

Your ServerRoot should probably still be /etc/apache2.  ServerRoot is the
root of the directory tree where your apache config files are kept.  What I
was referring to was your DocumentRoot which is what your WEBSERVER_ROOT
environment variable should be set to.  This DocumentRoot is the root
directory where you serve your documents (i.e. index.html).
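On stock Ubuntu of that era, DocumentRoot is declared in a per-site file (typically /etc/apache2/sites-available/default, pointing at /var/www), not in apache2.conf itself. A stub tree showing where a recursive grep would find it (paths mimic the Ubuntu layout; on a real system you would grep /etc/apache2 instead):

```shell
# simulate the split apache config: DocumentRoot lives in the vhost file
mkdir -p etc-demo/sites-available
printf '<VirtualHost *:80>\n    DocumentRoot /var/www\n</VirtualHost>\n' \
    > etc-demo/sites-available/default
# a recursive grep is the quickest way to locate the active DocumentRoot
grep -Rn 'DocumentRoot' etc-demo/
```

Whatever path that grep turns up is the value WEBSERVER_ROOT should be set to.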


On Sun, Apr 21, 2013 at 12:04 PM, Liam Bell bell.l...@gmail.com wrote:

 Hi Jimmy,

 I looked up the WEBSERVER_ROOT environment variable in my
 /etc/apache2/apache2.conf file.

 I think the relevant section is this:
 # Global configuration
 #

 #
 # ServerRoot: The top of the directory tree under which the server's
 # configuration, error, and log files are kept.
 #
 # NOTE!  If you intend to place this on an NFS (or otherwise network)
 # mounted filesystem then please read the LockFile documentation (available
 # at URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile);
 # you will save yourself a lot of trouble.
 #
 # Do NOT add a slash at the end of the directory path.
 #
 #ServerRoot /etc/apache2

 To which I edited so that,
 ServerRoot /var/www

 The index.html file you mentioned is located at /var/www/index.html

 I then restarted apache and got the following:
 liam@liam-Y500:~$ sudo /etc/init.d/apache2 restart
 [sudo] password for liam:
 apache2: Syntax error on line 240 of /etc/apache2/apache2.conf: Include
 directory '/var/www/mods-enabled' not found
 Action 'configtest' failed.
 The Apache error log may have more information.
...fail!

 What is going wrong? If I go to localhost in a web browser I still get It
 works!.


 On 21 April 2013 20:48, Jimmy Eng jke...@gmail.com wrote:

 Liam,

 At this point, perl is not the issue for you; the problem is definitely
 with your apache config.  The WEBSERVER_ROOT environment variable should
 point to your webserver's document root, i.e. the directory that contains
 the index.html file that says "It works!".  Is it possibly /var/www/html?
  I can't imagine it really being /usr/local/ as you have in your apache2
 config file for TPP below.  One step at a time, so fix that first, restart
 apache, and try to access http://localhost/tpp/cgi-bin/tpp_gui.pl.
  If the page doesn't open, what is the error message in your apache error
 log file?

 # directory for tpp's executable files
 ScriptAlias /tpp/cgi-bin /usr/local/tpp/cgi-bin
 <Directory /usr/local/tpp/cgi-bin>
 #AllowOverride AuthConfig Limit
 AllowOverride All
 Options Indexes +FollowSymLinks MultiViews ExecCGI +Includes
 AddHandler default-handler .jpg .png .css .ico .gif
 AddHandler cgi-script .cgi .pl
 Order allow,deny
 Allow from all
 SetEnv WEBSERVER_ROOT /usr/local/
 *Is it possible the above line should read /usr/local/tpp/ ???*
 #SetEnv WEBSERVER_ROOT /var/www
 </Directory>

 - Jimmy



 On Sun, Apr 21, 2013 at 10:44 AM, Liam Bell bell.l...@gmail.com wrote:

 Nope - no dice...

 Changed the PERL_LIB_CORE= /usr/lib/perl/15.4.2/CORE in the
 makefile.conf.incl file before re-running the compiling/building.

 Ran through the rest of the steps from
 http://tools.proteomecenter.org/wiki/index.php?title=TPP_4.5.2:_Installing_on_Ubuntu_10.04.3
 as I have done successfully before.

 The server is still working (localhost shows "It works!") but I can't load
 http://localhost/tpp/cgi-bin/tpp_gui.pl - still getting "The requested
 URL /tpp/cgi-bin/tpp_gui.pl was not found on this server."

 Any other ideas?










Re: [spctools-discuss] Re: TPP 4.6.2 installation problem Ubuntu 12.04 LTS virtual box

2013-04-21 Thread Jimmy Eng
Liam,

Here's a snippet of what you posted 3 days ago.  Note the entry SetEnv
WEBSERVER_ROOT.  Set this to the same thing you have DocumentRoot set to
in apache2.conf.


 On Thu, Apr 18, 2013 at 2:08 PM, Liam Bell bell.l...@gmail.com wrote:

And triple checked the paths that were set in the
/etc/apache2/sites-available/tpp-4.6.2:
<VirtualHost *:80>

# directory to store data for web browser viewing
Alias /tpp/data /usr/local/tpp/data
<Directory /usr/local/tpp/data>
AllowOverride None
Options Indexes +FollowSymLinks Includes
Order allow,deny
Allow from all
</Directory>

# directory for tpp's html resources (css, js, images, etc)
Alias /tpp/html /usr/local/tpp/html
<Directory /usr/local/tpp/html>
AllowOverride None
Options Includes Indexes FollowSymLinks MultiViews
Order allow,deny
Allow from all
</Directory>

# directory for tpp's schema resources
<Directory /usr/local/tpp/schema>
AllowOverride None
Options Includes Indexes FollowSymLinks MultiViews
Order allow,deny
Allow from all
</Directory>

# directory for tpp's executable files
ScriptAlias /tpp/cgi-bin /usr/local/tpp/cgi-bin
<Directory /usr/local/tpp/cgi-bin>
#AllowOverride AuthConfig Limit
AllowOverride All
Options Indexes +FollowSymLinks MultiViews ExecCGI +Includes
AddHandler default-handler .jpg .png .css .ico .gif
AddHandler cgi-script .cgi .pl
Order allow,deny
Allow from all
SetEnv WEBSERVER_ROOT /usr/local/
*Is it possible the above line should read /usr/local/tpp/ ???*
#SetEnv WEBSERVER_ROOT /var/www
</Directory>

# Enables Lorikeet spectrum display program to work for linux
Alias /ISB /usr/local/tpp
</VirtualHost>



On Sun, Apr 21, 2013 at 12:35 PM, Liam Bell bell.l...@gmail.com wrote:

 Where can I find what my WEBSERVER_ROOT variable has been set to? Which
 configuration file is it in?


 On 21 April 2013 21:25, Jimmy Eng jke...@gmail.com wrote:

 Liam,

 Your ServerRoot should probably still be /etc/apache2.  ServerRoot is
 the root of the directory tree where your apache config files are kept.
  What I was referring to was your DocumentRoot which is what your
 WEBSERVER_ROOT environment variable should be set to.  This DocumentRoot is
 the root directory where you serve your documents (i.e. index.html).


 On Sun, Apr 21, 2013 at 12:04 PM, Liam Bell bell.l...@gmail.com wrote:

 Hi Jimmy,

 I looked up the WEBSERVER_ROOT environment variable in my
 /etc/apache2/apache2.conf file.

 I think the relevant section is this:
 # Global configuration
 #

 #
 # ServerRoot: The top of the directory tree under which the server's
 # configuration, error, and log files are kept.
 #
 # NOTE!  If you intend to place this on an NFS (or otherwise network)
 # mounted filesystem then please read the LockFile documentation
 (available
 # at URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile
 );
 # you will save yourself a lot of trouble.
 #
 # Do NOT add a slash at the end of the directory path.
 #
 #ServerRoot /etc/apache2

 To which I edited so that,
 ServerRoot /var/www

 The index.html file you mentioned is located at /var/www/index.html

 I then restarted apache and got the following:
 liam@liam-Y500:~$ sudo /etc/init.d/apache2 restart
 [sudo] password for liam:
 apache2: Syntax error on line 240 of /etc/apache2/apache2.conf: Include
 directory '/var/www/mods-enabled' not found
 Action 'configtest' failed.
 The Apache error log may have more information.
...fail!

 What is going wrong? If I go to localhost in a web browser I still get
 It works!.


 On 21 April 2013 20:48, Jimmy Eng jke...@gmail.com wrote:

 Liam,

 At this point, perl is not the issue for you; the problem is definitely
 with your apache config.  The WEBSERVER_ROOT environment variable should
 point to your webserver's document root, i.e. the directory that contains
 the index.html file that says "It works!".  Is it possibly /var/www/html?
  I can't imagine it really being /usr/local/ as you have in your apache2
 config file for TPP below.  One step at a time, so fix that first, restart
 apache, and try to access http://localhost/tpp/cgi-bin/tpp_gui.pl.
  If the page doesn't open, what is the error message in your apache error
 log file?

 # directory for tpp's executable files
 ScriptAlias /tpp/cgi-bin /usr/local/tpp/cgi-bin
 <Directory /usr/local/tpp/cgi-bin>
 #AllowOverride AuthConfig Limit
 AllowOverride All
 Options Indexes +FollowSymLinks MultiViews ExecCGI +Includes
 AddHandler default-handler .jpg .png .css .ico .gif

Re: [spctools-discuss] Re: TPP 4.6.2 installation problem Ubuntu 12.04 LTS virtual box

2013-04-21 Thread Jimmy Eng
Apache2 is a little foreign to me.  I've only set up one apache2/TPP server,
which I don't have easy access to in order to take a look.  See if this
link helps you:

http://serverfault.com/questions/135993/how-do-i-change-the-document-root-of-a-linux-apache-server


On Sun, Apr 21, 2013 at 1:27 PM, Liam Bell bell.l...@gmail.com wrote:

 Hi Jimmy,

 I set SetEnv WEBSERVER_ROOT /var/www in my
 /etc/apache2/sites-available/tpp-4.6.2.
 But when I checked what my DocumentRoot variable was set to in my
 apache2.conf file, there was no such variable set in that file.
 Where should this be set?

 Thanks for your help so far, I appreciate it.


 On 21 April 2013 21:48, Jimmy Eng jke...@gmail.com wrote:

 Liam,

 Here's a snippet of what you posted 3 days ago.  Note the entry SetEnv
 WEBSERVER_ROOT.  Set this to the same thing you have DocumentRoot set to
 in apache2.conf.


  On Thu, Apr 18, 2013 at 2:08 PM, Liam Bell bell.l...@gmail.com wrote:

 And triple checked the paths that were set in the
 /etc/apache2/sites-available/tpp-4.6.2:
 <VirtualHost *:80>

 # directory to store data for web browser viewing
 Alias /tpp/data /usr/local/tpp/data
 <Directory /usr/local/tpp/data>
 AllowOverride None
 Options Indexes +FollowSymLinks Includes
 Order allow,deny
 Allow from all
 </Directory>

 # directory for tpp's html resources (css, js, images, etc)
 Alias /tpp/html /usr/local/tpp/html
 <Directory /usr/local/tpp/html>
 AllowOverride None
 Options Includes Indexes FollowSymLinks MultiViews
 Order allow,deny
 Allow from all
 </Directory>

 # directory for tpp's schema resources
 <Directory /usr/local/tpp/schema>
 AllowOverride None
 Options Includes Indexes FollowSymLinks MultiViews
 Order allow,deny
 Allow from all
 </Directory>

 # directory for tpp's executable files
 ScriptAlias /tpp/cgi-bin /usr/local/tpp/cgi-bin
 <Directory /usr/local/tpp/cgi-bin>
 #AllowOverride AuthConfig Limit
 AllowOverride All
 Options Indexes +FollowSymLinks MultiViews ExecCGI +Includes
 AddHandler default-handler .jpg .png .css .ico .gif
 AddHandler cgi-script .cgi .pl
 Order allow,deny
 Allow from all
 SetEnv WEBSERVER_ROOT /usr/local/
 *Is it possible the above line should read /usr/local/tpp/ ???*
 #SetEnv WEBSERVER_ROOT /var/www
 </Directory>

 # Enables Lorikeet spectrum display program to work for linux
 Alias /ISB /usr/local/tpp
 </VirtualHost>



 On Sun, Apr 21, 2013 at 12:35 PM, Liam Bell bell.l...@gmail.com wrote:

 Where can I find what my WEBSERVER_ROOT variable has been set to? Which
 configuration file is it in?


 On 21 April 2013 21:25, Jimmy Eng jke...@gmail.com wrote:

 Liam,

 Your ServerRoot should probably still be /etc/apache2.  ServerRoot is
 the root of the directory tree where your apache config files are kept.
  What I was referring to was your DocumentRoot which is what your
 WEBSERVER_ROOT environment variable should be set to.  This DocumentRoot is
 the root directory where you serve your documents (i.e. index.html).


 On Sun, Apr 21, 2013 at 12:04 PM, Liam Bell bell.l...@gmail.com wrote:

 Hi Jimmy,

 I looked up the WEBSERVER_ROOT environment variable in my
 /etc/apache2/apache2.conf file.

 I think the relevant section is this:
 # Global configuration
 #

 #
 # ServerRoot: The top of the directory tree under which the server's
 # configuration, error, and log files are kept.
 #
 # NOTE!  If you intend to place this on an NFS (or otherwise network)
 # mounted filesystem then please read the LockFile documentation
 (available
 # at URL:
 http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile);
 # you will save yourself a lot of trouble.
 #
 # Do NOT add a slash at the end of the directory path.
 #
 #ServerRoot /etc/apache2

 To which I edited so that,
 ServerRoot /var/www

 The index.html file you mentioned is located at /var/www/index.html

 I then restarted apache and got the following:
 liam@liam-Y500:~$ sudo /etc/init.d/apache2 restart
 [sudo] password for liam:
 apache2: Syntax error on line 240 of /etc/apache2/apache2.conf:
 Include directory '/var/www/mods-enabled' not found
 Action 'configtest' failed.
 The Apache error log may have more information.
...fail!

 What is going wrong? If I go to localhost in a web browser I still get
 It works!.


 On 21 April 2013 20:48, Jimmy Eng jke...@gmail.com wrote:

 Liam,

 At this point, perl is not the issue for you; the problem is
 definitely with your apache config.  The WEBSERVER_ROOT environment
 variable should point to your webserver's document root, i.e. the directory
 that contains

Re: [spctools-discuss] Instalation of TPP on Ubuntu 11.10 - make: /find_arch.sh: Command not found

2013-04-18 Thread Jimmy Eng
Is "sudo make && make install" possibly equivalent to "sudo make" followed
by "make install"?  If so, that explains the permissions issue with "make
install".  It's easy to test.  Just do either of the following and see if
the copy permissions issue goes away:

   sudo make; sudo make install

or

   sudo make && sudo make install
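The underlying reason: '&&' joins two independent commands, and a prefix such as sudo applies only to the first one. The same scoping can be demonstrated with 'env' standing in for sudo:

```shell
# 'env DEMO=set' plays the role of sudo here: it modifies only the first
# command in the '&&' chain, so the second runs without the variable, just
# as 'make install' runs without root in "sudo make && make install"
env DEMO=set sh -c 'echo first:${DEMO:-unset}' && sh -c 'echo second:${DEMO:-unset}'
```

The first command prints "first:set" and the second prints "second:unset", mirroring why the unprivileged "make install" hits permission errors.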


On Thu, Apr 18, 2013 at 1:11 PM, Liam Bell bell.l...@gmail.com wrote:

 Yes I did, would that not work? Would it need to be a root terminal?


 On 18 April 2013 19:34, Joseph Slagel joseph.sla...@systemsbiology.org wrote:

 You didn't perchance use sudo, did you?  As in "sudo make && make install"?







Re: [spctools-discuss] Instalation of TPP on Ubuntu 11.10 - make: /find_arch.sh: Command not found

2013-04-18 Thread Jimmy Eng
Just note that you linked to setup instructions for TPP 4.5.2, but I'm
assuming you're installing the latest 4.6.2 version.  If so, look at the
end of the README file for the mod_rewrite settings that you'll need to add
to your apache config (and anything else that might've changed).


On Thu, Apr 18, 2013 at 1:29 PM, Liam Bell bell.l...@gmail.com wrote:

 I went another route,

 sudo -i

 and then;

 make && make install

 Ran without any issues...

 It's set up and I've followed the instructions from here:
 http://tools.proteomecenter.org/wiki/index.php?title=TPP_4.5.2:_Installing_on_Ubuntu_10.04.3
 for setting up apache, but now that does not seem to be working.

 I'll take a look through the forums and see what I can find about the
 matter...

 Thanks everybody for your help so far!


 On 18 April 2013 22:18, Jimmy Eng jke...@gmail.com wrote:

 Is "sudo make && make install" possibly equivalent to "sudo make"
 followed by "make install"?  If so, that explains the permissions issue
 with "make install".  It's easy to test.  Just do either of the following
 and see if the copy permissions issue goes away:

    sudo make; sudo make install

 or

    sudo make && sudo make install


 On Thu, Apr 18, 2013 at 1:11 PM, Liam Bell bell.l...@gmail.com wrote:

 Yes I did, would that not work? Would it need to be a root terminal?


 On 18 April 2013 19:34, Joseph Slagel joseph.sla...@systemsbiology.org wrote:

 You didn't perchance use sudo, did you?  As in "sudo make && make install"?













Re: [spctools-discuss] Lorikeet does not produce image

2013-04-08 Thread Jimmy Eng
Damian,

I'll contact you offline to see if we can figure out what's happening.
 Both Lorikeet and the old Comet viewer access the underlying spectra in
the same way, so the fact that the old viewer is having an issue means
that Lorikeet will too.

- Jimmy


On Mon, Apr 8, 2013 at 12:15 PM, GATTACA dfer...@umich.edu wrote:

 Hi.

 I'm running TPP 4.6.2 on RHEL 6.4 (64bit) fully patched.
 Users are having problems viewing spectra in this version with lorikeet.

 It doesn't happen for all data sets but generally speaking the older the
 interact XML files are, the less likely they are to be viewable in lorikeet.

 If I click on the 'Ions2' link in a pepXML file I get a grey screen with
 nothing in the window. Here is the URL for the page:

 http://111.111.111.111/tpp/cgi-bin/plot-msms-js.cgi?PrecursorMassType=1&FragmentMassType=0&PepMass=1331.6054&Pep=SLRPGTAEQDEK&Dta=/tpp.data/venky/UM_2013_1308.2013_03_05.091612/UM_2013_1308_50%/UM_2013_1308_50%.00028.00028.2.dta

 Nothing gets written to the error logs. The http access logs report this:
 [08/Apr/2013:15:01:53 -0400] GET
 /tpp/cgi-bin/plot-msms-js.cgi?PrecursorMassType=1&FragmentMassType=0&PepMass=1331.6054&Pep=SLRPGTAEQDEK&Dta=/tpp.data/venky/UM_2013_1308.2013_03_05.091612/UM_2013_1308_50%/UM_2013_1308_50%.00028.00028.2.dta
 HTTP/1.1 200 2138

 Using the old Comet viewer at least generates an error on the browser:
 Error - cannot get scan number from input file
 /tpp.data/venky/UM_2013_1308.2013_03_05.091612/; unexpected dta name format

 The error logs for httpd also get something:
 [Mon Apr 08 15:09:56 2013] [error] [client 111.111.111.111] script not
 found or unable to stat: /usr/local/apps/tpp/cgi-bin/vbasrur, referer:
 http://111.111.1.111/tpp/cgi-bin/PepXMLViewer.cgi?page=1columns=Pprobability%2CGspectrum%2CSexpect%2CGions2%2CGpeptide%2CGprotein%2CGcalc_neutral_pep_mass%2CGionsdisplayState=columnsOptionsDivexportSpreadsheet=0sortField2=GspectrumsortDir2=0FmPprobability2=FMPprobability2=xmlFileName=%2Ftpp.data%2Fvbasrur%2FUM_2013_1308.2013_03_05.091612%2Finteract.pep.xmlperPage=50sortField=GspectrumsortDir=0highlightedPeptideText=highlightedProteinText=highlightedSpectrumText=expandProteinList=condenseminimizeTableHeaders=yesrequiredAA=requiredPeptideText=requiredProteinText=requiredSpectrumText=FmGnum_tol_term=FmShyperscore=FMShyperscore=FmSnextscore=FMSnextscore=FmSbscore=FMSbscore=FmSyscore=FMSyscore=FmSexpect=FMSexpect=FmPprobability=FMPprobability=jumpPage=1


 Any ideas for what's going on?
 Thanks,
 Damian









Re: [spctools-discuss] COMET searches with QExactive data

2013-04-01 Thread Jimmy Eng
Dave,

Mike Hoopmann implemented a nice fix for searches using small bins.  The
sources currently in the Comet SourceForge trunk should be fully functional.  One
new parameter entry, use_sparse_matrix (see the Comet version
2013.01 params in the link below), controls whether or not the sparse
matrix data representation is used.  For small bins, this not only
addresses the memory use; those searches are also a little faster.

http://comet-ms.sourceforge.net/parameters/parameters_201301/
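As a sketch, turning the new representation on in comet.params might look like
the following (the parameter name is from the 2013.01 docs linked above; the
bin width shown is only an illustrative high-res value, not a recommendation):

```
use_sparse_matrix = 1       # 1 = use the sparse matrix representation, 0 = off
fragment_bin_tol = 0.02     # small fragment bin for high-res ms/ms data
```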

There's one other little feature I'd like to add before putting out another
official release but the sparse matrix data representation to deal with
this issue (memory use and small bin sizes) is the major feature in the
next version.  When the next release happens, I'll document memory use and
compare small vs. large bins on high res data (among other analysis).

- Jimmy


On Mon, Apr 1, 2013 at 2:33 PM, Dave Trudgian 
david.trudg...@utsouthwestern.edu wrote:

 Hi Philip,

 Sorry to drag up an old thread, but I wondered if you had info about the
 effects on results of limiting Comet memory usage by using a large bin, vs
 using the 64-bit binary and small bin?

 We are primarily working with QExactive data here. I was thinking of
 adding Comet to our pipeline, and checking on the bin-size effect on
 results - but cheekily wondered if you or anyone else has any feeling for
 this?

 Cheers,

 Dave Trudgian


 On Thursday, January 17, 2013 6:56:11 AM UTC-6, Philip Brownridge wrote:

 Hello Jimmy, thank you very much for your quick reply! You're completely
 correct about the fragment bin value, when I ran Comet (thank you for name
 guidance) using the LTQ settings it worked fine! When I ran it on the high
 resolution settings with task manager on, I saw that I was getting the
 calloc error at about 2Gb of memory. I'm running Comet on a win7-64bit
 machine with 24Gb of memory and the program is called comet-win32, please
 could you tell me whether it would be possible to recompile Comet for win64
 and perform larger searches? Sorry if this is a naive question, I'm very
 much a beginner when it comes to code.
 thank you again,
 Philip

 On Wednesday, 16 January 2013 19:55:58 UTC, Jimmy Eng wrote:

 Philip,

 I'm guessing you're specifying a small fragment_bin_tol value for the
 high-res ms/ms spectra.  This causes Comet (not all capitalized) to use a
 ton of memory and you're just running out of memory.

 On the following UWPR SEQUEST page, there's a table correlating a set of
 fragment_bin_tol settings vs. # input spectra vs. memory used:
  https://proteomicsresource.washington.edu/sequest_release/release_201201.php
 These numbers apply to Comet as well.

 To address the problem, run smaller searches i.e. run Comet on a subset
 of your input spectra.  You can do this either with the scan_range
 parameter in the params file or simply invoke the searches such as
comet yourfile.mzXML:1-3000
comet yourfile.mzXML:3001-6000
comet yourfile.mzXML:6001-9000

 You're going to get multiple outputs if you do this, which will need to be
 handled.  Splitting searches and managing results is something that some
 script should do for you; some day I'll throw some simple Windows and Linux
 example scripts on the Comet website for users.

 Hopefully we'll have a workaround to address the memory use for these
 small fragment_bin_tol values in the semi-near future (thanks to Mike
 Hoopmann here who has implemented sparse matrix support in the code which
 I'm actively working with now).

 - Jimmy


 On Wed, Jan 16, 2013 at 11:05 AM, Philip Brownridge 
 philip.b...@gmail.com wrote:

 Hello all, please can I apologise if I have posted this in the wrong
 group, but I can't find a COMET group and there are other COMET postings
 here so I hope someone can help me! I'm trying to use some QExactive data
 with Comet and I keep getting either a calloc error message or a message
 saying there is no search to perform. I can get COMET searches to work on
 LTQ Orbitrap data but not QExactive. I have been using Proteowizard to
 convert the RAW files to mzML. I'm using the default params file with high
 resolution fragment ion parameters.
 If anybody is using COMET with QExactive data, please could they
 let me know where I'm wrong!
 thanks in advance,
 Philip




Re: [spctools-discuss] TPP 4.6.2 does not generate interact.prot.shtml or interact.pep.shtml

2013-03-27 Thread Jimmy Eng
No, you can still view the results on a Linux system by just clicking on
the .pep.xml and .prot.xml files.  But doing this requires a change to your
Apache configuration.  Look at the README for instructions on mod_rewrite.
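For reference, the mod_rewrite stanza ends up looking roughly like the
following (a sketch only; the viewer CGI names and paths here are
placeholders, so use the exact rules from the README for your install):

```apache
RewriteEngine on
# Hand .pep.xml / .prot.xml links to the TPP viewer CGIs
RewriteRule ^(.+\.pep\.xml)$  /tpp/cgi-bin/PepXMLViewer.cgi?xmlFileName=$1 [NE,PT]
RewriteRule ^(.+\.prot\.xml)$ /tpp/cgi-bin/protxml2html.pl?xmlfile=$1 [NE,PT]
```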


On Wed, Mar 27, 2013 at 5:47 AM, GATTACA dfer...@umich.edu wrote:

 Yes, I read that in the release notes but thought it applied to the Windows
 environment. My system is Linux and I've never used the Petunia interface
 before.

 Is the Petunia interface now a requirement for viewing the XML files?









Re: [spctools-discuss] Comet and multiple variable modifications

2013-02-22 Thread Jimmy Eng
Brian,

That will not work; there's currently no way to specify multiple variable mods
on the N-terminus, so you'll have to manage with some workaround.  There
are plans to treat the terminal mods the same way as amino acid mods in the
code (and this was a good reminder to go implement it) but it will be a
while before that version is available.
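In the meantime, the two-search workaround with the variable_N_terminus
parameter Brian mentions below would look roughly like this, reusing the
masses from his example (a sketch only; check it against the params
documentation for your Comet version):

```
# comet.light.params -- light label on K plus the N-terminus
variable_mod1 = 15.9949 M 0 3
variable_mod2 = 42.04695 K 0 3
variable_N_terminus = 42.04695

# comet.heavy.params -- heavy label on K plus the N-terminus
variable_mod1 = 15.9949 M 0 3
variable_mod2 = 48.084606 K 0 3
variable_N_terminus = 48.084606
```

Run Comet once with each params file and combine the two pep.xml result sets
downstream.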


On Fri, Feb 22, 2013 at 2:39 AM, Brian Hampton bhamp...@my.abrf.org wrote:

 I am trying to determine how to specify multiple variable modifications in
 the comet.params file.

 I have stable isotope labeled primary amines so I need to specify 2
 different mods on K and the N-terminus.

 Tandem uses [ to specify the N-terminus.  What notation does Comet use?

 In the below example, I am guessing (probably incorrectly) n is used for
 denoting the N-terminus to Comet.  Will this work?  In Tandem, a similar
 approach would not work but there is a work around.  For Comet, would I
 have to run two searches one for heavy and another for light using the
 variable_N_terminus=  line in the comet.params for specifying the
 N-terminus?

 variable_mod1 = 15.9949 M 0 3
 variable_mod2 = 42.04695 nK 0 3
 variable_mod3 = 48.084606 nK 0 3


 Thanks in advance for any help.

 Brian

 Brian Hampton
 Protein Analysis Lab
 Center for Vascular and Inflammatory Diseases
 University of Maryland School of Medicine
 800 West Baltimore Street Rm 307
 Baltimore  MD  21201
 V: 410-706-8207
 F: 410-706-8234









Re: [spctools-discuss] Re: JRAP question

2013-01-30 Thread Jimmy Eng
David,

You are centroiding (-c) with ReAdW and not with msconvert.  Remove
centroiding in ReAdW or add it to msconvert and then compare results.
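Concretely, either of these pairings should make the outputs comparable (the
msconvert filter syntax shown is an assumption from memory; confirm it
against msconvert --help for the ProteoWizard version in use):

```
ReAdW.exe --mzXML -c input.raw                                       (centroid)
msconvert.exe input.raw --mzXML --32 --filter "peakPicking true 1-"  (centroid)

ReAdW.exe --mzXML input.raw                                          (profile)
msconvert.exe input.raw --mzXML --32                                 (profile)
```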


On Wed, Jan 30, 2013 at 12:41 PM, David Zhao weizhao6...@gmail.com wrote:

 Hi Matt and Jimmy,

 I'm processing a Thermo Q Exactive generated raw file, and I see some
 discrepancies in the ms/ms peak data of dta files generated with the
 mzxml2search tool from mzXML produced by msconvert vs. ReAdW-4.6.0.exe.
 mzXML was generated using standard parameters without any filter.
 ReAdW.exe --mzXML -c
 msconvert.exe --mzXML --32

 and I used mzxml2search from TPP to convert mzXML to dta files, please
 find attached dta files for scan 3, they are quite different.

 Thanks,

 David


 On Mon, Jan 21, 2013 at 1:08 PM, Jimmy Eng jke...@gmail.com wrote:
  David,
 
  ReAdW requires the corresponding XRawfile2.dll from Thermo's Xcalibur
  software; it currently won't work with the one from MSFileReader.  If I
 get
  time this week, I can see if I can get it working with MSFileReader.
 
 
  On Mon, Jan 21, 2013 at 12:51 PM, David Zhao weizhao6...@gmail.com
 wrote:
 
  Thanks, Jimmy. Another question (sorry :-): does ReAdW expect
  Xrawfile2.dll to reside in a particular directory? My PC is a 64-bit Windows 7
  machine, I downloaded Msfilereader from Thermo (I don't have Xcalibur
  installed on my machine), and installed the 32 bit version. Msconvert
 works
  fine. However, even after I run regsvr32 C:\Program Files
  (x86)\Thermo\MSFileReader\XRawfile2.dll, ReAdW still gives me an error:
  Unable to initialize Xcalibur 2.0 interface; falling back to 1.4
 interface
  Unable to initialize XCalibur 1.4 interface; check installation
  hint: try running the command regsvr32
  C:\path_to_Xcalibur_dll\XRawfile2.dll
  unable to interface with Thermo library
 
  So first of all, can I use the dll from MSFileReader with ReAdW.exe, and if I
  can, how should I register it so it'll be found by the program?
  Thanks so much!
 
  David
 
  On Sun, Jan 20, 2013 at 11:22 AM, Jimmy Eng jke...@gmail.com wrote:
 
  You can find the ReAdW binary I use here:
  https://proteomicsresource.washington.edu/tools.php
 
  There are no huge changes from previous releases.  Mainly minor fixes to
  support the Q Exactive instrument, remove 0 intensity peaks for file
 size,
  etc.
 
 
  On Sun, Jan 20, 2013 at 8:50 AM, David Zhao weizhao6...@gmail.com
  wrote:
 
  Thanks Jimmy. I may need an updated version of ReAdW. Do you know if I
  can get ReAdW or msconvert to output raw file header information, such as
  the method used, serial number, processing mode, etc.
 
  David
 
 
  On Wednesday, January 16, 2013 9:43:47 AM UTC-8, Jimmy Eng wrote:
 
  Yes, that is the standalone project which is now also used in the
 TPP.
 
  For converting Thermo raw files to mzXML, consider using msconvert
  which is now the default converter in the TPP.
 
  I happen to still use ReAdW though.  That code actually does get updated
  as needed, but it doesn't get built and distributed via the Sashimi
  SourceForge project anymore.  Feel free to email me offlist if you
 want an
  updated binary.
 
  There is no one who currently owns jrap.  Consider taking ownership
  of it yourself if you think you can update it.
 
 
  On Wed, Jan 16, 2013 at 8:40 AM, David Zhao weizh...@gmail.com
 wrote:
 
  Thanks, Jimmy. Is this the project you are referring to:
  https://code.google.com/p/mstoolkit/ ?
  Is ReAdW still the de facto tool to use to convert Thermo's raw
 files
  to mzXML? It hasn't been updated for years, but like you said, the mzXML
  spec hasn't changed either.
 
  BTW, is jrap no longer supported? Could you point to the right
 person
  if it still is.
  Thanks,
 
  David
 
 
  On Wednesday, January 16, 2013 8:08:00 AM UTC-8, Jimmy Eng wrote:
 
  I believe MzXML2Search currently uses the mzParser code which you
 can
  find in MSToolkit.
 
 
  On Tue, Jan 15, 2013 at 10:23 PM, David Zhao weizh...@gmail.com
  wrote:
 
  Thanks Jimmy and Eric. I'll look into other alternatives as
  suggested. My problem is that we have a lot of code that still relies on
  jrap to parse mzXML files, but jrap can't seem to parse mzXML generated by
  ReAdW from the latest Thermo Q Exactive instrument (some scans return
  negative m/z and intensities).
  However, mzxml2search from TPP can parse the mzXML just fine,
 which
  makes me think the jrap is out of date.  What parser does
 mzXML2search use
  in this case then?
  Thanks,
 
  David
 
  On Tuesday, January 15, 2013 9:25:41 PM UTC-8, Eric Deutsch wrote:
 
  And if you’re in the market for a Java implementation (like
 JRAP),
  then I’d recommend jmzreader:
 
  http://code.google.com/p/jmzreader/
 
 
 
  Cheers,
 
  Eric
 
 
 
 
 
  From: spctools...@googlegroups.com
  [mailto:spctools...@googlegroups.com] On Behalf Of Jimmy Eng
 
 
  Sent: Tuesday, January 15, 2013 9:18 PM
  To: spctools...@googlegroups.com
 
  Subject: Re: [spctools-discuss] Re: JRAP question
 
 
 
  At this point, you can probably not worry about jrap not being updated
  because the mzXML format isn't really in flux.

Re: [spctools-discuss] Re: JRAP question

2013-01-20 Thread Jimmy Eng
Hopefully the msconvert developers will chime in on whether this info is
already or can be exported; this certainly won't happen for ReAdW.


On Sun, Jan 20, 2013 at 8:50 AM, David Zhao weizhao6...@gmail.com wrote:

 Thanks Jimmy. I may need an updated version of ReAdW. Do you know if I can
 get ReAdW or msconvert to output raw file header information, such as the
 method used, serial number, processing mode, etc.

 David


 On Wednesday, January 16, 2013 9:43:47 AM UTC-8, Jimmy Eng wrote:

 Yes, that is the standalone project which is now also used in the TPP.

 For converting Thermo raw files to mzXML, consider using msconvert which
 is now the default converter in the TPP.

 I happen to still use ReAdW though.  That code actually does get updated
 as needed, but it doesn't get built and distributed via the Sashimi
 SourceForge project anymore.  Feel free to email me offlist if you want an
 updated binary.

 There is no one who currently owns jrap.  Consider taking ownership of
 it yourself if you think you can update it.


 On Wed, Jan 16, 2013 at 8:40 AM, David Zhao weizh...@gmail.com wrote:

 Thanks, Jimmy. Is this the project you are referring to:
 https://code.google.com/p/mstoolkit/ ?
 Is ReAdW still the de facto tool to use to convert Thermo's raw files to
 mzXML? It hasn't been updated for years, but like you said, the mzXML spec
 hasn't changed either.

 BTW, is jrap no longer supported? Could you point to the right person if
 it still is.
 Thanks,

 David


 On Wednesday, January 16, 2013 8:08:00 AM UTC-8, Jimmy Eng wrote:

 I believe MzXML2Search currently uses the mzParser code which you can
 find in MSToolkit.


 On Tue, Jan 15, 2013 at 10:23 PM, David Zhao weizh...@gmail.comwrote:

 Thanks Jimmy and Eric. I'll look into other alternatives as suggested.
 My problem is that we have a lot of code that still relies on jrap to parse
 mzXML files, but jrap can't seem to parse mzXML generated by ReAdW from the
 latest Thermo Q Exactive instrument (some scans return negative m/z and
 intensities).
 However, mzxml2search from TPP can parse the mzXML just fine, which
 makes me think the jrap is out of date.  What parser does mzXML2search use
 in this case then?
 Thanks,

 David

 On Tuesday, January 15, 2013 9:25:41 PM UTC-8, Eric Deutsch wrote:

 And if you’re in the market for a Java implementation (like JRAP),
 then I’d recommend jmzreader:

 http://code.google.com/p/jmzreader/



 Cheers,

 Eric





 *From:* spctools...@googlegroups.**com [mailto:spctools...@**
 googlegroups.com] *On Behalf Of *Jimmy Eng

 *Sent:* Tuesday, January 15, 2013 9:18 PM
 *To:* spctools...@googlegroups.**com

 *Subject:* Re: [spctools-discuss] Re: JRAP question



 At this point, you can probably not worry about jrap not being
 updated because the mzXML format isn't really in flux.  But if you want 
 to
 look for alternatives ...



 Quite a few folks here would tell you to look at ProteoWizard
 http://proteowizard.sourceforge.net/

 Possibly look into OpenMS (I honestly have no clue how pertinent this
 is):  http://open-ms.sourceforge.net/

 Also MSToolkit:  https://code.google.com/p/mstoolkit/



 Other folks should chime in if there are other parsing libraries out
 there (which I know there are).



 On Tue, Jan 15, 2013 at 7:20 PM, David Zhao weizh...@gmail.com
 wrote:

 Thanks Jimmy! So it sounds like jrap is not being updated anymore. If
 I'd like to keep up to date with an mzXML parser, which library should I
 follow?

 Thanks,



 David



 On Tuesday, January 15, 2013 7:14:17 PM UTC-8, Jimmy Eng wrote:

 I'm not a jrap user nor do I know what files are associated with what
 versions.  But if you are looking for jrap_3.0.jar and
 jakarta-regexp-1.4.jar, you can find them here:
 http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/jrap/sax2/





Re: [spctools-discuss] COMET searches with QExactive data

2013-01-17 Thread Jimmy Eng
I just emailed you off-list regarding the x64 binary.

On Thu, Jan 17, 2013 at 4:56 AM, Philip Brownridge 
philip.brownri...@gmail.com wrote:

 Hello Jimmy, thank you very much for your quick reply! You're completely
 correct about the fragment bin value, when I ran Comet (thank you for name
 guidance) using the LTQ settings it worked fine! When I ran it on the high
 resolution settings with task manager on, I saw that I was getting the
 calloc error at about 2Gb of memory. I'm running Comet on a win7-64bit
 machine with 24Gb of memory and the program is called comet-win32, please
 could you tell me whether it would be possible to recompile Comet for win64
 and perform larger searches? Sorry if this is a naive question, I'm very
 much a beginner when it comes to code.
 thank you again,
 Philip






Re: [spctools-discuss] Re: JRAP question

2013-01-16 Thread Jimmy Eng
I believe MzXML2Search currently uses the mzParser code which you can find
in MSToolkit.


On Tue, Jan 15, 2013 at 10:23 PM, David Zhao weizhao6...@gmail.com wrote:

 Thanks Jimmy and Eric. I'll look into other alternatives as suggested. My
 problem is that we have a lot of code that still relies on jrap to parse mzXML
 files, but jrap can't seem to parse mzXML generated by ReAdW from the latest
 Thermo Q Exactive instrument (some scans return negative m/z and
 intensities).
 However, mzxml2search from TPP can parse the mzXML just fine, which makes
 me think the jrap is out of date.  What parser does mzXML2search use in
 this case then?
 Thanks,

 David

 On Tuesday, January 15, 2013 9:25:41 PM UTC-8, Eric Deutsch wrote:

 And if you’re in the market for a Java implementation (like JRAP), then
 I’d recommend jmzreader:

 http://code.google.com/p/jmzreader/



 Cheers,

 Eric





 From: spctools...@googlegroups.com [mailto:spctools...@googlegroups.com] On Behalf Of Jimmy Eng

 Sent: Tuesday, January 15, 2013 9:18 PM
 To: spctools...@googlegroups.com

 Subject: Re: [spctools-discuss] Re: JRAP question



 At this point, you can probably not worry about jrap not being updated
 because the mzXML format isn't really in flux.  But if you want to look for
 alternatives ...



 Quite a few folks here would tell you to look at ProteoWizard
 http://proteowizard.sourceforge.net/

 Possibly look into OpenMS (I honestly have no clue how pertinent this
 is):  http://open-ms.sourceforge.net/

 Also MSToolkit:  https://code.google.com/p/mstoolkit/



 Other folks should chime in if there are other parsing libraries out
 there (which I know there are).



 On Tue, Jan 15, 2013 at 7:20 PM, David Zhao weizh...@gmail.com wrote:

 Thanks Jimmy! So it sounds like jrap is not being updated anymore. If I'd
 like to keep up to date with an mzXML parser, which library should I follow?

 Thanks,



 David



 On Tuesday, January 15, 2013 7:14:17 PM UTC-8, Jimmy Eng wrote:

 I'm not a jrap user nor do I know what files are associated with what
 versions.  But if you are looking for jrap_3.0.jar and
 jakarta-regexp-1.4.jar, you can find them here:
 http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/jrap/sax2/









Re: [spctools-discuss] Re: JRAP question

2013-01-16 Thread Jimmy Eng
Yes, that is the standalone project which is now also used in the TPP.

For converting Thermo raw files to mzXML, consider using msconvert which is
now the default converter in the TPP.

I happen to still use ReAdW though.  That code actually does get updated as
needed, but it doesn't get built and distributed via the Sashimi SourceForge
project anymore.  Feel free to email me offlist if you want an updated
binary.

There is no one who currently owns jrap.  Consider taking ownership of it
yourself if you think you can update it.


On Wed, Jan 16, 2013 at 8:40 AM, David Zhao weizhao6...@gmail.com wrote:

 Thanks, Jimmy. Is this the project you are referring to:
 https://code.google.com/p/mstoolkit/ ?
 Is ReAdW still the de facto tool to use to convert Thermo's raw files to
 mzXML? It hasn't been updated for years, but like you said, the mzXML spec
 hasn't changed either.

 BTW, is jrap no longer supported? Could you point to the right person if
 it still is.
 Thanks,

 David


 On Wednesday, January 16, 2013 8:08:00 AM UTC-8, Jimmy Eng wrote:

 I believe MzXML2Search currently uses the mzParser code which you can
 find in MSToolkit.


 On Tue, Jan 15, 2013 at 10:23 PM, David Zhao weizh...@gmail.com wrote:

 Thanks Jimmy and Eric. I'll look into other alternatives as suggested.
 My problem is that we have a lot of code that still relies on jrap to parse
 mzXML files, but jrap can't seem to parse mzXML generated by ReAdW from the latest
 Thermo Q Exactive instrument (some scans return negative m/z and
 intensities).
 However, mzxml2search from TPP can parse the mzXML just fine, which
 makes me think the jrap is out of date.  What parser does mzXML2search use
 in this case then?
 Thanks,

 David

 On Tuesday, January 15, 2013 9:25:41 PM UTC-8, Eric Deutsch wrote:

 And if you’re in the market for a Java implementation (like JRAP), then
 I’d recommend jmzreader:

 http://code.google.com/p/jmzreader/



 Cheers,

 Eric





 From: spctools...@googlegroups.com [mailto:spctools...@googlegroups.com] On Behalf Of Jimmy Eng

 Sent: Tuesday, January 15, 2013 9:18 PM
 To: spctools...@googlegroups.com

 Subject: Re: [spctools-discuss] Re: JRAP question



 At this point, you can probably not worry about jrap not being updated
 because the mzXML format isn't really in flux.  But if you want to look for
 alternatives ...



 Quite a few folks here would tell you to look at ProteoWizard
 http://proteowizard.sourceforge.net/

 Possibly look into OpenMS (I honestly have no clue how pertinent this
 is):  http://open-ms.sourceforge.net/

 Also MSToolkit:  https://code.google.com/p/mstoolkit/



 Other folks should chime in if there are other parsing libraries out
 there (which I know there are).



 On Tue, Jan 15, 2013 at 7:20 PM, David Zhao weizh...@gmail.com wrote:

 Thanks Jimmy! So it sounds like jrap is not being updated anymore. If
 I'd like to keep up to date with an mzXML parser, which library should I
 follow?

 Thanks,



 David



 On Tuesday, January 15, 2013 7:14:17 PM UTC-8, Jimmy Eng wrote:

 I'm not a jrap user nor do I know what files are associated with what
 versions.  But if you are looking for jrap_3.0.jar and
 jakarta-regexp-1.4.jar, you can find them here:
 http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/jrap/sax2/







Re: [spctools-discuss] COMET searches with QExactive data

2013-01-16 Thread Jimmy Eng
Philip,

I'm guessing you're specifying a small fragment_bin_tol value for the
high-res ms/ms spectra.  This causes Comet (not all capitalized) to use a
ton of memory and you're just running out of memory.

On the following UWPR SEQUEST page, there's a table correlating a set of
fragment_bin_tol settings vs. # input spectra vs. memory used:
https://proteomicsresource.washington.edu/sequest_release/release_201201.php
These numbers apply to Comet as well.

To address the problem, run smaller searches, i.e. run Comet on a subset of
your input spectra.  You can do this either with the scan_range parameter
in the params file or simply invoke the searches such as
   comet yourfile.mzXML:1-3000
   comet yourfile.mzXML:3001-6000
   comet yourfile.mzXML:6001-9000

You're going to get multiple outputs if you do this, which will need to be
handled.  Splitting searches and managing results is something that some
script should do for you; some day I'll throw some simple Windows and Linux
example scripts on the Comet website for users.
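Until such scripts exist, here is a minimal sketch of the splitting step in Python (the chunk size of 3000 scans and the last-scan count are assumptions; take the real last scan number from your mzXML):

```python
# Sketch: cover scans 1..last_scan in fixed-size chunks and build the
# corresponding "comet file.mzXML:START-END" command lines shown above.

def scan_ranges(last_scan, chunk=3000):
    """Return (start, end) scan ranges covering 1..last_scan."""
    ranges = []
    start = 1
    while start <= last_scan:
        ranges.append((start, min(start + chunk - 1, last_scan)))
        start += chunk
    return ranges

def comet_commands(mzxml, last_scan, chunk=3000):
    """Build one Comet invocation per scan range."""
    return ["comet %s:%d-%d" % (mzxml, s, e)
            for s, e in scan_ranges(last_scan, chunk)]

if __name__ == "__main__":
    for cmd in comet_commands("yourfile.mzXML", 9000):
        print(cmd)  # hand each line to your shell or subprocess.call()
```

Each run then produces its own output file, which is the merging problem described above.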

Hopefully we'll have a workaround to address the memory use for these small
fragment_bin_tol values in the semi-near future (thanks to Mike Hoopmann
here who has implemented sparse matrix support in the code which I'm
actively working with now).

- Jimmy


On Wed, Jan 16, 2013 at 11:05 AM, Philip Brownridge 
philip.brownri...@gmail.com wrote:

 Hello all, please accept my apologies if I have posted this in the wrong
 group, but I can't find a COMET group and there are other COMET postings
 here, so I hope someone can help me! I'm trying to use some QExactive data
 with Comet and I keep getting either a calloc error message or a message
 saying there is no search to perform. I can get COMET searches to work on
 LTQ Orbitrap data but not QExactive. I have been using Proteowizard to
 convert the RAW files to mzML. I'm using the default params file with high
 resolution fragment ion parameters.
 If anybody is using COMET with QExactive data, please could they let
 me know where I'm wrong!
 thanks in advance,
 Philip

 --
 You received this message because you are subscribed to the Google Groups
 spctools-discuss group.
 To view this discussion on the web visit
 https://groups.google.com/d/msg/spctools-discuss/-/Jr5k5f4wcY0J.
 To post to this group, send email to spctools-discuss@googlegroups.com.
 To unsubscribe from this group, send email to
 spctools-discuss+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/spctools-discuss?hl=en.





Re: [spctools-discuss] Re: JRAP question

2013-01-15 Thread Jimmy Eng
At this point, you can probably not worry about jrap not being updated
because the mzXML format isn't really in flux.  But if you want to look for
alternatives ...

Quite a few folks here would tell you to look at ProteoWizard
http://proteowizard.sourceforge.net/
Possibly look into OpenMS (I honestly have no clue how pertinent this is):
http://open-ms.sourceforge.net/
Also MSToolkit:  https://code.google.com/p/mstoolkit/

Other folks should chime in if there are other parsing libraries out there
(which I know there are).

On Tue, Jan 15, 2013 at 7:20 PM, David Zhao weizhao6...@gmail.com wrote:

 Thanks Jimmy! So it sounds like jrap is not being updated anymore. If I'd
 like to keep up to date with an mzXML parser, which library should I follow?
 Thanks,

 David


 On Tuesday, January 15, 2013 7:14:17 PM UTC-8, Jimmy Eng wrote:

 I'm not a jrap user, nor do I know which files are associated with which
 versions.  But if you are looking for jrap_3.0.jar and
 jakarta-regexp-1.4.jar, you can find them here:
 http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/jrap/sax2/







Re: [spctools-discuss] Error when running XPRESS

2012-12-12 Thread Jimmy Eng
Carlos,

Unless I don't understand how XPRESS currently runs, I don't believe it
will work on your separate light and heavy searches.  This doesn't explain
the failure to run error but it's worth mentioning up front.  XPRESS
expects a variable modification to denote the difference between your
light peptides and your heavy peptides.  I would suggest you run the
search as follows:  add static modifications n-terminus and lysine for
light dimethyl and add variable modifications of 6.03 to both n-terminus
and lysine.  This way, you do a single search and identify both heavy and
light at once in a format compatible with XPRESS.  I believe you should be
able to use ASAPRatio with your separate searches but I've not paid enough
attention to that tool to be able to answer your question on how to set its
options.  If no one chimes in soon, search the archives of this group.

The failure to run XPRESS likely is associated with the error "Unknown file
type. No file loaded." which I'm guessing means the corresponding spectral
data (in say mzXML or mzML format) is not recorded in the .tandem.pep.xml
files as the TPP expects.  So it doesn't know what it is and can't read the
spectral data.  When naming conventions are not followed strictly, things
tend to fall apart.  Your basename.tandem.pep.xml file names worry me a
bit as the standard convention is basename.pep.xml.  Go ahead and send me
one of your .tandem.pep.xml files directly (or make it available for me to
download somehow) and I'll take a look as I suspect there could be an issue
with the base_name attributes.  And tell me what your Tandem xml files
are named.

Unless other Tandem/TPP experts chime in otherwise, try sticking to the
following naming convention:

Tandem file named:  1F0.xtan.xml
pep.xml file named:  1F0.pep.xml (with base_name attribute value 1F0 or
some variant of that i.e. ./1F0)
mzXML file named:  1F0.mzXML
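A quick way to verify a dataset follows that convention is to check for the expected trio of files; a small sketch (the extensions are the ones from the list above; the directory is whatever holds your data):

```python
import os

def check_naming(directory, base):
    """Report whether the conventionally named TPP inputs exist for base."""
    expected = [base + ext for ext in (".xtan.xml", ".pep.xml", ".mzXML")]
    return {name: os.path.isfile(os.path.join(directory, name))
            for name in expected}

# e.g. check_naming(r"c:\Inetpub\wwwroot\ISB\data", "1F0")
```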

- Jimmy


On Wed, Dec 12, 2012 at 5:04 PM, Carlos cchaves...@gmail.com wrote:

 Dear all

 I am trying to analyze my dimethylation data using XPRESS tool in TPP but
 with no success so far...

 I did two XTandem searches for my mzXML files: one for light dimethyl
 (+28@N-terminus and @K) and another one for heavy (+34@N-terminus and @K)
 and merged them using the 'Analyze peptides' tab.
 The PeptideProphet analysis worked fine; however, when I check the 'Run
 XPRESS' option I keep getting an error message (see below).

 running: C:/Inetpub/tpp-bin/XPressPeptideParser
 XTANDEM_BR1_interact.pep.xml -m0.5 -nn,6.032 -nK,6.032 -c6 -p1
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 WARNING: Found more than one variable mod on N-terminus.
 Unknown file type. No file loaded.

 command C:/Inetpub/tpp-bin/XPressPeptideParser
 XTANDEM_BR1_interact.pep.xml -m0.5 -nn,6.032 -nK,6.032 -c6 -p1 failed:
 Unknown error

 command C:/Inetpub/tpp-bin/XPressPeptideParser
 XTANDEM_BR1_interact.pep.xml -m0.5 -nn,6.032 -nK,6.032 -c6 -p1 exited
 with non-zero exit code: 255
 QUIT - the job is incomplete

 command c:\Inetpub\tpp-bin\xinteract -NXTANDEM_BR1_interact.pep.xml
 -p0.05 -l7 -OAN -d_R -X-m0.5-nn,6.032-nK,6.032-c6-p1 1F0.tandem.pep.xml
 1F0H.tandem.pep.xml 1F1.tandem.pep.xml 1F1H.tandem.pep.xml
 1F2.tandem.pep.xml 1F2H.tandem.pep.xml 1F3.tandem.pep.xml
 1F3H.tandem.pep.xml 1F4.tandem.pep.xml 1F4H.tandem.pep.xml
 1F5.tandem.pep.xml 1F5H.tandem.pep.xml F0.tandem.pep.xml F0H.tandem.pep.xml
 F1.tandem.pep.xml F1H.tandem.pep.xml F2.tandem.pep.xml F2H.tandem.pep.xml
 F3.tandem.pep.xml F3H.tandem.pep.xml F4.tandem.pep.xml F4H.tandem.pep.xml
 failed: Unknown error
 Command FAILED
 RETURN CODE:65280
 

 My other question is regarding ASAPRatio
 How to use it when performing dimethylation quantitation? I mean, since I
 have two modifications for K (light and heavy dimethyl) and two for the
 N-terminus, which 

Re: [spctools-discuss] Re: Problem with reading mzXML file with Sequest input

2012-12-05 Thread Jimmy Eng
Chris,

Try setting the msms_run_summary "base_name" and search_summary "base_name"
to the name of the mzXML file.  In your example below, have both
base_name="e120823DC_EC_04" and make sure e120823DC_EC_04.mzXML exists in
the same directory as the interact.pep.xml.

See if that works to allow you to view the spectra.
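One way to apply that edit without hand-editing is a regex substitution over the pep.xml text; a rough sketch (attribute quoting as in standard pepXML; back up the file first, and the target name here is just the one from this example):

```python
import re

def set_base_names(pepxml_text, new_base):
    """Rewrite every msms_run_summary / search_summary base_name attribute."""
    return re.sub(
        r'((?:msms_run_summary|search_summary)[^>]*?base_name=")[^"]*(")',
        r'\g<1>' + new_base + r'\g<2>',
        pepxml_text)

# usage (hypothetical file name):
# text = open("interact.pep.xml").read()
# open("interact.pep.xml", "w").write(set_base_names(text, "e120823DC_EC_04"))
```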

On Tue, Dec 4, 2012 at 10:09 AM, Chris McKennan cg...@cornell.edu wrote:

 To clarify, I am using Sorcerer (from Sage-N) located on a different
 drive. So, in order to take full advantage of TPP, I have to convert the
 .out files into a pep.xml file that can be read by PeptideProphet,
 iProphet, and ProteinProphet. The problem I am having is then being able to
 view the spectra in the interact.pep.xml viewer, since nowhere in the
 original Sequest-TPP input did it specify a path for either the .dta or
 .mzXML file (it only asks for the .out files and a parameter file).

 I have tried to manually incorporate a path to the mzXML in the
 interact.pep.xml by using:
 msms_run_summary
 base_name="c:\Inetpub\wwwroot\ISB\data\EColi\e120823_EC_1uL\SorcererTests\Parametric"
 search_engine="Sequest" msManufacturer="Thermo Scientific" msModel="LTQ
 Orbitrap Elite" msIonization="nanoelectrospray" msMassAnalyzer="radial
 ejection linear ion trap" msDetector="electron multiplier"
 raw_data_type="raw" raw_data=".mzXML"
 parameter name="spectrum, path"
 value="c:\Inetpub\wwwroot\ISB\data\EColi\e120823_EC_1uL\SorcererTests\Parametric\e120823DC_EC_04.mzXML"/

 (My hope in doing this is to be able to combine multiple analyses into a
 interact.iproph.pep.xml and be able to view the spectra there).

 Is there any way to manipulate the interact.pep.xml further so I can view
 the spectra? Thank you in advance

 -Chris McKennan






Re: [spctools-discuss] process Comet results with Peptideprophet

2012-11-20 Thread Jimmy Eng
Zeyu,

Support for this new version of Comet is not in the released versions of
the TPP yet; looks like it's targeted for TPP 4.7.  You'll have to build
PeptideProphetParser from trunk sources if you can't wait.


On Sun, Nov 18, 2012 at 12:29 AM, zeyu sun szy3...@gmail.com wrote:

 Dear TPP team,
   I went through a Comet analysis for a set of Qtof data (mzXML format)
 and when I try to view the results (pepXML) in TPP, the dot product, delta
 and most Expect values of the spectra are 'unavailable', and as I expected,
 the subsequent PeptideProphet analysis also failed.

 Here is my output setting for the Comet search, is there any thing wrong
 with it?
 #
 output_sqtstream = 0   # 0=no, 1=yes  write sqt to
 standard output
 output_sqtfile = 0 # 0=no, 1=yes  write sqt file
 output_pepxmlfile = 1  # 0=no, 1=yes  write pep.xml file
 output_outfiles = 0# 0=no, 1=yes  write .out files
 print_expect_score = 0 # 0=no, 1=yes to replace Sp with
 expect in out & sqt
 num_output_lines = 5   # num peptide results to show
 show_fragment_ions = 0 # 0=no, 1=yes for out files only
 sample_enzyme_number = 1   # Sample enzyme which is possibly
 different than the one applied to the search.
 #
 Any ideas?
 Thank you!

 --
 *SunSun*






Re: [spctools-discuss] converting shimazdu ms files to mzXML

2012-10-23 Thread Jimmy Eng
Maybe look into Mass++, which was first posted here on spctools-discuss
back in 2008:

https://groups.google.com/forum/?hl=en&fromgroups#!forum/massplusplus
http://www.first-ms3d.jp/english/mass2

On Tue, Oct 23, 2012 at 4:15 PM, Gautam Saxena gsaxena...@gmail.com wrote:
 Does anyone know of a tool that can convert Shimazdu MS files to mzXML
 (preferred) or mgf/mzML etc.? Based on the website of ProteoWizard, it
 doesn't look like the Shimazdu files are supported




Re: [spctools-discuss] x!tandem point mutations

2012-10-02 Thread Jimmy Eng
I'm not aware of any way to do what you want to do.  And
realistically, support for this probably isn't going to occur anytime
soon just because it's not an analysis that's commonly done.

On Tue, Oct 2, 2012 at 2:42 AM, Maik Boehmer maik.boeh...@gmail.com wrote:
 Hi,

 for mass spec analysis of an unsequenced organism we are using X!tandem to
 search against the FASTA database of a related organism allowing for point
 mutations in refinement mode. After transformation of the tandem.xml search
 results to pep.xml, peptides containing point mutations are still present
 but are shown with the wrong sequence information (the one from the FASTA
 file, point mutation not considered) and masses. Is there any way that
 peptides containing point mutations can be shown in the pep.xml file with
 an annotation marking them as mutated, or preferably with the correct
 sequence information?

 Thanks.

 --
 You received this message because you are subscribed to the Google Groups
 spctools-discuss group.
 To view this discussion on the web visit
 https://groups.google.com/d/msg/spctools-discuss/-/jWPYogAxfmMJ.
 To post to this group, send email to spctools-discuss@googlegroups.com.
 To unsubscribe from this group, send email to
 spctools-discuss+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/spctools-discuss?hl=en.




Re: [spctools-discuss] Re: tandem2xml gives invalid pointer error on linux

2012-08-24 Thread Jimmy Eng
FWIW, just doing something as simple as changing the name from
"zz85-control-8-003-9" to "zz85-control-8-003-9y" allows the
conversion to go through fine.  Hopefully someone with time will look
at the convoluted input handling in Tandem2XML as the logic being used
to parse input file name and recognize file extensions is horribly
broken.

On Fri, Aug 24, 2012 at 7:56 AM, Dave Trudgian
david.trudg...@utsouthwestern.edu wrote:
 Gautum,

 Maybe this is related to the problem I observed that seems filename
 dependent:

 https://groups.google.com/d/topic/spctools-discuss/lp9qde0b2OU/discussion

 If you rename the files do they process correctly? Is there anything common
 about the filenames of the 35 that don't work? If you run Tandem2XML against
 the gzipped file does it work?

 DT

 If you try to run the command with the


 On Friday, August 24, 2012 9:39:21 AM UTC-5, Gautam Saxena wrote:

 I have TPP v4.5 RAPTURE rev 2, Build 201208061232 (linux) on Centos 6.3
 (all up to date, 64 bit) installed and pretty much working. For one project,
 though, we ran tandem2xml on 632 X!Tandem files. For 597 of such files, it
 converted fine. For 35 though, we got the invalid pointer error as
 follows:

 *** glibc detected ***
 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML:
 free(): invalid pointer: 0x016fba40 ***
 === Backtrace: =
 /lib64/libc.so.6[0x3e316753c6]
 /usr/lib64/libstdc++.so.6(_ZNSsD1Ev+0x39)[0x3e34a9d4a9]

 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML[0x413821]

 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML[0x405f4e]
 /lib64/libc.so.6(__libc_start_main+0xfd)[0x3e3161ecdd]

 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML[0x405c89]
 === Memory map: 
 0040-00469000 r-xp  00:18 32638052
 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML
 00669000-0066b000 rw-p 00069000 00:18 32638052
 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML
 0066b000-0066d000 rw-p  00:00 0
 016f8000-01719000 rw-p  00:00 0
 [heap]
 3e3120-3e3122 r-xp  fd:00 6809
 /lib64/ld-2.12.so
 3e3141f000-3e3142 r--p 0001f000 fd:00 6809
 /lib64/ld-2.12.so
 3e3142-3e31421000 rw-p 0002 fd:00 6809
 /lib64/ld-2.12.so
 3e31421000-3e31422000 rw-p  00:00 0
 3e3160-3e31789000 r-xp  fd:00 6810
 /lib64/libc-2.12.so
 3e31789000-3e31988000 ---p 00189000 fd:00 6810
 /lib64/libc-2.12.so
 3e31988000-3e3198c000 r--p 00188000 fd:00 6810
 /lib64/libc-2.12.so
 3e3198c000-3e3198d000 rw-p 0018c000 fd:00 6810
 /lib64/libc-2.12.so
 3e3198d000-3e31992000 rw-p  00:00 0
 3e31e0-3e31e17000 r-xp  fd:00 6811
 /lib64/libpthread-2.12.so
 3e31e17000-3e32017000 ---p 00017000 fd:00 6811
 /lib64/libpthread-2.12.so
 3e32017000-3e32018000 r--p 00017000 fd:00 6811
 /lib64/libpthread-2.12.so
 3e32018000-3e32019000 rw-p 00018000 fd:00 6811
 /lib64/libpthread-2.12.so
 3e32019000-3e3201d000 rw-p  00:00 0
 3e3220-3e32215000 r-xp  fd:00 48060
 /lib64/libz.so.1.2.3
 3e32215000-3e32414000 ---p 00015000 fd:00 48060
 /lib64/libz.so.1.2.3
 3e32414000-3e32415000 r--p 00014000 fd:00 48060
 /lib64/libz.so.1.2.3
 3e32415000-3e32416000 rw-p 00015000 fd:00 48060
 /lib64/libz.so.1.2.3
 3e3260-3e32683000 r-xp  fd:00 4968
 /lib64/libm-2.12.so
 3e32683000-3e32882000 ---p 00083000 fd:00 4968
 /lib64/libm-2.12.so
 3e32882000-3e32883000 r--p 00082000 fd:00 4968
 /lib64/libm-2.12.so
 3e32883000-3e32884000 rw-p 00083000 fd:00 4968
 /lib64/libm-2.12.so
 3e33a0-3e33a1 r-xp  fd:00 6655
 /lib64/libbz2.so.1.0.4
 3e33a1-3e33c0f000 ---p 0001 fd:00 6655
 /lib64/libbz2.so.1.0.4
 3e33c0f000-3e33c11000 rw-p f000 fd:00 6655
 /lib64/libbz2.so.1.0.4
 3e3420-3e34216000 r-xp  fd:00 6814
 /lib64/libgcc_s-4.4.6-20120305.so.1
 3e34216000-3e34415000 ---p 00016000 fd:00 6814
 /lib64/libgcc_s-4.4.6-20120305.so.1
 3e34415000-3e34416000 rw-p 00015000 fd:00 6814
 /lib64/libgcc_s-4.4.6-20120305.so.1
 3e34a0-3e34ae8000 r-xp  fd:00 6815
 /usr/lib64/libstdc++.so.6.0.13
 3e34ae8000-3e34ce8000 ---p 000e8000 fd:00 6815
 /usr/lib64/libstdc++.so.6.0.13
 3e34ce8000-3e34cef000 r--p 000e8000 fd:00 6815
 /usr/lib64/libstdc++.so.6.0.13
 3e34cef000-3e34cf1000 rw-p 000ef000 fd:00 6815
 /usr/lib64/libstdc++.so.6.0.13
 3e34cf1000-3e34d06000 rw-p  00:00 0
 2adaee375000-2adaee376000 rw-p  00:00 0
 2adaee382000-2adaee389000 rw-p  00:00 0
 7fffc87d1000-7fffc87e6000 rw-p  00:00 0
 [stack]
 7fffc87ff000-7fffc880 r-xp  00:00 0
 [vdso]
 ff60-ff601000 r-xp  00:00 0
 [vsyscall]



 The program was run as follows:


 /usr/ia_working_dir/DASH/DASH_Server/IA_Common/programs/linux_programs/tpp/bin/Tandem2XML
 zz85-control-8-003-9 

Re: [spctools-discuss] error while using TPP to view annotated spectra while browsing pepXML annotations

2012-08-02 Thread Jimmy Eng
Jessie,

The spectral viewer tries to  read the spectra in .mzXML, .mzML, .dta
or collection of .dta compressed into a .tgz archive.  So it doesn't
assume your data is in a .tgz file, that was just one of the attempts
it went through to try and access the underlying spectral data to
display and you saw the error message when that attempt failed.  It
does not support mzXML files in a .tgz archive so that won't work as
you found out.

Just based on the error message, try either of the following and see
if you get any further trying to visualize that particular spectrum:

1.  place your mzXML or mzML file as
c:/MSGFDB/output/orbivelos/A1A2merged.mzXML
2.  create a .dta file as
c:/MSGFDB/output/orbivelos/A1A2merged/A1A2merged.3.3.2.dta

If either don't work, which is a likely scenario, my guess is that the
created pep.xml file is writing some element or attribute that either
isn't strictly correct or doesn't conform to the expected format.  In
the error message about the .tgz archive, it's trying to look for a
.dta file in the archive with a base name A1A2merged but the .tgz
archive itself has a different base name A1A2mergedmzXML and that
mismatch indicates a problem with your created pep.xml file.

Check your pep.xml file.  I think you want the base_name attribute to
be something like:

msms_run_summary base_name="c:/MSGFDB/output/orbivelos/A1A2merged"
search_summary base_name="./A1A2merged"

Good luck

- Jimmy

On Thu, Aug 2, 2012 at 10:00 AM, Jesse jgme...@ucsd.edu wrote:
 Hi everyone,

 I am trying to trick TPP into letting me use it to view annotated spectra
 from a currently unsupported search algorithm, MS-GFDB (which is an
 excellent tool by the way).  I was able to use a script I found online,
 msgfdb2pepxml.py, to make the pepxml file and it does load using the TPP
 pepXML viewer, but when I try to click on the ions to see a spectra I get
 the following message:

 command c:\Inetpub\wwwroot\..\tpp-bin\tar.exe -tzf
 c:/MSGFDB/output/orbivelos/A1A2mergedmzXML.tgz
 *A1A2merged.3.3.2.dta > /dev/null failed: Operation not permitted
 Error - cannot read spectrum; tried direct .dta, from mzXML/mzData and from
 .tgz

 First of all, my data is not in a .tgz file so I am confused why it assumes
 this.  Second, even if I put the mzXML into that directory as a .tgz
 compressed file, it fails.  Do I need to use dta files for this feature?
 Otherwise does anyone know a good method for quickly viewing annotated
 spectra?

 Thanks in advance!  Best regards,
 Jesse

 --
 You received this message because you are subscribed to the Google Groups
 spctools-discuss group.
 To view this discussion on the web visit
 https://groups.google.com/d/msg/spctools-discuss/-/Jqh4daWrBVoJ.
 To post to this group, send email to spctools-discuss@googlegroups.com.
 To unsubscribe from this group, send email to
 spctools-discuss+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/spctools-discuss?hl=en.




Re: [spctools-discuss] Making a subset of spectra from an mzXML file in Python

2012-06-14 Thread Jimmy Eng
Ben,

Have you done anything special to handle the scan numbers (which
presumably are not consecutive anymore starting from scan 1) and the
scan index?  If not, address those and re-test or find out if those
are important for MSGF-db.
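For context: the index at the bottom of an mzXML file maps each scan number to the byte offset of its <scan> element, plus an <indexOffset> pointing at the index itself, so deleting scans invalidates every offset. A rough sketch of recomputing the offsets from the serialized text (assumptions: single-byte encoding and this simplified attribute layout; a dedicated tool such as the TPP's indexmzXML is the safer route):

```python
import re

def scan_offsets(mzxml_text):
    """Map scan number -> byte offset of its <scan> element in the text."""
    return {int(m.group(1)): m.start()
            for m in re.finditer(r'<scan\s+num="(\d+)"', mzxml_text)}

# Each entry would feed one <offset id="N">POS</offset> line in a rebuilt
# <index name="scan"> element; <indexOffset> must then hold the byte
# position of the rebuilt '<index' tag.
```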

On Thu, Jun 14, 2012 at 10:02 AM, Ben Temperton btemper...@gmail.com wrote:
 Hi there,

 I am trying to pull out a subset of data from an mzXML file to run against a
 database using MSGF-db (for instance, to re-run any non matching spectra
 against the database searching for phosphorylation). To generate the subset
 I am currently using:

 import lxml.etree as le
 SASHIMI_NAMESPACE =
 'http://sashimi.sourceforge.net/schema_revision/mzXML_3.1'

 def makeHQSpectraFile(spectraFile, spectraList, outputFile):
     """Takes a spectra file, a list of scan ids to include and an output
 file as parameters"""
     with open(spectraFile,'r') as f:
         doc=le.parse(f)
         root = doc.getroot()
         for elem in doc.xpath('/t:mzXML/t:msRun/t:scan', namespaces={'t' :
 SASHIMI_NAMESPACE}):
             if not elem.attrib['num'] in spectraList:
                 parent=elem.getparent()
                 parent.remove(elem)
         for elem in doc.xpath('/t:mzXML/t:index/t:offset', namespaces={'t' :
 SASHIMI_NAMESPACE}):
             if not elem.attrib['id'] in spectraList:
                 parent=elem.getparent()
                 parent.remove(elem)
     handle = open(outputFile, 'wb')
     handle.write(le.tostring(doc) + '\n')
     handle.close()

 However, when I run MSGF-db on the new file it throws a:

 Reading spectra...
 javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
 Message: Premature end of file.
 at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(Unknown
 Source)
 at
 org.systemsbiology.jrap.stax.IndexParser.parseIndexes(IndexParser.java:176)
 at
 org.systemsbiology.jrap.stax.MSXMLParser.randomInits(MSXMLParser.java:117)
 at org.systemsbiology.jrap.stax.MSXMLParser.init(MSXMLParser.java:134)
 at parser.MzXMLSpectraMap.init(MzXMLSpectraMap.java:39)
 at parser.MzXMLSpectraIterator.init(MzXMLSpectraIterator.java:36)
 at parser.MzXMLSpectraIterator.init(MzXMLSpectraIterator.java:26)
 at ui.MSGFDB.runMSGFDB(MSGFDB.java:269)
 at ui.MSGFDB.runMSGFDB(MSGFDB.java:106)
 at ui.MSGFDB.main(MSGFDB.java:82)

 Whilst the original (non-parsed version) works fine. I can't get the
 mzXMLValidator to work on our systems (see post
 here https://groups.google.com/d/msg/spctools-discuss/bAxu-In-ju4/z9_g3mdWSFcJ),
 so I was wondering if anyone else had ever encountered a similar issue and
 had any tips.

 Many thanks,

 Ben

 --
 You received this message because you are subscribed to the Google Groups
 spctools-discuss group.
 To view this discussion on the web visit
 https://groups.google.com/d/msg/spctools-discuss/-/psC3ABG8sNcJ.
 To post to this group, send email to spctools-discuss@googlegroups.com.
 To unsubscribe from this group, send email to
 spctools-discuss+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/spctools-discuss?hl=en.




Re: [spctools-discuss] How to make X!Tandem disregards charge state in mzXML file

2012-04-27 Thread Jimmy Eng
Try re-generating the index at the end of the mzXML file after you
edit out the charge state.  The TPP tools are failing to parse your
mzXML because the scan index is no longer valid.  You can re-generate
the index using indexmzXML which is distributed with the TPP
(c:\inetpub\tpp-bin\indexmzXML.exe).  Simply run it as:

indexmzXML.exe yourfile.mzXML

Tandem doesn't make use of the scan index and simply reads the mzXML
from start to finish which is why it has no problems reading your
edited file.
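The two steps can be combined in a short script; a sketch (the precursorCharge attribute name follows the mzXML schema; the indexmzXML path is the Windows one mentioned above and is an assumption for other installs):

```python
import re
import subprocess

def strip_charge(in_path, out_path):
    """Write a copy of an mzXML with all precursorCharge attributes removed,
    so search engines must consider every charge state.  The copy's scan
    index is then stale and must be regenerated (e.g. with indexmzXML)."""
    with open(in_path) as src:
        text = src.read()
    with open(out_path, "w") as dst:
        dst.write(re.sub(r'\s+precursorCharge="\d+"', '', text))

# then, e.g. on a Windows TPP install:
# subprocess.call([r"c:\Inetpub\tpp-bin\indexmzXML.exe", "edited.mzXML"])
```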

On Fri, Apr 27, 2012 at 7:40 AM, polia...@med.umich.edu
polia...@med.umich.edu wrote:
 Hi,
 I am analyzing some old data from Velos that was acquired with charge
 state screening during acquisition, i.e. charge state is determined
 and stored in .raw file. However, we now want to search this data
 without pre-determined charge state since it is often not accurately
 assigned (this is just Velos without Orbi). Is there a way to make X!
 Tandem disregard charge state in an mzXML? Just removing charge state
 field in mzXML works for X!Tandem but we later have issues with TPP as
 it fails to parse XML,
 Thanks,
 Anton.






Re: [spctools-discuss] question about unconventional triplex SILAC data processing

2012-04-04 Thread Jimmy Eng
As far as I know, the TPP quantification tools currently do not
support triplex SILAC analysis.

On Wed, Apr 4, 2012 at 2:58 AM, zeyu sun szy3...@gmail.com wrote:
 Dear TPP team,
 I'm wondering if I can get help from you to process one of my SILAC data sets.
 My experiment design is a SILAC-like scheme. But unlike the traditional
 SILAC design which contain L-Arg, L-Lys (no isotopes) as the light label,
 and H-Arg(Arg10, with 13C6 15N4) and H-Lys( Lys8 with 13C6 15N2) as the
 heavy label, my triplex design contain L-Arg, L-Lys  (no isotopes) as the
 light label, M-Arg(Arg4, with 15N4) and M-Lys( Lys2 with  15N2) as the
 medium label, H-Arg(Arg10, with 13C6 15N4) and H-Lys( Lys6 with 13C6) as the
 heavy label. I noticed in MS scan spectra, most medium labeled peaks are
 mixed with isotopic peaks from the light labeled precursors, the M and L
 peaks are just 2 or 4 Da away. Is there any way I can deconvolute those
 peaks before database (X!tandem or Mascot) search when I convert .RAW to
 mzXML? does the subsequent TPP workflow support those data format?
 thank you very much!

 SunSun





Re: [spctools-discuss] Semi tryptic searches in X!Tandem?

2012-03-07 Thread Jimmy Eng
Here's the parameter you want:

http://www.thegpm.org/tandem/api/pcsemi.html
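For convenience, that page documents the semi-cleavage switch, which goes into the X!Tandem input file as a note element; a sketch from memory of the GPM docs (verify the label against the page above):

```xml
<note type="input" label="protein, cleavage semi">yes</note>
```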

On Wed, Mar 7, 2012 at 2:49 AM, Amit Yadav amit007thech...@gmail.com wrote:
 Hi

 Does anyone know how to conduct semi-tryptic searches in X!Tandem (native
 or k-score)?

 I know about the refinement mode, but that limits the subsequent searches
 to only those proteins identified in the first pass. Is there a way around
 this, or am I overlooking something?

 Regards,

 Amit Kumar Yadav
 Senior Research Fellow (SRF-CSIR)
 IGIB, New Delhi (India)

 MassWiz Web server
 MassWiz sourceforge project
 MassWiki





Re: [spctools-discuss] 2 libra questions

2012-02-29 Thread Jimmy Eng
Rich,

Since no one else has responded, I'm staring at iTRAQ data I just
processed through Libra today and the intensities reported in the
pepXML file are exactly those that are in the corresponding mzXML.  I
have never seen data where this is not the case, which means something
went wrong somewhere for you.

Regarding your second point, Libra expects the reporter ions to be in
the same spectrum/scan as was used to identify the peptide.  So
notions of HCD-CID pairs of spectra won't work directly with Libra as
it knows nothing of that type of relationship within your spectral
data.  Folks here do the same analysis you're doing, so we put
together a simple little tool which merges these scans together and
spits out merged mgf and mzXML files that we use to search and process
through Libra.  Someone in your lab is already using this but feel
free to email Eva if you want to try the tool yourself; I'm sure
she'll be happy to give you a quick tutorial if you want to stop by.

  https://proteomicsresource.washington.edu/MassSpecUtils.php
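As a toy illustration of the kind of merge such a tool performs (the merge rule here, concatenating the two peak lists and summing peaks that fall within a small m/z tolerance, is an assumption and not the actual tool's algorithm):

```python
# Toy sketch: merge an HCD/CID scan pair into a single spectrum so that
# reporter ions and backbone fragments end up in the same scan.
def merge_scans(peaks_a, peaks_b, tol=0.01):
    """Combine two (mz, intensity) lists; peaks within tol m/z are summed."""
    merged = []
    for mz, inten in sorted(peaks_a + peaks_b):
        if merged and mz - merged[-1][0] <= tol:
            merged[-1] = (merged[-1][0], merged[-1][1] + inten)
        else:
            merged.append((mz, inten))
    return merged

hcd = [(114.11, 800.0), (117.11, 750.0)]   # reporter-ion region (HCD)
cid = [(117.11, 40.0), (500.27, 1200.0)]   # backbone fragments (CID)
print(merge_scans(hcd, cid))
# -> [(114.11, 800.0), (117.11, 790.0), (500.27, 1200.0)]
```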


On Mon, Feb 27, 2012 at 11:11 AM, greener greener_98...@yahoo.com wrote:
 Hi everyone, I have two questions about the iTRAQ quantitation by
 libra.

 (1)  Why are the iTRAQ label intensities of peptides in the Libra
 output files different from the intensities in the native ms/ms
 spectrum?
 (2)  Suppose a single precursor ion triggers two MS/MS scans (an HCD-
 CID pair). However, only one of the two HCD-CID scans enabled the
 peptide and corresponding protein identification; does Libra then use
 that particular scan (HCD) for iTRAQ quantitation?
 Since the Libra output file does not give any information about
 retention time or scan # for the individual quantified peptides, it is
 extremely difficult to know which HCD scan (out of many for a
 precursor ion of a peptide) was used to do the measurement.

 Any thoughts folks have on this are appreciated. Thanks!
 -Rich






Re: [spctools-discuss] running xtandem in parallel on the same fasta file

2012-02-10 Thread Jimmy Eng
Shaun,

Multiple processes reading the same fasta file off an NFS partition
has never really been a performance issue that I've noticed through
the years on different clusters (even pushing a hundred processes
accessing the same fasta).  If this really is a
problem for you, the obvious solution is to store the database on
local disk on each node; I can't imagine any other 'simple' fix.
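A minimal sketch of that "local disk" workaround for a job script (the paths and scratch location are hypothetical; adapt them to your cluster):

```shell
# Stage the shared fasta onto per-node scratch before searching, so the
# parallel X!Tandem instances read from local disk instead of NFS.
DB=${DB:-/nfs/databases/uniprot_sprot.fasta}   # shared NFS copy (hypothetical path)
LOCAL_DB=/tmp/$(basename "$DB")

# Copy once per node; later jobs on the same node reuse the local copy.
if [ -f "$DB" ] && [ ! -f "$LOCAL_DB" ]; then
    cp "$DB" "$LOCAL_DB"
fi

# Point the X!Tandem taxonomy file at $LOCAL_DB before launching the search.
echo "search database: $LOCAL_DB"
```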

On Fri, Feb 10, 2012 at 9:46 AM, naturofix shaungarn...@gmail.com wrote:
 I've installed TPP 4.5.1 on a linux computer cluster, which enables
 parallel processing of numerous files.

 Unfortunately this does not work with X!Tandem, since all instances are
 trying to access the same fasta file.
 Only a very few of the processes can run at any one time.

 Is there a simple way around this?

 The cluster uses a NFS file system, which cannot be changed at the
 moment.

 Thanks
 Shaun






Re: [spctools-discuss] does XPRESS support fixed and variable modifications?

2012-02-10 Thread Jimmy Eng
Cyrus,

For XPRESS, minimally you just need to specify:
- which residues are labeled
- the mass difference between the light and heavy forms of the label

We have the precursor m/z of the identified peptide and can calculate
the precursor m/z of the isotopic pair just by knowing which residues
are labeled and what the mass difference is between the light and
heavy label.
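The pair calculation described above can be sketched as follows; the peptide sequence and precursor values are hypothetical examples, not from this thread:

```python
# Sketch: given the light precursor m/z, compute the heavy partner's m/z
# from the labeled residue count and the light/heavy mass difference.
def heavy_mz(light_mz, charge, peptide, label_residue="K",
             mass_diff=6.02012902):
    """Precursor m/z of the heavy partner of an identified light peptide."""
    n = peptide.count(label_residue)
    return light_mz + n * mass_diff / charge

# A 2+ peptide with one labeled lysine (hypothetical example):
print(round(heavy_mz(500.0, 2, "PEPTIDEK"), 4))   # -> 503.0101
```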

So I think you've got it right: K as the labeled residue and
6.02012902 as the mass difference.  The notion of needing info on the
fixed and variable modifications used to perform the search is just
not pertinent for quantification by XPRESS.

- Jimmy

On Fri, Feb 10, 2012 at 12:09 PM, Cyrus Khambatta
xmasevenoo...@gmail.com wrote:
 Hi,

 I have a quick question about quantification of SILAC samples using
 13C6-Lysine as my heavy isotope.
 I also tried reading through previous threads, and couldn't find a
 direct answer to this question.

 My questions:
 (1) Does XPRESS support either fixed or variable modifications?
 (2) I can see that ASAPRatio does, but I can't get that tool to work
 properly.  What am I doing wrong?


 FOR XPRESS ANALYSIS

 Fixed Modifications:
 Carbamidomethylation(C), Monoisotopic mass = 160.0306487

 Variable Modifications:
 Oxidized Methionine(M), Monoisotopic mass = 147.035
 Pyroglutamic Acid(E), Monoisotopic mass = 112.0160439

 I'm using 13-C6-Lysine as my heavy isotope, Monoisotopic mass =
 134.115092

 The settings I'm using in XPRESS:

 Change XPRESS mass tolerance: 0.1
 Change XPRESS residue mass difference: K, 6.02012902

 How do I input the fixed and/or variable modifications listed above?

 FOR ASAP RATIO ANALYSIS

 Change labeled residues to: K
 m/z range to include in summation of peak: 0.05
 Specified residue mass 1: M, 147.035
 Specified residue mass 2: C, 160.0306487
 Specified residue mass 3: E, 112.0160439

 Results:
 (1) XPRESS quantifies a ratio of heavy/light and gives me a ratio,
 however this analysis does not take into account any fixed or variable
 modifications discussed above.
 (2) ASAPRatio result is always -1 +/- -1 and no chromatogram is
 displayed when I click on that link.

 Any ideas what I may be doing wrong?

 Thank you VERY MUCH for your help,
 Cyrus






Re: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

2011-10-26 Thread Jimmy Eng
Eileen,

I see the problem but I don't have a fix for your issue with that CGI
program that results from clicking on the protein xpress ratio.
There's a process that translates XML data to the resulting web page
that you were hoping to see when you click that link.  The TPP uses a
program xsltproc to transform a pep.xml file to list out the search
entries that were quantified for a particular protein.  On my linux
box, it took its time but ran through fine on your data.  On my
windows PC, that xsltproc process used up to 2GB of memory and failed
with an out of memory error.  I don't know if this out of memory error
is due to my 32-bit PC being low on memory (it has just 3GB of RAM) or
if it hit a 2GB memory cap for 32-bit applications.

At this point, I guess I'd encourage you to simply use ASAPRatio.  Or
attempt to run the TPP on a 64-bit box and possibly patch
c:\inetpub\tpp-bin\xsltproc.exe to access up to 4GB ram (see
http://bit.ly/o3fJTo).  I don't know if the latter will work or not as
I don't use 64-bit Windows here.

This belongs in the developers forum, but does anyone know if there's a
good alternative to xsltproc?  I was able to manually do the transformation
using Xalan and memory usage was much less but it was slow.  The
underlying pep.xml file being transformed is 338MB which caused xerces
to use up to 2GB.  On my linux box, xerces consumed 2.1GB when it ran
to completion; it failed at just past 2.05GB on Windows.

- Jimmy

On Thu, Oct 20, 2011 at 2:14 PM, Eileen Yue y...@ohsu.edu wrote:
 Hi Jimmy:
 After I tested one fraction, I ran TPP on my 12 fractions together to
 analyze the XPRESS and ASAPRatio ratios. They ran smoothly and completed
 after two days.

 There is a similar problem as before, but without any error message: I
 clicked the protein XPRESS ratio to see all the peptide XPRESS ratio
 information, and it only showed:

 -- TPP v4.5 RAPTURE rev 0, Build 201109211427 (MinGW) --
 XPressCGIProteinDisplay (TPP v4.5 RAPTURE rev 0, Build 201109211427 (MinGW))
 XPRESS ratio in ProteinProphet: light:heavy  (1.29 ± 0.46)  
 sp|A0AV96|RBM47_HUMAN 6 entries used                                 1.29:1.00
                                 1.00:0.77
 c:/Inetpub/wwwroot/ISB/data/ISL1026/isl_er/interact.pep.xml
 
 Click RELOAD in your browser to re-evaluate ratio if changes have been made 
 to the XPRESS quantitation in any of the entries above. This will update what 
 you see in the current ratio just above.
 To accept the current ratio back to ProteinProphet, click on the button below.


 But when I checked the asap ratio, everything is fine.

 Do you know how I solve this problem?

 Thank you
 Eileen


 
 From: spctools-discuss@googlegroups.com [spctools-discuss@googlegroups.com] 
 On Behalf Of Eileen Yue [y...@ohsu.edu]
 Sent: Tuesday, October 18, 2011 3:22 PM
 To: spctools-discuss@googlegroups.com
 Subject: RE: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

 Hi Jimmy:
 Thank you so much and it works! I just tried one fraction and it works 
 perfectly!

 So appreciate all of you for your kind help!

 Eileen
 
 From: spctools-discuss@googlegroups.com [spctools-discuss@googlegroups.com] 
 On Behalf Of Jimmy Eng [jke...@gmail.com]
 Sent: Tuesday, October 18, 2011 1:41 PM
 To: spctools-discuss@googlegroups.com
 Subject: Re: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

 Eileen,

 Can you download the following executable and update your TPP binary
 with this one:

 http://proteomicsresource.washington.edu/dist/XPressPeptideUpdateParser.cgi

 The files should go in c:\inetpub\tpp-bin\.  Let me know if this fix
 does not work for the problem you're experiencing.

 - Jimmy

 On Tue, Oct 18, 2011 at 10:51 AM, Eileen Yue y...@ohsu.edu wrote:
 Hi Joe:
 I did convert the mzML files. The problem is that the error shows up when
 I click the XPRESS ratio in the prot.xml files. If I open the pep.shtml
 and click the XPRESS ratio, it is fine and shows me the chromatography
 information for this peptide. This does not work if I open the prot.shtml
 and click the XPRESS ratio. The ASAP ratio is fine and I can open them
 and see the chromatography information.

 Thanks
 Eileen



Re: [spctools-discuss] Re: Problem with XPressPeptideParser

2011-10-21 Thread Jimmy Eng
Thank Patrick Pedrioli; he was the one who fixed the bug!

On Fri, Oct 21, 2011 at 10:59 AM, Ping yanpp...@gmail.com wrote:
 Thank you so much Jimmy. It works now!!

 On Oct 20, 2:18 pm, Jimmy Eng jke...@gmail.com wrote:
 There's been a fix checked in to the Sashimi subversion repository
 that addresses a known issue with XPressPeptideParser.  Hopefully that
 fix addresses the issue you're seeing.  If not, let me know.

 Since you're using linux, build XPressPeptideParser from either the
 trunk or branch4-5 code.

 On Thu, Oct 20, 2011 at 2:04 PM, Ping yanpp...@gmail.com wrote:
  Hi,

   I installed TPP 4.5 on an Ubuntu 10.04 system. When I run
   XPressPeptideParser on either an X!Tandem or OMSSA search result
   pep.xml, I get this error:

  *** glibc detected *** /usr/local/src/trans_proteomic_pipeline/build/
  Ubuntu-x86_64/XPressPeptideParser: free(): invalid next size (fast):
  0x01b25640 ***
   ======= Backtrace: =========
  /lib/libc.so.6(+0x775b6)[0x7f8c406735b6]
  /lib/libc.so.6(cfree+0x73)[0x7f8c40679e83]
  /usr/local/src/trans_proteomic_pipeline/build/Ubuntu-x86_64/
  XPressPeptideParser[0x41af70]
  /usr/local/src/trans_proteomic_pipeline/build/Ubuntu-x86_64/
  XPressPeptideParser[0x43f1bf]
  ...
  /lib/libc.so.6(__libc_start_main+0xfd)[0x7f8c4061ac4d]
  /usr/local/src/trans_proteomic_pipeline/build/Ubuntu-x86_64/
  XPressPeptideParser[0x405ef9]

   I ran XPressPeptideParser on 32 files in total, and 10 of them hit the
   error. The others are fine. I debugged the code, and it seems to be
   something with the mzXML file.

  Can anyone help?

  Thanks,

  Ping








Re: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

2011-10-21 Thread Jimmy Eng
Eileen,

I don't see this problem with my test dataset but that doesn't mean
much because it's a different dataset running on a different system.
If you send me your files (zip up all of the interact* files and email
them to me), I'll take a look.  That XPressCGIProteinDisplay isn't
something I wrote, so have low expectations that I can identify and fix
the problem you're seeing. :)

- Jimmy

On Thu, Oct 20, 2011 at 2:14 PM, Eileen Yue y...@ohsu.edu wrote:
 Hi Jimmy:
 After I tested one fraction, I ran TPP on my 12 fractions together to
 analyze the XPRESS and ASAPRatio ratios. They ran smoothly and completed
 after two days.

 There is a similar problem as before, but without any error message: I
 clicked the protein XPRESS ratio to see all the peptide XPRESS ratio
 information, and it only showed:

 -- TPP v4.5 RAPTURE rev 0, Build 201109211427 (MinGW) --
 XPressCGIProteinDisplay (TPP v4.5 RAPTURE rev 0, Build 201109211427 (MinGW))
 XPRESS ratio in ProteinProphet: light:heavy  (1.29 ± 0.46)  
 sp|A0AV96|RBM47_HUMAN 6 entries used                                 1.29:1.00
                                 1.00:0.77
 c:/Inetpub/wwwroot/ISB/data/ISL1026/isl_er/interact.pep.xml
 
 Click RELOAD in your browser to re-evaluate ratio if changes have been made 
 to the XPRESS quantitation in any of the entries above. This will update what 
 you see in the current ratio just above.
 To accept the current ratio back to ProteinProphet, click on the button below.


 But when I checked the asap ratio, everything is fine.

 Do you know how I solve this problem?

 Thank you
 Eileen


 
 From: spctools-discuss@googlegroups.com [spctools-discuss@googlegroups.com] 
 On Behalf Of Eileen Yue [y...@ohsu.edu]
 Sent: Tuesday, October 18, 2011 3:22 PM
 To: spctools-discuss@googlegroups.com
 Subject: RE: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

 Hi Jimmy:
 Thank you so much and it works! I just tried one fraction and it works 
 perfectly!

 So appreciate all of you for your kind help!

 Eileen
 
 From: spctools-discuss@googlegroups.com [spctools-discuss@googlegroups.com] 
 On Behalf Of Jimmy Eng [jke...@gmail.com]
 Sent: Tuesday, October 18, 2011 1:41 PM
 To: spctools-discuss@googlegroups.com
 Subject: Re: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

 Eileen,

 Can you download the following executable and update your TPP binary
 with this one:

 http://proteomicsresource.washington.edu/dist/XPressPeptideUpdateParser.cgi

 The files should go in c:\inetpub\tpp-bin\.  Let me know if this fix
 does not work for the problem you're experiencing.

 - Jimmy

 On Tue, Oct 18, 2011 at 10:51 AM, Eileen Yue y...@ohsu.edu wrote:
 Hi Joe:
 I did convert the mzML files. The problem is that the error shows up when
 I click the XPRESS ratio in the prot.xml files. If I open the pep.shtml
 and click the XPRESS ratio, it is fine and shows me the chromatography
 information for this peptide. This does not work if I open the prot.shtml
 and click the XPRESS ratio. The ASAP ratio is fine and I can open them
 and see the chromatography information.

 Thanks
 Eileen









Re: [spctools-discuss] Problem with XPressPeptideParser

2011-10-20 Thread Jimmy Eng
There's been a fix checked in to the Sashimi subversion repository
that addresses a known issue with XPressPeptideParser.  Hopefully that
fix addresses the issue you're seeing.  If not, let me know.

Since you're using linux, build XPressPeptideParser from either the
trunk or branch4-5 code.


On Thu, Oct 20, 2011 at 2:04 PM, Ping yanpp...@gmail.com wrote:
 Hi,

 I installed TPP 4.5 on an Ubuntu 10.04 system. When I run
 XPressPeptideParser on either an X!Tandem or OMSSA search result
 pep.xml, I get this error:

 *** glibc detected *** /usr/local/src/trans_proteomic_pipeline/build/
 Ubuntu-x86_64/XPressPeptideParser: free(): invalid next size (fast):
 0x01b25640 ***
 ======= Backtrace: =========
 /lib/libc.so.6(+0x775b6)[0x7f8c406735b6]
 /lib/libc.so.6(cfree+0x73)[0x7f8c40679e83]
 /usr/local/src/trans_proteomic_pipeline/build/Ubuntu-x86_64/
 XPressPeptideParser[0x41af70]
 /usr/local/src/trans_proteomic_pipeline/build/Ubuntu-x86_64/
 XPressPeptideParser[0x43f1bf]
 ...
 /lib/libc.so.6(__libc_start_main+0xfd)[0x7f8c4061ac4d]
 /usr/local/src/trans_proteomic_pipeline/build/Ubuntu-x86_64/
 XPressPeptideParser[0x405ef9]

 I ran XPressPeptideParser on 32 files in total, and 10 of them hit the
 error. The others are fine. I debugged the code, and it seems to be
 something with the mzXML file.

 Can anyone help?

 Thanks,

 Ping







Re: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

2011-10-18 Thread Jimmy Eng
Eileen,

Thanks for documenting this problem; I just committed a fix to address
the issue.  I'll send you a link to download a replacement windows
binary in a separate email offlist as soon as I get a chance to build
it on my windows box.

- Jimmy

On Tue, Oct 18, 2011 at 10:51 AM, Eileen Yue y...@ohsu.edu wrote:
 Hi Joe:
 I did convert the mzML files. The problem is that the error shows up when
 I click the XPRESS ratio in the prot.xml files. If I open the pep.shtml
 and click the XPRESS ratio, it is fine and shows me the chromatography
 information for this peptide. This does not work if I open the prot.shtml
 and click the XPRESS ratio. The ASAP ratio is fine and I can open them
 and see the chromatography information.

 Thanks
 Eileen




Re: [spctools-discuss] ASAP Ratiopeptideparse.exe-application error

2011-10-18 Thread Jimmy Eng
Eileen,

Can you download the following executable and update your TPP binary
with this one:

http://proteomicsresource.washington.edu/dist/XPressPeptideUpdateParser.cgi

The files should go in c:\inetpub\tpp-bin\.  Let me know if this fix
does not work for the problem you're experiencing.

- Jimmy

On Tue, Oct 18, 2011 at 10:51 AM, Eileen Yue y...@ohsu.edu wrote:
 Hi Joe:
 I did convert the mzML files. The problem is that the error shows up when
 I click the XPRESS ratio in the prot.xml files. If I open the pep.shtml
 and click the XPRESS ratio, it is fine and shows me the chromatography
 information for this peptide. This does not work if I open the prot.shtml
 and click the XPRESS ratio. The ASAP ratio is fine and I can open them
 and see the chromatography information.

 Thanks
 Eileen




Re: [spctools-discuss] convert to mzxml from raw fails

2011-10-13 Thread Jimmy Eng
I would hazard a guess that this spctools-discuss post is relevant
for you:  http://bit.ly/ogXwK0

2011/10/13 Goran Mitulović gox...@gmail.com:
 Hello,
 I am trying to convert a Thermo .raw file to mzxml but the operation fails
 with the following message:

 # Commands for session CHRRESHT3 on Thu Oct 13 10:50:16 2011
 # BEGIN COMMAND BLOCK
 ## BEGIN Command Execution ##
 [Thu Oct 13 10:50:16 2011] EXECUTING: c:\Inetpub\tpp-bin\ReAdW -v --mzXML
 c:/Inetpub/wwwroot/ISB/data/Brain/Hirn_IA_6hrs.raw
 OUTPUT:
 The system cannot find message text for message number 0x2331 in the message
 file for Application.
 END OUTPUT
 RETURN CODE:256
 ## End Command Execution ##
 # All finished at Thu Oct 13 10:50:16 2011
 # END COMMAND BLOCK

 This happens after I have installed TPP v4.5 RAPTURE rev 0, Build
 201109211427 (MinGW).
 Can somebody help?

 --
 Goxy






Re: [spctools-discuss] XPRESS wrongly assigns heavy and light for some peptides

2011-09-06 Thread Jimmy Eng
Oliver,

If you can make a dataset available (ideally a single raw file in xml
format and corresponding pep.xml), I can take a look.

- Jimmy

On Tue, Sep 6, 2011 at 5:11 AM, oschill...@gmail.com
oschill...@gmail.com wrote:
 Dear all,

 in some cases, XPRESS wrongly assigns the heavy and light states for
 some peptides. This of course results in inverted, wrong ratios.
 Heavy and light are correctly assigned by ASAPRatio and verified by
 checking with Pep3D.
 We are using differential formaldehyde labeling. This problem occurs
 with both static and variable modifications for heavy and light
 dimethylation. It occurs for a few peptides (not all), and we cannot
 see any regularity.

 I would greatly appreciate some assistance!

 Thanks

 Oliver







Re: [spctools-discuss] Re: XPRESS discrepancy between ProtXML and PepXML viewers and XPRESS viewer Options

2011-08-07 Thread Jimmy Eng
Yes, it'll be implemented some day, sooner rather than later, just to
avoid this confusion for future users.  Sorry you have to deal with the
mismatched ratios for the time being.

On Sat, Aug 6, 2011 at 8:15 PM, Oded oded.kleif...@gmail.com wrote:
 Hi Jimmy,
 Yes indeed it is the 13C isotope peaks option (-p). If I set it to 0
 everything seems to be OK.
 Are there any plans implement it in the XPressPeptideUpdateParser.cgi
 as well?
 Thanks,
 Oded

 On Aug 6, 4:30 am, Jimmy Eng jke...@gmail.com wrote:
 Oded,

 The Xpress values you see in the pep.xml file, which means those
 viewed in the PepXML Viewer, are those that are most correct based on
 your run settings.  What you view in the XPressPeptideUpdateParser.cgi
 ideally should correspond exactly to the same numbers but they don't
 in the one case mentioned previously.  As discussed in this thread,
 the option to sum signal from 13C isotope peaks was never implemented
 in the XPressPeptideUpdateParser.cgi.  So this cgi currently doesn't
 sum isotope peaks in reconstructing chromatograms which accounts for
 the ratio differences if Xpress was run with that option turned on.
 If you don't use that option, I would hope that the Xpress peptide
 ratios that you see are exactly the same across the various tools.  If
 this is not the case, tell me what options/settings you used in
 running Xpress and I'll investigate.

 I didn't test all possible settings but I did just run TPP 4.4.1
 Xpress and can confirm that peptide ratios in PepXML Viewer are
 exactly the same as shown in XPressPeptideUpdateParser.cgi.

 - Jimmy







 On Wed, Aug 3, 2011 at 10:27 PM, Oded oded.kleif...@gmail.com wrote:
  Dear all,
   I am using TPP for SILAC analysis of Orbitrap X!Tandem data (with K+8/
   R+10).
  I noticed some differences between the peptide Xpress values shown
  thorough ProtXML viewer (XPressCGIProteinDisplayParser.cgi) and pepXML
  viewer (PepXMLViewer.cgi) and those that appear in
  XPressPeptideUpdateParser.cgi (which I assume are the correct ones).
   These differences are usually not that big (i.e. 1-5%), but in some
   cases can be totally off (e.g. 0.1 vs 0.25).
  I have got similar outputs with TPP 4.4.1 on a Mac (OS 10.6) and Win
  XP.
   I should mention that I run it all through the GUI.
  Any idea how to overcome it?
  Many thanks,
  Oded

  -- Forwarded message --
  From: Jimmy Eng jke...@gmail.com
  Date: Dec 21 2010, 8:16 am
  Subject: XPRESS discrepancy between PepXML viewer and XPRESS viewer
  To: spctools-discuss

  Oliver,

   I finally had a chance to revert to 4.3.1 on two machines (linux and
   windows desktop), run XPRESS on an old ICAT dataset, and view the
   ratios using the new 4.4.1 XPressUpdateParser.cgi.  On both systems I
   don't see the inconsistent ratios being reported for this dataset.
   Then I found some Orbi SILAC datasets which were run under 4.3.1.
   Viewing the ratios and chromatograms using the current 4.4.1 cgi viewer
   shows the exact same ratios as calculated by 4.3.1 XPRESS.

  At this point, I can't replicate the discrepancy you're seeing.  My
  advice would be to run your analysis again and see if the discrepancy
  remains.  If you still see the problem, isolate a small dataset
  (single lcms run) and send it to me (mzXML, pep.xml) along with
  yourXPRESSrun parameters.

  - Jimmy

  On Mon, Dec 13, 2010 at 10:02 AM, oschill...@gmail.com

  oschill...@gmail.com wrote:
  Sorry for my late reply: the discrepancy occurs when viewing a TPP 4.3
  analysis with TPP 4.4. We have now rolled back to TPP 4.3 to keep our
 XPRESSanalysis consistent. Any advice on how to proceed in the
  future?

  Thanks

  Oliver

  On Nov 23, 5:46 pm, Jimmy Eng jke...@gmail.com wrote:
  Oliver,

  What parameters did you use to runXPRESS?  The GUI showing elution
  profiles has no current support for the isotope option (summed
  intensities of first N isotope peaks) but otherwise should return the
  same ratios as that shown in the pepXML file.

  - Jimmy

  On Mon, Nov 22, 2010 at 5:09 AM, oschill...@gmail.com

  oschill...@gmail.com wrote:
   Dear TPP community,

   we notice a small discrepancy between theXPRESSvalues displayed in
   the PepXML viewer tab (table withpeptidesequences etc) and the
  XPRESStab (graphic display of elution peak). For almost all peptides,
   we observe slightly differentXPRESSvalues in both tabs. For example,
   apeptidehas anXPRESSvalue of 2.32:1 in the PepXML viewer and
   2.27:1 in theXPRESSviewer.

   Our impression is that this discrepancy occurs as of TPP 4.4.1 and
   does not occur for TPP 4.3.x.

   Can anyone please advice us on how to proceed here?

   Thanks a lot

   Oliver

   --
   You received this message because you are subscribed to the Google 
   Groups spctools-discuss group.
   To post to this group, send email to spctools-discuss@googlegroups.com.
   To unsubscribe from this group, send email to 
   spctools-discuss+unsubscr...@googlegroups.com.
   For more options, visit this group 
   athttp://groups.google.com

Re: [spctools-discuss] XPRESS discrepancy between ProtXML and PepXML viewers and XPRESS viewer Options

2011-08-05 Thread Jimmy Eng
Oded,

The Xpress values you see in the pep.xml file, i.e. those viewed in
the PepXML Viewer, are the most correct ones based on your run
settings.  What you view in XPressPeptideUpdateParser.cgi should
ideally correspond exactly to those numbers, but it doesn't in the
one case mentioned previously.  As discussed in this thread,
the option to sum signal from 13C isotope peaks was never implemented
in the XPressPeptideUpdateParser.cgi.  So this cgi currently doesn't
sum isotope peaks in reconstructing chromatograms which accounts for
the ratio differences if Xpress was run with that option turned on.
If you don't use that option, I would hope that the Xpress peptide
ratios that you see are exactly the same across the various tools.  If
this is not the case, tell me what options/settings you used in
running Xpress and I'll investigate.

I didn't test all possible settings but I did just run TPP 4.4.1
Xpress and can confirm that peptide ratios in PepXML Viewer are
exactly the same as shown in XPressPeptideUpdateParser.cgi.

- Jimmy

On Wed, Aug 3, 2011 at 10:27 PM, Oded oded.kleif...@gmail.com wrote:
 Dear all,
 I am using TPP for SILAC analysis of Orbitrap X!Tandem data (with
 K+8/R+10).
 I noticed some differences between the peptide Xpress values shown
 through the ProtXML viewer (XPressCGIProteinDisplayParser.cgi) and the
 pepXML viewer (PepXMLViewer.cgi) and those that appear in
 XPressPeptideUpdateParser.cgi (which I assume are the correct ones).
 These differences are usually small (e.g. 1-5%), but in some cases
 they can be totally off (e.g. 0.1 vs 0.25).
 I have gotten similar outputs with TPP 4.4.1 on a Mac (OS 10.6) and
 Windows XP.
 I should mention that I run it all through the GUI.
 Any idea how to overcome it?
 Many thanks,
 Oded


 -- Forwarded message --
 From: Jimmy Eng jke...@gmail.com
 Date: Dec 21 2010, 8:16 am
 Subject: XPRESS discrepancy between PepXML viewer and XPRESS viewer
 To: spctools-discuss


 Oliver,

 I finally had a chance to revert to 4.3.1 on two machines (linux &
 windows desktop), run XPRESS on an old ICAT dataset, and view the
 ratios using the new 4.4.1 XPressUpdateParser.cgi.  On both systems I
 don't see the inconsistent ratios being reported for this dataset.
 Then I found some Orbi SILAC datasets which were run under 4.3.1.
 Viewing the ratios & chromatograms using the current 4.4.1 cgi viewer
 shows the exact same ratios as calculated by 4.3.1 XPRESS.

 At this point, I can't replicate the discrepancy you're seeing.  My
 advice would be to run your analysis again and see if the discrepancy
 remains.  If you still see the problem, isolate a small dataset
 (single LC-MS run) and send it to me (mzXML, pep.xml) along with
 your XPRESS run parameters.

 - Jimmy

 On Mon, Dec 13, 2010 at 10:02 AM, oschill...@gmail.com







 oschill...@gmail.com wrote:
 Sorry for my late reply: the discrepancy occurs when viewing a TPP 4.3
 analysis with TPP 4.4. We have now rolled back to TPP 4.3 to keep our
XPRESS analysis consistent. Any advice on how to proceed in the
 future?

 Thanks

 Oliver

 On Nov 23, 5:46 pm, Jimmy Eng jke...@gmail.com wrote:
 Oliver,

 What parameters did you use to run XPRESS?  The GUI showing elution
 profiles has no current support for the isotope option (summed
 intensities of first N isotope peaks) but otherwise should return the
 same ratios as that shown in the pepXML file.

 - Jimmy

 On Mon, Nov 22, 2010 at 5:09 AM, oschill...@gmail.com

 oschill...@gmail.com wrote:
  Dear TPP community,

  we notice a small discrepancy between the XPRESS values displayed in
  the PepXML viewer tab (table with peptide sequences etc.) and the
  XPRESS tab (graphic display of the elution peak). For almost all peptides,
  we observe slightly different XPRESS values in both tabs. For example,
  a peptide has an XPRESS value of 2.32:1 in the PepXML viewer and
  2.27:1 in the XPRESS viewer.

  Our impression is that this discrepancy occurs as of TPP 4.4.1 and
  does not occur for TPP 4.3.x.

  Can anyone please advise us on how to proceed here?

  Thanks a lot

  Oliver

  --
  You received this message because you are subscribed to the Google Groups 
  spctools-discuss group.
  To post to this group, send email to spctools-discuss@googlegroups.com.
  To unsubscribe from this group, send email to 
  spctools-discuss+unsubscr...@googlegroups.com.
  For more options, visit this group 
  at http://groups.google.com/group/spctools-discuss?hl=en.



Re: [spctools-discuss] TPP install in centos (linux)

2011-07-27 Thread Jimmy Eng
Looks like you are missing libgd and its development headers (libgd-devel).
Install them with a command like:  yum install gd gd-devel
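As a quick sanity check before rebuilding, you can look for the gd development header yourself. This is a sketch only; the header path and package names below are typical for CentOS/RHEL and may differ on other distributions:

```shell
# Check whether the libgd development header that the TPP build needs is present.
# /usr/include is the usual location on CentOS; adjust for your distro.
if [ -e /usr/include/gd.h ]; then
    echo "gd.h present - the build should get past these errors"
else
    echo "gd.h missing - install it first, e.g.: sudo yum install gd gd-devel"
fi
```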


On Wed, Jul 27, 2011 at 7:14 AM, tiantian wenbos...@gmail.com wrote:
 Hi all:
       When I install TPP on Linux (CentOS), I get errors like those below:
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:46:16: error: gd.h: No such file or directory
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:47:21: error: gdfonts.h: No such file or directory
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx: In function 'void MAKE_PLOT(int, int, int, int, int, int, int, double, double, double*, double*, double*, double*, char*, QuanStruct*)':
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:796: error: 'gdImagePtr' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:796: error: expected `;' before 'gdImageLight'
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:803: error: 'gdImageLight' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:803: error: 'gdImageCreate' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:804: error: 'gdImageColorAllocate' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:811: error: 'gdImageHeavy' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:821: error: 'gdFontSmall' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:821: error: 'gdImageString' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:871: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:873: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:879: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:881: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:907: error: 'gdImageSetPixel' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:909: error: 'gdImageSetPixel' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:912: error: 'gdImageSetPixel' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:914: error: 'gdImageSetPixel' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:935: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:944: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:961: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:971: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:989: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:999: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1011: error: 'gdImageLine' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1018: error: 'gdImageRectangle' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1121: error: 'gdImageInterlace' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1126: error: 'gdImagePngPtr' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1134: error: 'gdImageDestroy' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1144: error: 'gdImagePngPtr' was not declared in this scope
 Quantitation/XPress/XPressPeptideUpdateParser/XPressPeptideUpdateParserMain.cxx:1152: error: 'gdImageDestroy' was not declared in this scope

     Can anyone tell me how I should solve this problem?
     Best wishes!

 tiantian


Re: [spctools-discuss] Converting Thermo (LTQ velos) to mzXML - adding precursor charges

2011-07-11 Thread Jimmy Eng
Greg, the precursorCharge attribute is only present for data where the
charge state is present in the raw file.  This usually is true for
FT/Orbi data but not normally with LTQ/Velos data.

Robert, the real question is what precursor charge would you want to
be associated with each ms/ms spectrum for your LTQ Velos spectra?
Since there's no precursor charge in the underlying RAW file, the
converters won't make them up. It's not inconceivable to implement
charge state guessing rules, like those implemented in Tandem, but I'm
not sure this is going to happen for many reasons.
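One way to confirm what's described above is to grep the converted mzXML for the precursorCharge attribute. The fragment below is a made-up stand-in for a real file, used only to illustrate the check:

```shell
# Tiny stand-in mzXML fragment: one ms/ms scan with a recorded precursor
# charge (typical of FT/Orbi data) and one without (typical of LTQ/Velos).
cat > sample.mzXML <<'EOF'
<scan num="1" msLevel="2">
  <precursorMz precursorIntensity="2123.5" precursorCharge="4" activationMethod="CID">1390.82</precursorMz>
</scan>
<scan num="2" msLevel="2">
  <precursorMz precursorIntensity="88.1" activationMethod="CID">445.12</precursorMz>
</scan>
EOF

# ms/ms scans that carry a charge annotation
grep -c 'precursorCharge=' sample.mzXML    # prints 1

# ms/ms scans with no charge recorded (the LTQ/Velos case)
grep 'precursorMz' sample.mzXML | grep -vc 'precursorCharge='    # prints 1
```

Running the same two greps on a real converted file shows immediately whether the converter recorded any charges at all.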

On Mon, Jul 11, 2011 at 9:29 AM, Greg Bowersock bowers...@gmail.com wrote:
 Precursor charge is already in mzXML. If you look at the mzXML files created
 by ReAdW (msconvert should also have it), you should see lines such as:
 <precursorMz precursorIntensity="2123.5" precursorCharge="4"
 activationMethod="CID">1390.82</precursorMz>
 Those lines are part of the ms/ms scan information (ms level = 2). If the
 line isn't there, you might want to check the method on LTQ to make sure
 that Charge State Screening is enabled for the segment in the dependent scan
 settings.
 Greg

 On Mon, Jul 11, 2011 at 11:06 AM, Robert robert.wink...@bioprocess.org
 wrote:

 Hi,

 when converting Thermo (LTQ velos) raw files to mzXML, no precursor
 charges are included in the resulting mzXML. I tried ReAdW (TPP) and
 msconvert (ProteoWizard).

 This is not a problem for X!Tandem searches, but for some uses
 (calculation of mass errors, PRIDE XML repository), the precursor
 charge would be desirable.

 Any suggestions, how the charge could be added to mzXML?

 Best, Robert








Re: [spctools-discuss] Re: analyze peptide by xinteract in TPP

2011-06-28 Thread Jimmy Eng
I'm sure David will update PeptideProphet to recognize the different
search_engine attribute that your pep.xml files contain.  But another
option for you if you want to use PeptideProphet right now is to
actually use the TPP's Sqt2XML program to translate your .sqt files to
.pep.xml.  It's apparent that you're using an mspire tool to do this
conversion now.

On Wed, Jun 22, 2011 at 5:28 PM, Kinfai Au kinfa...@gmail.com wrote:
 I found a function Sqt2XML in TPP 4.4.1, so I converted my SQT files
 to pepXML and started PeptideProphet.




Re: [spctools-discuss] How to set Xpress Options in TPP?

2011-05-04 Thread Jimmy Eng
I would set the Xpress mass tolerance to 1.0 for the LTQ data.

In Sequest, 'peptide mass tolerance' is the mass tolerance for the
calculated peptide mass, which is a function of MS1 precursor scan
accuracy, while 'fragment ion tolerance' is the mass tolerance applied
to the ms/ms fragment ions.  The values you're using are reasonable.
You might consider lowering the fragment mass tolerance to 0.5.  There
is a built-in minimum tolerance due to how the ms/ms peaks are
internally represented so I actually use a 0.0 fragment mass tolerance
in my searches but I don't use the commercial version from Thermo so I
don't know if this 0.0 value would behave the same for you (although
there is a decent chance it would).

On Wed, May 4, 2011 at 1:45 AM, 杨豫鲁 yuluwy...@gmail.com wrote:
 Dear :

 Hi,

 I have some questions about using Xpress from the Trans-Proteomic
 Pipeline for the analysis and validation of peptides and proteins
 labeled by SILAC, after searching the raw files with Sequest (Bioworks
 Browser, Thermo Fisher, San Jose, CA) against the NCBI RefSeq database
 of human sequences.

 The instrument we used was an LC-MS/MS (LTQ, Thermo), whose sensitivity
 is not as good as that of an LTQ-FT or LTQ-Orbitrap. How should we set
 the 'Change Xpress mass tolerance' in the Xpress Options? And what is
 the difference between it and the 'peptide tolerance' and 'fragment ions
 tolerance' in the Sequest search parameters? We set the peptide tolerance
 to 2.0 amu and the fragment ions tolerance to 1.0 amu; are they
 reasonable for our LTQ?

 Please reply as soon as possible.

 All yours

 --
 Yulu Yang
 Ph.D Candidate
 School of Life Science and Biotechnology
 Shanghai Jiao Tong University
 P.R. China
 86-21-34204875(office)




Re: [spctools-discuss] Digestion Program

2011-04-28 Thread Jimmy Eng
I checked in the updated digestdb sources yesterday to the Sashimi
SourceForge repository that will allow you to specify modified
residues.  You can download a compiled windows binary here:
  http://proteomicsresource.washington.edu/dist/digestdb.exe

The command line option to specify the mods is -M.  For example,
   digestdb.exe -M57.0@C,16.0@M file.fasta
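For what it's worth, the -M argument is a comma-separated list of mass@residue pairs. The sketch below just pulls the example spec apart to show that format; it does not run digestdb itself:

```shell
# Split a digestdb-style fixed-modification spec into its mass@residue pairs.
MODS="57.0@C,16.0@M"
echo "$MODS" | tr ',' '\n' | while IFS='@' read -r mass residue; do
    echo "residue $residue: fixed mass shift +$mass"
done
# prints:
#   residue C: fixed mass shift +57.0
#   residue M: fixed mass shift +16.0
```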

On Tue, Apr 26, 2011 at 6:16 AM, Amit Yadav amit007thech...@gmail.com wrote:
 I am using digestdb.exe program that comes with TPP installation. I was
 wondering how to configure fixed modifications.

 Regards,

 Amit Kumar Yadav
 Senior Research Fellow (SRF-CSIR)
 IGIB, New Delhi (India)

 MassWiz Web server
 MassWiz sourceforge project
 MassWiki






Re: [spctools-discuss] Digestion Program

2011-04-26 Thread Jimmy Eng
Amit,

There's no functionality to specify fixed modifications in that
program at this point.  But it's easy to do so I'll add the ability to
do so soon.

On Tue, Apr 26, 2011 at 6:16 AM, Amit Yadav amit007thech...@gmail.com wrote:
 I am using digestdb.exe program that comes with TPP installation. I was
 wondering how to configure fixed modifications.

 Regards,

 Amit Kumar Yadav
 Senior Research Fellow (SRF-CSIR)
 IGIB, New Delhi (India)

 MassWiz Web server
 MassWiz sourceforge project
 MassWiki






Re: [spctools-discuss] interact.pep.xml was not found on this server.

2011-04-13 Thread Jimmy Eng
Also look into the difference between 'Alias' and 'ScriptAlias' in
your apache setting below; I think you want 'Alias' for the document
(data) directory remapping as ScriptAlias is meant for directories
that contain CGI scripts.

If you want to store your results in /tpp/data/, consider mapping that
to <webserver_root>/tpp/data/ instead of <webserver_root>/data/ as I
think you're going to have problems with links in the tools when the
real path (/tpp/data) and the webserver path (/data) are not consistent.
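Putting those two suggestions together, the data-directory stanza might look like the sketch below. This reuses the paths from the message further down and is only an illustration; an Alias serves plain files, while ScriptAlias would try to execute everything under it as CGI:

```apache
SetEnv WEBSERVER_ROOT /usr/local/tpp
# plain data directory: use Alias, not ScriptAlias
Alias /tpp/data /usr/local/tpp/data
<Directory /usr/local/tpp/data>
    AllowOverride None
    Options Indexes FollowSymLinks Includes
    Order allow,deny
    Allow from all
</Directory>
```

Note that the URL prefix /tpp/data now mirrors the on-disk /usr/local/tpp/data layout, which is the path consistency recommended above.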

On Wed, Apr 13, 2011 at 10:37 AM, Joseph Slagel
joseph.sla...@systemsbiology.org wrote:
 Huan,

 What is the actual path and file permissions to the pep.shtml and prot.shtml
 files on your system?

 -Joe


 On Mon, Apr 11, 2011 at 10:42 AM, Huan Wang huanwan...@gmail.com wrote:

 Hi,

 I installed TPP v4.4 VUVUZELA rev 1, Build 201101181419 (linux) on
 Ubuntu 10.04-64bit. After running the tutorial data (raft5051) in
 peptide prophet, I got a Command Successful message. However, when I
 click the link of the output shtml, I got 404 Not Found, The requested
 URL /data/testinteract.pep.xml was not found on this server. I
 checked the path and both pep.shtml and prot.shtml are there. There
 are also corresponding .xml and .xsl files. I can open the generated
 pep.xml file but not prot.xml files (same error).

 My guess is I did not set up apache2 correctly because another copy of
 TPP runs on Windows can display the result without problem.

 My apache2 setting is: /etc/apache2/sites-available/mytpp:
 SetEnv WEBSERVER_ROOT /usr/local/tpp
        # directory to store data for web browser viewing
        ScriptAlias /tpp/data /usr/local/tpp/data
        <Directory /usr/local/tpp/data>
                AllowOverride None
                Options Indexes FollowSymLinks Includes
                Order allow,deny
                Allow from all
        </Directory>

 My /usr/local/tpp/cgi-bin/tpp_gui setting is

 my $www_root    = readconfig('www_root',$ENV{'WEBSERVER_ROOT'}.'/');
 # full path to web server root
 my $data_dir    = readconfig('data_dir',"${www_root}data/");

 They matched to each other. I also tried to change the path to /usr/
 local/tpp/ISB/data, but got the same thing.

 Can anyone help me out? Thanks

 Huan








Re: [spctools-discuss] MassWolf MSe data

2011-04-11 Thread Jimmy Eng
The problem is likely due to MSe spectra not being labeled as MS level
2 scans (ms/ms scans) in the mzXML file.  Look for msLevel=
attribute in the file to see what MS level is assigned to these
spectra.  If I had to guess, they are considered MS level 1 scans and
no distinction is made between low energy and high energy acquisition
modes.
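A quick way to check the msLevel assignment is to tally the attribute across the file. The fragment below fakes the suspected MSe case, where every scan is written as MS level 1 (a real file would of course be much larger):

```shell
# Stand-in for an MSe mzXML where the converter labeled every scan msLevel="1".
cat > mse_sample.mzXML <<'EOF'
<scan num="1" msLevel="1" lowMz="300" highMz="1500"></scan>
<scan num="2" msLevel="1" lowMz="300" highMz="1500"></scan>
<scan num="3" msLevel="1" lowMz="300" highMz="1500"></scan>
EOF

# Tally scans per msLevel; if nothing shows up as msLevel="2",
# an mgf export of ms/ms spectra will contain no peaks.
grep -o 'msLevel="[0-9]*"' mse_sample.mzXML | sort | uniq -c
```

Run the same tally against the real data.mzXML to see whether any scans were labeled as MS level 2 at all.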

The bigger but not quite related question is what do you hope to
accomplish by exporting MSe scans to mgf?  You will need a specialized
search engine like Waters IdentityE in order to analyze that type of
data containing fragments from mixtures of peptides.

I'll also throw in the obligatory suggestion: consider using msconvert
instead of masswolf going forward.

On Mon, Apr 11, 2011 at 12:15 PM, hw hw080...@gmail.com wrote:
 I have trouble converting Masslynx's MSe data files to mzXML and then
 to .mgf with TPP. Here is the command I used:
 masswolf --mzXML --nolockspray --MSe C:\data.raw C:\data.mzXML

 The output file data.mzXML can be viewed with InsilicoView and it
 looked ok. However, when TPP was used to convert data.mzXML to .mgf,
 there were no peaks in it. In MSe mode, low energy was set at 2 and high
 energy was set to 20-30V. I don't have much experience with the Masslynx
 software. Is it possible that some settings in MSe mode were not
 correct? Or is it possible masswolf requires some special settings?
 Any comment is appreciated.

 By the way, I have no problem converting Masslynx's DDA data files to
 mzXML and then to .mgf using TPP. A Mascot search with this .mgf file
 yielded positive results. In processing DDA data, the masswolf command was

 masswolf --mzXML --nolockspray C:\data.raw C:\data.mzXML







Re: [spctools-discuss] Proteome Discoverer's .msf to TPP input files (pepXML)

2011-04-05 Thread Jimmy Eng
Christine,

As far as I know, such a converter does not exist at this point.
Hopefully someone can chime in if that's not the case.

- Jimmy

On Tue, Apr 5, 2011 at 7:33 PM, Christine Vogel vogel@gmail.com wrote:

 Hi All,

 Reposting Thomas' question from earlier -- what is a good converter
 from Thermo's .msf to TPP-suitable files (i.e. pepXML)?

 I tried to find it on the list, and the discussion led me here:

 https://github.com/itmat/thermo_msf/blob/master/README.rdoc#readme

 But I am unable to successfully install/use this.  Any help on this
 converter/installation?

 Thanks a lot,

 Christine


 -

 From: Thomas DV thomas.devijl...@ua.ac.be
 Date: Sun, 5 Dec 2010 08:43:28 -0800 (PST)
 Local: Sun, Dec 5 2010 12:43 pm
 Subject: proteome discoverer 1.1 to pepXML
 Reply to author | Forward | Print | Individual message | Show original
 | Report this message | Find messages by this author
 Hello everybody,

 I'm new to this group and in fact rather new to proteome data analysis
 as a whole.

 We are using Thermo's proteome discoverer at the lab, which gives .msf
 output files.
 I wanted to try the Ascore program to see whether this could help in
 localizing the phosphorylation site in peptides we measured.
 but this requires pepXML input. I've tried to find a converter
 for .msf into pepXML but it doesn't seem to exist, is that right?

 is there any possibility to extract a SEQUEST .out file from within
 proteome discoverer or any work-around to create pepXML from proteome
 discoverer results through other data output formats?

 thanks kindly for any help!
 Thomas

 student, University of Antwerp, Belgium







[spctools-discuss] bioinformatics positions in Memphis TN

2011-03-16 Thread Jimmy Eng
In case anyone out there is interested ...

St. Jude Children’s Research Hospital's proteomics core and the lab of
Junmin Peng currently have two bioinformatics positions available.  If
you think you might be interested and want more information, check out
job #s 20477 and 20478 at https://jobs.stjude.org/CSS_External.




[spctools-discuss] Re: -Etrypsin error and ReAdW error.

2011-03-03 Thread Jimmy Eng
I'm stumped as I can't replicate the error.  Ruby sent me her data and
I had no problems running Out2XML to generate the pep.xml file.  Both
on a Windows box and under Linux.

- Jimmy

On Mar 1, 1:58 pm, Ruby rgund...@jhmi.edu wrote:
 Hi-
 I just downloaded the latest version of TPP this week, so version:
 TPP v4.4 VUVUZELA rev 1, Build 201010121551 (MinGW)

 I'm still really hoping for an answer for the -Etrypsin error. I'm
 stuck until I get that figured out.

 Thanks!!
 Ruby

 On Feb 28, 12:04 pm, Joe Slagel jsla...@systemsbiology.org wrote:







  Hi Ruby,

  For both questions you have, what version of TPP are you using? and I'm
  assuming you are trying to convert the .RAW to mzXML using the petunia
  interface?  (And I'll defer to others on your first question...unless you
  don't get an answer).

  -Joe

  On Sat, Feb 26, 2011 at 6:02 PM, Ruby rubycell7...@gmail.com wrote:
   Hi-
   1) I'm trying to use the TPP to process my Sequest search results.
   (since I have access to Sequest only on a cluster, I'm not able to use
   the TPP to set up the search, which is what I'd like to do). Rather,
   I'm trying to use the .out files for PeptideProphet & ProteinProphet
   processing.

   However, I get this error when I try to process my directory that has
   the .out files from the Sequest search.

   command c:\Inetpub\tpp-bin\Out2XML c:/Inetpub/wwwroot/ISB/data/RG_81_2 1 -Etrypsin failed: Unknown error

   Is this due to the fact that I did a semi-enzyme search? How can I
   work around this? I didn't see a semi-enzyme choice in the 'options'
   section of the convert to pepXML page.

   2) I'd like to do an XTandem search via the TPP. When I try to
   convert .RAW to mzXML, I get this error:
   'c:\Inetpub\tpp-bin\ReAdW' is not recognized as an internal or
   external command, operable program or batch file.

   I thought that TPP wasn't using ReAdW anymore, but rather the
   MSConverter. I have downloaded the MSFileReader, but what parts of
   that go in the tpp-bin directory?? Basically, how do I get it
   configured so that it uses MSConvert for processing the .RAW files?

   Thanks!
   Ruby

   --
   You received this message because you are subscribed to the Google Groups
   spctools-discuss group.
   To post to this group, send email to spctools-discuss@googlegroups.com.
   To unsubscribe from this group, send email to
   spctools-discuss+unsubscr...@googlegroups.com.
   For more options, visit this group at
  http://groups.google.com/group/spctools-discuss?hl=en.

  --
  Joe Slagel
  Institute for Systems Biology
  jsla...@systemsbiology.org
  (206) 732-1362




Re: [spctools-discuss] Re: -Etrypsin error and ReAdW error.

2011-03-01 Thread Jimmy Eng
Ruby,

The problem likely has nothing to do with the -Etrypsin option in the
The problem likely has nothing to do with the -Etrypsin option in the
Out2XML command.  Do you have a bunch of *.out files in the directory
c:\Inetpub\wwwroot\ISB\data\RG_81_2\ ?

Are those files named something like RG_81_2.1.1.1.out?

- Jimmy

On Tue, Mar 1, 2011 at 1:58 PM, Ruby rgund...@jhmi.edu wrote:
 Hi-
 I just downloaded the latest version of TPP this week, so version:
 TPP v4.4 VUVUZELA rev 1, Build 201010121551 (MinGW)

 I'm still really hoping for an answer for the -Etrypsin error. I'm
 stuck until I get that figured out.

 Thanks!!
 Ruby

 On Feb 28, 12:04 pm, Joe Slagel jsla...@systemsbiology.org wrote:
 Hi Ruby,

 For both questions you have, what version of TPP are you using? And I'm
 assuming you are trying to convert the .RAW to mzXML using the petunia
 interface?  (And I'll defer to others on your first question...unless you
 don't get an answer).

 -Joe



 On Sat, Feb 26, 2011 at 6:02 PM, Ruby rubycell7...@gmail.com wrote:
  Hi-
  1) I'm trying to use the TPP to process my Sequest search results.
  (since I have access to sequest only on a cluster, I'm not able to use
  the TPP to setup the search, which is what I'd like to do). Rather,
  I'm trying to use the .out files for peptide & proteinprophet
  processing.

  However, I get this error when I try to process my directory that has
  the .out files from the Sequest search.

  command "c:\Inetpub\tpp-bin\Out2XML c:/Inetpub/wwwroot/ISB/data/
  RG_81_2 1 -Etrypsin" failed: Unknown error

  Is this due to the fact that I did a semi-enzyme search? How can I
  work around this? I didn't see a semi-enzyme choice in the 'options'
  section of the convert to pepXML page.

  2) I'd like to do an XTandem search via the TPP. When I try to
  convert .RAW to mzXML, I get this error:
  'c:\Inetpub\tpp-bin\ReAdW' is not recognized as an internal or
  external command, operable program or batch file.

  I thought that TPP wasn't using ReAdW anymore, but rather the
  MSConverter. I have downloaded the MSFileReader, but what parts of
  that go in the tpp-bin directory?? Basically, how do I get it
  configured so that it uses MSConvert for processing the .RAW files?

  Thanks!
  Ruby


 --
 Joe Slagel
 Institute for Systems Biology
 jsla...@systemsbiology.org
 (206) 732-1362







Re: [spctools-discuss] X!Tandem metabolic labeling with N15

2011-02-17 Thread Jimmy Eng
XPRESS supports N15 metabolic labeling.  I don't think ASAPRatio does
but it's possible that functionality was added recently w/o my
knowledge of it.

You're going to have to run 2 separate searches.  One search will
measure the normal (or light) peptides; no modifications are
specified here.  The other search will measure the heavy peptides;
statically modify each amino acid residue based on the number of
nitrogens.  You will have to do two separate XPRESS analyses for the
light and heavy runs.  Then combine together (say with ProteinProphet,
feeding in the two separate PeptideProphet-ed runs).


On Thu, Feb 17, 2011 at 12:17 PM, GATTACA dfer...@umich.edu wrote:
 Hello.

 A user wants to perform metabolic labelling (N15 labelling) on
 their sample.
 I have two questions:
 1) Can TPP handle N15 metabolic labelling?
 2) How do you specify this modification in X!Tandem?

 Thanks in advance.







Re: [spctools-discuss] processing crux/tide results with tpp

2011-02-02 Thread Jimmy Eng
Andreas,

I didn't say it was going to be easy! :)

Here's an example Sequest pep.xml file:
http://proteomicsresource.washington.edu/dist/sequest.pep.xml

As Matt suggested, the search_score names will need to change
("xcorr_score" to "xcorr" and "delta_cn" to "deltacn").  Additionally,
you'll probably want to add "deltacnstar", "spscore", and "sprank"
score entries.  Use 0.0, 0.0, and 1 respectively for the values for
these additional scores.
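A minimal sketch of that edit as a script, assuming the Crux pep.xml uses standard pepXML attribute quoting (the function name is mine):

```python
def crux_to_sequest_scores(pepxml_text):
    """Rename Crux-style score names to the Sequest names the TPP
    expects, and add the extra Sequest scores (deltacnstar, spscore,
    sprank) with the placeholder values suggested in this thread."""
    out = pepxml_text.replace('name="xcorr_score"', 'name="xcorr"')
    out = out.replace('name="delta_cn"', 'name="deltacn"')
    extra = ('<search_score name="deltacnstar" value="0.0"/>\n'
             '<search_score name="spscore" value="0.0"/>\n'
             '<search_score name="sprank" value="1"/>\n')
    # add the missing scores just before each closing </search_hit>
    return out.replace('</search_hit>', extra + '</search_hit>')
```

Run it over the whole file and compare against the example Sequest pep.xml linked above to confirm the result matches.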

- Jimmy

On Wed, Feb 2, 2011 at 8:42 AM, Matthew Chambers
matt.chamber...@gmail.com wrote:
 Did you try renaming the score names? I expect that there is a 1:1
 relationship between the search engine name and the score names (correct me
 if I'm wrong, TPP folks). If you need a program to do the replace-all, you
 can try Notepad++ or Gvim.

 -Matt


 On 2/2/2011 10:22 AM, Andreas Quandt wrote:

 sure :-)

 originally the spectrum_query looks like

 <spectrum_query spectrum="B08-02057.mzXML.00234.00234.3"

 but after it did not work, I tried B08-02057.00234.00234.3 instead:


 <spectrum_query spectrum="B08-02057.00234.00234.3" start_scan="234"
  end_scan="234"
  precursor_neutral_mass="1332.6912" assumed_charge="3" index="4">
  <search_result>
   <search_hit hit_rank="2" peptide="GAFALTLIGLLLM" peptide_prev_aa="R"
    peptide_next_aa="-"
    protein="DECOY_Q8TE58" num_tot_proteins="1"
    calc_neutral_pep_mass="1332.7074" massdiff="-0.0162"
    num_tol_term="2" num_missed_cleavages="0" is_rejected="0"
    protein_descr="\ID=ATS15_HUMAN \MODRES=
    \VARIANT=(623|N|S)(770|Q|R)(878|C|G) \NCBITAXID=9606 \DE=A disintegrin and
    metalloproteinase with
    thrombospondin motifs 15 (ADAMTS-15) (ADAM-TS 15) (ADAM-TS15)">
    <search_score name="delta_cn" value="0."/>
    <search_score name="xcorr_score" value="-0.04533549"/>
   </search_hit>
  </search_result>
 </spectrum_query>
 <spectrum_query spectrum="B08-02057.00244.00244.5" start_scan="244"
  end_scan="244"
  precursor_neutral_mass="2591.2581" assumed_charge="5" index="5">
  <search_result>
   <search_hit hit_rank="1" peptide="HMLDVLLLFIAVIVALPVLAELGM"
    peptide_prev_aa="R" peptide_next_aa="-"
    protein="DECOY_P10635" num_tot_proteins="1"
    calc_neutral_pep_mass="2591.2922" massdiff="-0.0342"
    num_tol_term="2" num_missed_cleavages="0" is_rejected="0"
    protein_descr="\ID=CP2D6_HUMAN \MODRES=
    \VARIANT=(11|V|M)(26|R|H)(28|R|C)(34|P|S)(42|G|R)(85|A|V)(91|L|M)(94|H|R)(107|T|I)(120|F|I)(155|E|K)(169|G|R)(212|G|E)(231|L|P)(237|A|S)(281|.|)(296|R|C)(297|I|
    <search_score name="delta_cn" value="0."/>
    <search_score name="xcorr_score" value="0.37809423"/>
   </search_hit>
  </search_result>
 </spectrum_query>
 <spectrum_query spectrum="B08-02057.00246.00246.4" start_scan="246"
  end_scan="246"
  precursor_neutral_mass="2007.0232" assumed_charge="4" index="6">
  <search_result>

 (compared to the original Sequest output, the difference is also that the
 scores are named xcorr and deltacn...)

 --andreas

 On Wed, Feb 2, 2011 at 5:04 PM, Matthew Chambers
 matt.chamber...@gmail.com wrote:

    Can you paste a few of the spectrum_query elements? It seems like you
 got it to process properly
    as SEQUEST, so there's something wrong further down.

    -Matt







Re: [spctools-discuss] processing crux/tide results with tpp

2011-02-01 Thread Jimmy Eng
Andreas,

There's definitely no support for Crux in TPP 4.4.1 if that's what you
tried.  The semi good news is that David does have code in the Sashimi
SourceForge repository to start supporting Crux so maybe it'll work in
the next full TPP release.  You might be able to push analysis through
by editing your Crux pep.xml file and making it look like a Sequest
pep.xml file for the time being.

On Tue, Feb 1, 2011 at 3:18 PM, Andreas Quandt quandt.andr...@gmail.com wrote:
 dear list,
 i tried processing some pep.xml produced by crux with the tpp.
 however i get always the same error message (i tried xinteract and the
 peptideprophetparser both with the Pd option):
 error: engine Crux not recognized
 is the any possible chance to process these results with the tpp?
 cheers,
 andreas






Re: [spctools-discuss] Re: PepXMLViewer.cgi error on TPP4.4

2011-01-25 Thread Jimmy Eng
XPRESS had a bug in TPP 4.4.0 which was addressed in TPP 4.4.1.  What
solution to what problem are you looking for?

On Tue, Jan 25, 2011 at 2:18 AM, furukie efur...@etc.a-star.edu.sg wrote:
 Hi,

 I managed to solve my problem and it's running successfully! I am still
 having slight problems with the XPress ratio, though. I'm still
 reading about the error that's been previously posted by others.
 Hopefully the solution is out there!

 Thanks for the help!

 -Eiko


 On Jan 24, 7:30 pm, furukie efur...@etc.a-star.edu.sg wrote:
 Hi,

 so I tried running with probability filter 0 and included more
 files. The PepXMLViewer.cgi error did not pop up this time; however,
 the command still failed. When I tried to open the .shtml file, I
 get:-

 could not delete /tmp/PepXMLViewera04956, error 13 Permission denied
 error:
 unable to read pepxml file: error parsing pepxml file: XML parsing
 error: no element found, at xml file line 357955, column 0

 From the tpp petunia, the command reads:-

  run_in c:/Inetpub/wwwroot/ISB/data/P00416; c:\Inetpub\tpp-bin
 \xinteract  -NP00416.interact.pep.xml -p0 -l7 -Op -X-m1.0-L-c5-p1 c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B01Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/data/
 P00416/PHHUP00416G00336GL03B02Mix.tandem.pep.xml c:/Inetpub/wwwroot/
 ISB/data/P00416/PHHUP00416G00336GL03B03Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B04Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B05Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/data/
 P00416/PHHUP00416G00336GL03B06Mix.tandem.pep.xml c:/Inetpub/wwwroot/
 ISB/data/P00416/PHHUP00416G00336GL03B07Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B08Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B09Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/data/
 P00416/PHHUP00416G00336GL03B10Mix.tandem.pep.xml c:/Inetpub/wwwroot/
 ISB/data/P00416/PHHUP00416G00336GL03B11Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B12Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B13Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/data/
 P00416/PHHUP00416G00336GL03B14Mix.tandem.pep.xml c:/Inetpub/wwwroot/
 ISB/data/P00416/PHHUP00416G00336GL03B15Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B16Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/taxonomy.xml

 c:\Inetpub\tpp-bin\xinteract (TPP v4.4 VUVUZELA rev 1, Build
 201010121551 (MinGW))

 running: C:/Inetpub/tpp-bin/InteractParser P00416.interact.pep.xml
 c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B01Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/
 data/P00416/PHHUP00416G00336GL03B02Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B03Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B04Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/
 data/P00416/PHHUP00416G00336GL03B05Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B06Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B07Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/
 data/P00416/PHHUP00416G00336GL03B08Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B09Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B10Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/
 data/P00416/PHHUP00416G00336GL03B11Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B12Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B13Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/
 data/P00416/PHHUP00416G00336GL03B14Mix.tandem.pep.xml c:/Inetpub/
 wwwroot/ISB/data/P00416/PHHUP00416G00336GL03B15Mix.tandem.pep.xml c:/
 Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B16Mix.tandem.pep.xml c:/Inetpub/wwwroot/ISB/
 data/P00416/taxonomy.xml -L7
  file 1: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B01Mix.tandem.pep.xml
  file 2: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B02Mix.tandem.pep.xml
  file 3: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B03Mix.tandem.pep.xml
  file 4: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B04Mix.tandem.pep.xml
  file 5: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B05Mix.tandem.pep.xml
  file 6: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B06Mix.tandem.pep.xml
  file 7: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B07Mix.tandem.pep.xml
  file 8: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B08Mix.tandem.pep.xml
  file 9: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B09Mix.tandem.pep.xml
  file 10: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B10Mix.tandem.pep.xml
  file 11: c:/Inetpub/wwwroot/ISB/data/P00416/
 PHHUP00416G00336GL03B11Mix.tandem.pep.xml
  file 12: c:/Inetpub/wwwroot/ISB/data/P00416/
 

Re: [spctools-discuss] X!Tandem Cyclone with TPP for 64bit

2011-01-25 Thread Jimmy Eng
There's no inherent limitation keeping the TPP's version of Tandem
from using 100% of your CPU.  Did you set the "spectrum, threads"
parameter to 4 for your quad core CPU?  Any chance you're disk or I/O
bound?
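For reference, that parameter lives in the X!Tandem input (or default input) XML file; a value matching your core count looks like:

```xml
<note type="input" label="spectrum, threads">4</note>
```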

I can't help you with Tandem Cyclone & TPP; it may work.  Do a quick
search and see if you can convert the results to pep.xml and pass
through the TPP pipeline.

On Tue, Jan 25, 2011 at 6:49 AM, Zac zacmc...@gmail.com wrote:
 I am using windows 7 64 bit on a quad core PC.  I noticed that when I
 used the tandem packaged with TPP, only 15% of the CPU was used during
 the search.  I thought I would try downloading X!Tandem Cyclone which
 supports 64bit to get full use of the processing power.

 Is this the best way to go for 64 bit?

 thanks

 Zac Mc Donald







Re: [spctools-discuss] Re: XPRESS discrepancy between PepXML viewer and XPRESS viewer

2010-12-20 Thread Jimmy Eng
Oliver,

I finally had a chance to revert to 4.3.1 on two machines (linux &
windows desktop), run XPRESS on an old ICAT dataset, and view the
ratios using the new 4.4.1 XPressUpdateParser.cgi.  On both systems I
don't see the inconsistent ratios being reported for this dataset.
Then I found some Orbi SILAC datasets which were run under 4.3.1.
Viewing the ratios & chromatograms using the current 4.4.1 cgi viewer
shows the exact same ratios as calculated by 4.3.1 XPRESS.

At this point, I can't replicate the discrepancy you're seeing.  My
advice would be to run your analysis again and see if the discrepancy
remains.  If you still see the problem, isolate a small dataset
(single lcms run) and send it to me (mzXML, pep.xml) along with your
XPRESS run parameters.

- Jimmy

On Mon, Dec 13, 2010 at 10:02 AM, oschill...@gmail.com
oschill...@gmail.com wrote:
 Sorry for my late reply: the discrepancy occurs when viewing a TPP 4.3
 analysis with TPP 4.4. We have now rolled back to TPP 4.3 to keep our
 XPRESS analysis consistent. Any advice on how to proceed in the
 future?

 Thanks

 Oliver

 On Nov 23, 5:46 pm, Jimmy Eng jke...@gmail.com wrote:
 Oliver,

 What parameters did you use to run XPRESS?  The GUI showing elution
 profiles has no current support for the isotope option (summed
 intensities of first N isotope peaks) but otherwise should return the
 same ratios as that shown in the pepXML file.

 - Jimmy

 On Mon, Nov 22, 2010 at 5:09 AM, oschill...@gmail.com



 oschill...@gmail.com wrote:
  Dear TPP community,

  we notice a small discrepancy between the XPRESS values displayed in
  the PepXML viewer tab (table with peptide sequences etc) and the
  XPRESS tab (graphic display of elution peak). For almost all peptides,
  we observe slightly different XPRESS values in both tabs. For example,
  a peptide has an XPRESS value of 2.32:1 in the PepXML viewer and
  2.27:1 in the XPRESS viewer.

  Our impression is that this discrepancy occurs as of TPP 4.4.1 and
  does not occur for TPP 4.3.x.

  Can anyone please advise us on how to proceed here?

  Thanks a lot

  Oliver








Re: [spctools-discuss] FDR in PeptideProphet

2010-11-25 Thread Jimmy Eng
It should be number(decoys)/number(targets) irrespective of
concatenated or separate searches.
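For reference, the two decoy-FDR conventions discussed in this thread can be written as (a minimal sketch; function names are mine):

```python
def fdr_concatenated(n_decoy, n_total):
    """Elias & Gygi convention: concatenated target+decoy search;
    n_total counts all accepted hits (target + decoy)."""
    return 2.0 * n_decoy / n_total

def fdr_separate(n_decoy, n_target):
    """Kall et al. convention (and the formula above): decoy hits
    divided by target hits."""
    return n_decoy / n_target
```

Using Bjorn's numbers further down the thread (50 decoy hits among 1200 accepted spectra, 1150 of them target), fdr_separate(50, 1150) gives roughly 0.0435.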

On Wed, Nov 24, 2010 at 5:20 AM, Amit Yadav amit007thech...@gmail.com wrote:
 Hi,

 From what I understand, the simplest ways to calculate FDR using a
 Target-Decoy search are:-

 1. Run a concatenated target and decoy database search. The formula
 would be: 2 * number(decoys) / number(target+decoys) - see Elias
 and Gygi, Nat Methods. 2007 Mar;4(3):207-14.

 2. Run Separate target and decoy searches. The formula would be:
 number(decoys)/number(targets). See Kall et al J Proteome Res. 2008
 Jan;7(1):29-34

 I think your formula should be the first one. Members, please correct
 me if I am wrong.

 Regards,

 Amit Kumar Yadav
 Senior Research Fellow (SRF-CSIR)
 IGIB, New Delhi (India)

 MassWiz Web server
 MassWiz sourceforge project
 MassWiki



 On Wed, Nov 24, 2010 at 6:39 PM, Bjorn caveman.bj...@gmail.com wrote:

 Hello all,

 I am trying to find my way through the different interpretations and
 calculations of the FDR, but I'm a bit lost where it comes to the
 PeptideProphet.

 So far, I know that the simplest calculation is (decoy_hits/
 target_hits). So, I ran X!tandem on my dataset (target DB +
 concatenated decoy DB) and then ran PeptideProphet on the resulting
 pep.xml file (not setting the decoy option!).
 I then set a certain probability threshold (say 0.85) and looked at
 the number of spectra (e.g. 1200 of 1500). Setting the protein text to
 decoy gives me, e.g. 50 hits. The FDR for a prob. of 0.85 would then
 be 50/1150. If I look at the table displayed in the PeptideProphet,
 the FDR I found is almost spot-on with the number in the table. The
 thing is that if I lower my probability threshold, the FDRs in the
 table stay relatively low, while the FDRs I calculate with the above
 formula go up. Is there any extra correction being done in the
 PeptideProphet (like correct for the percentage of incorrect target
 hits (PIT) as described by Kall et al in 2008)?
 Or am I missing something vital here?

 Furthermore, how is the estimated number of correct assignments
 calculated? I can't seem to find the correct formula to come to that
 number? I assume it is related to the surface under the curve plotted
 with the correct identifications?

 Many questions, but I hope someone can help me with this.

 Best regards,

 bjorn









Re: [spctools-discuss] XPRESS discrepancy between PepXML viewer and XPRESS viewer

2010-11-23 Thread Jimmy Eng
Oliver,

What parameters did you use to run XPRESS?  The GUI showing elution
profiles has no current support for the isotope option (summed
intensities of first N isotope peaks) but otherwise should return the
same ratios as that shown in the pepXML file.

- Jimmy

On Mon, Nov 22, 2010 at 5:09 AM, oschill...@gmail.com
oschill...@gmail.com wrote:
 Dear TPP community,

 we notice a small discrepancy between the XPRESS values displayed in
 the PepXML viewer tab (table with peptide sequences etc) and the
 XPRESS tab (graphic display of elution peak). For almost all peptides,
 we observe slightly different XPRESS values in both tabs. For example,
 a peptide has an XPRESS value of 2.32:1 in the PepXML viewer and
 2.27:1 in the XPRESS viewer.

 Our impression is that this discrepancy occurs as of TPP 4.4.1 and
 does not occur for TPP 4.3.x.

 Can anyone please advise us on how to proceed here?

 Thanks a lot

 Oliver







Re: [spctools-discuss] Re: pepXMLViews with mzXML generated from mgf file ....

2010-11-08 Thread Jimmy Eng
Jagan,

Hopefully searches run as in steps 1 through 4 have no issues with
visualizing the spectra.

Regarding points 6 and 7, I can't tell you definitively why there's a
difference in scan counts without knowing specifics of how the MGF was
created.  If I had to guess, one thing that accounts for the scan count
differences is that the mzXML contains both MS1 and MS/MS scans whereas the
MGF file only has MS/MS scans in it.  Likely that MGF contains just a subset
of all MS/MS scans from the raw file due to not meeting some criteria.  Both
of these reasons and many others could easily explain the scan count
differences.  For a definitive explanation of why maybe the MS/MS counts
don't match up, you need to go back to the tool that generated the MGF to
figure out what it did.  I didn't bother downloading your raw file but
you'll hopefully find that it has the same number of total scans as the
mzXML does (unless you have empty scans in the raw file).

Regarding what you did in step 5, the mzXML created from the mgf starts with
scan number 0 which is a tad unorthodox as 1 is the required first scan for
a valid mzXML.  But that's not necessarily the reason why you can't
visualize spectra from this step 5 workflow.  As I stated in the previous
post, this mgf *must* contain the right information encoded in the TITLE
line that the TPP expects.  The first spectrum that got translated to scan 0
in the mzXML must have a TITLE line that conforms to something like
TITLE=basename.0.0.charge and the second spectrum that got translated to
scan 1 in the mzXML must have a TITLE line that appears as
TITLE=basename.1.1.charge.  You can pad the scan numbers if you wish,
like MzXML2Search does (e.g. TITLE=basename.00001.00001.charge).  And this
TITLE information should exist in the mgf before you do the search.
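A sketch of rewriting MGF TITLE lines into that basename.scan.scan.charge form, numbering spectra sequentially and zero-padding like MzXML2Search. The helper name and the charge-2 fallback are mine, not from the thread:

```python
def retitle_mgf(mgf_text, basename):
    """Rewrite each spectrum's TITLE line as basename.scan.scan.charge,
    numbering spectra 1..N in file order.  Charge is read from the
    spectrum's CHARGE= line (falling back to 2 if absent)."""
    out_lines, block, scan = [], None, 0
    for line in mgf_text.splitlines():
        if line.strip() == "BEGIN IONS":
            block = [line]
            continue
        if block is None:          # text outside any spectrum block
            out_lines.append(line)
            continue
        block.append(line)
        if line.strip() == "END IONS":
            scan += 1
            charge = "2"  # assumed fallback; real MGFs should set CHARGE
            for b in block:
                if b.startswith("CHARGE="):
                    charge = b.split("=")[1].strip().rstrip("+-")
            title = "TITLE=%s.%05d.%05d.%s" % (basename, scan, scan, charge)
            out_lines.extend(title if b.startswith("TITLE=") else b
                             for b in block)
            block = None
    return "\n".join(out_lines)
```

The basename passed in must match the mzXML base name, or the TPP viewers still won't find the spectra.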


On Sun, Nov 7, 2010 at 4:30 PM, Jagan Kommineni
jagan.kommin...@gmail.com wrote:

 Dear Jimmy,

 Here are the steps that we followed in converting files.

 1. Original RAW file has been converted into mzXML file using msconvert in
 Windows Environment.

  \bin\msconvert\msconvert.exe --mzXML --64 --mz64 --inten64 --filter
  "peakPicking true 1-" jagan-J2205.RAW
  (https://search.apcf.edu.au/dbdownloads/jagan-J2205.RAW.zip)


 2. mzXML file is converted into mgf format using mzXML2Search.exe in
 Windows Environment.

 bin\ISB\mzXML2Search.exe -mgf jagan2205.mzXML

 3. The generated mgf file has been used to run OMSSA search and then pepXML
 file has been created to view from the results with pepXMLViewer.
 Step 3 is performed in Linux environment.

 4. Whenever user specifes to run prophets (peptide and protein prophets) on
 the server we transfer the mzXML file otherwise we transfer only mgf files
 to perform stanard OMSSA search.

 5. In the latter case where user wants perform standard OMSSA search we
 don't have mzXML file but on the fly I am trying to generate mzXML file from
 the mgf file using msconvert.

 I have used same parameters that used to generate RAW to mzXML file in
 generating mzXML file from mgf.

  /var/www/APCF_WEB/tpp/bin/msconvert --mzXML --64 --mz64 --inten64 --filter
  "peakPicking true 1-" jagan-J2205.mgf

 6. The scan count in mzXML generated from RAW file is 9800 whereas scan
 count in mzXML file generated from MGF file is 3444.
 Here are respective entries in mzXML files ...

  <msRun scanCount="9800" startTime="PT0.1826S" endTime="PT5340.21S">
  <msRun scanCount="3444" startTime="PT1.88S" endTime="PT5339.11S">

 7. The number of entries in mgf file is 3444.

  [r...@apcf-hn3 downloads]# grep "BEGIN IONS" jagan-J2205.mgf | wc -l
 3444

 8. I put all the files on APCF Community Wiki Page and can be downloaded
 from the following web pointer

 
 https://search.apcf.edu.au/wiki/index.php/Apcfwiki:Community_Portal#APCF_Files_for_Jimmy_Eng_and_TPP_team
 .

 9. I don't have any issues in viewing the petide and protein information
 from the pepXML file.

 we have also tried by using -L15000 in running mzXML2Search.exe but there
 no difference in the scan count of mgf file.



 with regards,

 Jagan Kommineni



 On Fri, Nov 5, 2010 at 1:56 PM, Jimmy Eng jke...@gmail.com wrote:

 Jagan,

 Start with your mzXML file.  Generate mgf using MzXML2Search -mgf
 yourfile.mzXML.  Use this to do your omssa search and hopefully this fixes
 your problem.

 Compare the TITLE lines of the mgf generated by MzXML2Search with your
 standard mgf you used for the search.  That difference is the reason why
 you can't see spectra as the required encoded information (mzXML base name
 and scan #) are likely not in the mgf file you used to search.  Without this
 information encoded in a specific way, there's no way that the TPP will know
 which spectrum belongs to which peptide ID.

 Good luck!

 (Also, you should be able to export pep.xml directly from omssa using the
 -op option but presumably this isn't a part of the problem.)

 On Thu, Nov 4, 2010 at 7:14 PM, Jagan Kommineni 
 jagan.kommin...@gmail.com wrote:

 Dear All,

 I haven't received my previous message

Re: [spctools-discuss] Re: pepXMLViews with mzXML generated from mgf file ....

2010-11-08 Thread Jimmy Eng
As far as I'm aware, there's no tools that can do this.  Hopefully others
can chime in if they know otherwise.

On Mon, Nov 8, 2010 at 7:59 PM, Jagan Kommineni
jagan.kommin...@gmail.com wrote:

 I wonder whether these is any way possible to generate mzXML file from MGF
 file to match the scan number as that of original scan numbers.


-- 
You received this message because you are subscribed to the Google Groups 
"spctools-discuss" group.
To post to this group, send email to spctools-disc...@googlegroups.com.
To unsubscribe from this group, send email to 
spctools-discuss+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/spctools-discuss?hl=en.



Re: [spctools-discuss] Re: building tpp in VC8

2010-11-05 Thread Jimmy Eng
I just tried readw on wine for the very first time this afternoon
(version 1.2.1 on a RHEL 5.5 box); I'm impressed with how easy it was
to setup.  Both very old LCQ and new Orbi RAW files generated mzXML's
that were *exactly* the same between wine and native Windows
conversions.  'diff' returned no differences so scan offset indices
and checksums were exact matches too.

On Fri, Nov 5, 2010 at 11:11 AM, Jeffrey Milloy
jeffrey.a.mil...@gmail.com wrote:
 Nice! I did a lengthy check as well just to be sure and found the same
 thing, including msconvert MzXMLs as well. Unfortunately, I wasn't
 here when we set our wine readw system up so I don't really know what
 went into it.

 On Nov 5, 1:29 pm, dctrud david.trudg...@path.ox.ac.uk wrote:
 Just read this post and thought I'd check our wine ReAdW output vs
 native XP using diff. Like Jeffrey we're using Trapper and ReAdW on
 Ubuntu (10.04 LTS), wine version 1.2. We were previously using it on
 Ubuntu 9.04 for over a year, with whatever version of wine that had.
 Never had any problems using the MzXMLs converted through wine, and
 it's *very* nice to be able to do it on a fast server directly
 connected to the main data storage array rather than on a windows
 desktop.

 sha1 hashes for the files are different, but diff shows that the only
 differences are:

 1) parentFile tag has a different filename attribute (since the .RAW
 is in a different place)
 2) all of the offset values are slightly different, because the
 parentFile filename attribute is a few bytes different
 3) indexOffset is different (again due to the filename difference)
 4) sha1 is different due to the above differences between the files.

 Other than this the wine version and windows version of the mzXML are
 identical. No differences at all in any of the markup or encoded
 spectra other than for the tags listed above.
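The comparison DT describes can be sketched in a few lines of Python; the filtered-out fields (parentFile, offsets, indexOffset, sha1) are taken from the list of differences above:

```python
import re

# Fields that legitimately differ between a wine and a native Windows
# conversion: parent file location, byte offsets, index offset, checksum.
# Attribute names follow the mzXML schema.
VARIABLE_FIELDS = re.compile(r"parentFile|offset|sha1", re.IGNORECASE)

def stable_lines(mzxml_text):
    """Drop the location-dependent lines so everything else can be compared."""
    return [ln for ln in mzxml_text.splitlines() if not VARIABLE_FIELDS.search(ln)]

def effectively_identical(a, b):
    return stable_lines(a) == stable_lines(b)
```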

 Cheers,

 DT

 On Oct 14, 11:47 pm, Natalie Tasman natalie.tas...@insilicos.com
 wrote:







  Have you done comparisons between mzXML output from your WINE
  installation and a native Windows environment?  I would expect that
  there will be some subtle but important differences between the two.
  Like I've said before on this list, and as Matt stated on this thread,
  if you have this working 100% completely it's the first time it's been
  reported.  Nice job if so.

  -Natalie

  On Thu, Oct 14, 2010 at 11:14 AM, Jeffrey Milloy

  jeffrey.a.mil...@gmail.com wrote:
   XCalibur and ReAdW have been running in wine for more than two years
   now, through several ubuntu upgrades and wine updates. Seems totally
   stable, although now I'm hesitant to update XCalibur and ReAdW to
   current versions! This is one reason why I figure this is a reasonable
   route for adding my own functionality.

   On Oct 14, 2:06 pm, Matthew Chambers matt.chamber...@gmail.com
   wrote:
     I wouldn't recommend getting your hopes up on using XRawfile (a COM 
   interface to an MFC library) with Wine. I have not heard of anyone able 
   to do this and
   reproduce it reliably on other machines. It may only work with a narrow 
   set of Wine versions, kernel version, x86 emulation, etc. Your time 
   would probably be
   better spent creating a true Windows VM to do this. On the other hand, 
   if anybody is able to do this reliably and I just haven't heard about 
   it, I sure would
   like to know.

   -Matt

   On 10/14/2010 12:56 PM, Jeffrey Milloy wrote:

I'm having a hard time building tpp. My interest in the tpp source is
in reading XRawfile metadata using c++. If anyone has a different
suggestion for using the XRawfile2 DLL with c++ I would be grateful.
C# won't work for me because I want to run the code in wine/mono and
CSLID is not supported in mono.








Re: [spctools-discuss] ASAP Ratio with Mascot Search

2010-10-22 Thread Jimmy Eng
 Similarly, if
 only the difference between light and heavy residues are entered for
 XPRESS, how does this program know the mass of the light and heavy
 tagged residues?

Regarding this question, XPRESS doesn't need to know the mass of the
tagged residues as that information isn't necessary.  Knowing the
labeled residues which you specify, it determines if a peptide is the
light or heavy form based on the modifications from the search because
you have to run the search in a specific way.  Peptides with no
variable modifications on the specified residues are light and
peptides with variable modifications on all of the specified residues
are heavy.  The mass of the identified peptide is known and
calculated from the database search.  XPRESS just needs to get the
mass of the paired peptide now.

Once it knows that a peptide is either light or heavy, it can
calculate the m/z of the corresponding pair by counting the number of
labeled residues in the peptide (which it knows 'cause you specified
the labeled residues) and their mass difference (which you also
specify).  The # of labeled residues and mass difference gives the
mass to either add or subtract from the identified peptide mass to
calculate the mass of the other pair.  Hope I explained this clearly
and hopefully someone else will chime in on your ASAPRatio questions.
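The pairing arithmetic described above can be sketched in a few lines (an illustration only, not XPRESS's actual code):

```python
def paired_mass(identified_mass, n_labeled, mass_diff, identified_is_heavy):
    """Mass of the other member of a light/heavy pair, per the logic above:
    the number of labeled residues times the per-residue mass difference is
    added to a light peptide or subtracted from a heavy one."""
    shift = n_labeled * mass_diff
    return identified_mass - shift if identified_is_heavy else identified_mass + shift
```

For example, a light peptide of 1500.0 Da with two labeled lysines and an 8.0142 Da label pairs with a heavy form at 1516.0284 Da.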




Re: [spctools-discuss] Re: ASAP Ratio with Mascot Search

2010-10-22 Thread Jimmy Eng
Determining light or heavy peptide is simple and shouldn't be a black
box:  No variable mods on the specified residues equals light peptide.
 Variable mods on all of the specified residues equals heavy peptide.
If there's a mixture (in your case an unmodified + a modified lysine
in the same peptide), then that peptide is ignored.

On Fri, Oct 22, 2010 at 2:20 PM, Mike michael.gorm...@gmail.com wrote:
 Thanks for your prompt reply Jimmy.  It is clear to me how XPRESS
 calculates the mass difference between light and heavy labeled
 peptides.  As you describe, this is simply a matter of knowing the
 number of labeled residues in the peptide and the associated mass
 differences.  It is unclear how the program identifies whether a
 peptide is light labeled or heavy.  For instance, let us say that I
 have detected a doubly charged peptide at an m/z of 900.  If this is
 the light labeled peptide, I would expect to see the heavy labeled
 peptide pair at an m/z of 904.  If this is the heavy labeled peptide,
 I expect to see the light labeled pair at an m/z of 896.  I can assume
 that XPRESS obtains the information regarding whether a peptide is
 light or heavy labeled based on which variable modifications match the
 spectra in the MASCOT search.  This aspect of the analysis is working
 however, so I am content to keep this process in a black box.

 I suspect my problems with ASAP ratio come from discrepancies in the
 way the Mascot search was done and what ASAP ratio expects to find in
 the pepXML file.

 On Oct 22, 4:53 pm, Jimmy Eng jke...@gmail.com wrote:
  Similarly, if
  only the difference between light and heavy residues are entered for
  XPRESS, how does this program know the mass of the light and heavy
  tagged residues?

 Regarding this question, XPRESS doesn't need to know the mass of the
 tagged residues as that information isn't necessary.  Knowing the
 labeled residues which you specify, it determines if a peptide is the
 light or heavy form based on the modifications from the search because
 you have to run the search in a specific way.  Peptides with no
 variable modifications on the specified residues are light and
 peptides with variable modifications on all of the specified residues
 are heavy.  The mass of the identified peptide is known and
 calculated from the database search.  XPRESS just needs to get the
 mass of the paired peptide now.

 Once it knows that a peptide is either light or heavy, it can
 calculate the m/z of the corresponding pair by counting the number of
 labeled residues in the peptide (which it knows 'cause you specified
 the labeled residues) and their mass difference (which you also
 specify).  The # of labeled residues and mass difference gives the
 mass to either add or subtract from the identified peptide mass to
 calculate the mass of the other pair.  Hope I explained this clearly
 and hopefully someone else will chime in on your ASAPRatio questions.







Re: [spctools-discuss] Re: ASAP Ratio with Mascot Search

2010-10-22 Thread Jimmy Eng
In the scenario you describe, XPRESS would think that all of the
identified peptides were heavy and quantify the corresponding light
partner by always subtracting a mass from the identified peptide mass.
 So check your results to confirm whether or not this is the case; I'd
have to think that half of the quantitation numbers are wrong (those
where the +140 mod peptides were considered heavy).

The supported search method is to use static mods of 140 and a
variable mod of 8.
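Mike's numeric example checks out under this scheme; a quick sketch (the proton mass cancels, so only the total label mass difference divided by the charge matters):

```python
def partner_mz(mz, charge, n_labeled, mass_diff, identified_is_heavy):
    """m/z of the light/heavy partner of an identified peptide."""
    shift = n_labeled * mass_diff / charge
    return mz - shift if identified_is_heavy else mz + shift

# The doubly charged peptide at m/z 900 with one labeled residue and an
# 8 Da light/heavy difference, from the example above:
assert partner_mz(900.0, 2, 1, 8.0, identified_is_heavy=False) == 904.0
assert partner_mz(900.0, 2, 1, 8.0, identified_is_heavy=True) == 896.0
```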

On Fri, Oct 22, 2010 at 2:38 PM, Mike michael.gorm...@gmail.com wrote:
 In the MASCOT search, I specify a variable modification for the n-
 terminus and lysine residue for both light and heavy tags.  This
 allows for the identification of unlabeled peptides.  In this
 framework, no variable mods is unlabeled, whereas variable mod with an
 addition of 140 Da is light and variable mod with an addition of 148
 Da is heavy.  Perhaps this is the discrepancy that is causing me problems
 with ASAP ratio.  Should the MASCOT search be formatted such that the
 light tag is a fixed modification and the difference between heavy and
 light is a variable modification?

 On Oct 22, 5:26 pm, Jimmy Eng jke...@gmail.com wrote:
 Determining light or heavy peptide is simple and shouldn't be a black
 box:  No variable mods on the specified residues equals light peptide.
  Variable mods on all of the specified residues equals heavy peptide.
 If there's a mixture (in your case an unmodified + a modified lysine
 in the same peptide), then that peptide is ignored.



 On Fri, Oct 22, 2010 at 2:20 PM, Mike michael.gorm...@gmail.com wrote:
  Thanks for your prompt reply Jimmy.  It is clear to me how XPRESS
  calculates the mass difference between light and heavy labeled
  peptides.  As you describe, this is simply a matter of knowing the
  number of labeled residues in the peptide and the associated mass
  differences.  It is unclear how the program identifies whether a
  peptide is light labeled or heavy.  For instance, let us say that I
  have detected a doubly charged peptide at an m/z of 900.  If this is
  the light labeled peptide, I would expect to see the heavy labeled
  peptide pair at an m/z of 904.  If this is the heavy labeled peptide,
  I expect to see the light labeled pair at an m/z of 896.  I can assume
  that XPRESS obtains the information regarding whether a peptide is
  light or heavy labeled based on which variable modifications match the
  spectra in the MASCOT search.  This aspect of the analysis is working
  however, so I am content to keep this process in a black box.

  I suspect my problems with ASAP ratio come from discrepancies in the
  way the Mascot search was done and what ASAP ratio expects to find in
  the pepXML file.

  On Oct 22, 4:53 pm, Jimmy Eng jke...@gmail.com wrote:
   Similarly, if
   only the difference between light and heavy residues are entered for
   XPRESS, how does this program know the mass of the light and heavy
   tagged residues?

  Regarding this question, XPRESS doesn't need to know the mass of the
  tagged residues as that information isn't necessary.  Knowing the
  labeled residues which you specify, it determines if a peptide is the
  light or heavy form based on the modifications from the search because
  you have to run the search in a specific way.  Peptides with no
  variable modifications on the specified residues are light and
  peptides with variable modifications on all of the specified residues
  are heavy.  The mass of the identified peptide is known and
  calculated from the database search.  XPRESS just needs to get the
  mass of the paired peptide now.

  Once it knows that a peptide is either light or heavy, it can
  calculate the m/z of the corresponding pair by counting the number of
  labeled residues in the peptide (which it knows 'cause you specified
  the labeled residues) and their mass difference (which you also
  specify).  The # of labeled residues and mass difference gives the
  mass to either add or subtract from the identified peptide mass to
  calculate the mass of the other pair.  Hope I explained this clearly
  and hopefully someone else will chime in on your ASAPRatio questions.






Re: [spctools-discuss] Re: building ReAdW (fann.h?)

2010-10-12 Thread Jimmy Eng
To keep everyone on the same page, even though this thread is
referring to building ReAdW in VC, I'm pretty sure David's successful
gcc/mingw builds refers to building the other TPP components and not
the mzXML converter that you're interested in.

On Tue, Oct 12, 2010 at 11:35 AM, Jeffrey Milloy
jeffrey.a.mil...@gmail.com wrote:
 From what I can see, the folder structure in tpp/extern/gsl-1.14
 doesn't match the include file paths anywhere in gsl. Any ideas why I
 have this discrepancy?

 for example, in extern/gsl-1.14/linalg/balance.c:

 #include <config.h>
 #include <stdlib.h>
 #include <gsl/gsl_math.h>
 #include <gsl/gsl_vector.h>
 #include <gsl/gsl_matrix.h>
 #include <gsl/gsl_blas.h>

 In the vc8 solution, gsl-1.14 is listed as an include directory, which
 is fine except there is no gsl folder. The gsl headers cannot be
 found. I changed it to the following, but this problem pervades all of
 gsl.

 #include <config.h>
 #include <stdlib.h>
 #include <gsl_math.h>
 #include <vector/gsl_vector.h>
 #include <matrix/gsl_matrix.h>
 #include <blas/gsl_blas.h>

 How does it look for you, David?

 Jeff


 On Oct 12, 2:23 pm, David Shteynberg dshteynb...@systemsbiology.org
 wrote:
 Yes, builds quite nicely for me using gcc under linux and mingw.

 -David

 On Tue, Oct 12, 2010 at 10:57 AM, Jeffrey Milloy



 jeffrey.a.mil...@gmail.com wrote:
  Hi Brian

  Building is no longer essential for me, but I gave it a try anyways.
  the fann libraries are sorted out, but the gsl project is a mess of
  missing/moved header files. Does trunk build for anyone right now?

  On Oct 12, 10:36 am, Brian Pratt brian.pr...@insilicos.com wrote:
  Try building from trunk.

  That fann stuff came in to the build fairly recently and was probably
  pushed into the branch without the accompanying changes to the MSVC
  build files.

  - Brian

  On Tue, Oct 12, 2010 at 7:02 AM, Jeffrey Milloy

  jeffrey.a.mil...@gmail.com wrote:
   Thanks for the quick reply Natalie. I am looking into msconvert to
   replace readw. Does msconvert check if the RAW file is in aquisition
   before generating the mzXML?

    First, when I try to build msconvert, I have the same problem. Maybe I
    should have titled my thread "Building tpplib" because that is where
    fann.h is required! A search for "fann" on the web turns up a fast
    artificial neural network library - is this really what I need? Why am
    I unable to build tpplib out of the box, as others do?

   Secondly, I am currently interested the rawfile libraries in order to
   write some very short/simple code of my own. I am using readw as
   example code with working rawfile libs. In particular, I want to grab
   the comment and other header information in the RAW file in order to
   populate our in-house database. Is msconvert a more appropriate
   example to build from?

   On Oct 11, 6:14 pm, Natalie Tasman natalie.tas...@insilicos.com
   wrote:
   As one of the main developers on ReAdW, I'd suggest switching to the
   much more current msconvert program, part of the ProteoWizard project
   and included in the TPP (though likely a newer version is available
    directly from pwiz).  ReAdW does not have a lot of attention on it,
   versus the very-well maintained pwiz project.  If there's no
   compelling reason to specifically use ReAdW, I'd switch.  If there is,
   please let the pwiz team know so the functionality can be added to
   msconvert.

   -Natalie

   On Mon, Oct 11, 2010 at 2:24 PM, Jeffrey Milloy

   jeffrey.a.mil...@gmail.com wrote:
I'm trying to build ReAdW under winxp.

After no success with Visual Studio 2010 Professional, I am using
Visual Studio 2005 Professional.

I checked out tags/release_4-4-0/trans_proteomic_pipeline, and tried
to build the ReAdW project under the Debug configuration. Everything
went well until tpplib, where it could not find fann.h:

 6>C:\sashimi\trans_proteomic_pipeline\src\Parsers/Algorithm2XML/
 RTCalculator.h(29) : fatal error C1083: Cannot open include file:
 'fann.h': No such file or directory

Any ideas?

Jeff




Re: [spctools-discuss] correlation of ASAP and XPRESS quantitation

2010-09-14 Thread Jimmy Eng
Lukas,

These tools could definitely be optimized and improved, especially for
high res data that didn't really exist when they were first developed.
 That said, with respect to question 1, you're using two imperfect
tools on presumably large datasets so it's not surprising to see some
uncorrelated results.  If I were you, I would look at the extracted
chromatograms of these peptides to see how the two tools determined
the respective light and heavy areas used to calculate the ratios.
One (or both) likely failed for one reason or another which is one of
the unfortunate reasons why the not-very-user-friendly GUIs exist to
allow the researchers to look at and fix/edit/discard any particular
calculated ratio.  I know you don't want to look at chromatograms for
a quarter of your identifications but that's what you probably have to
 do at this point.  Or throw them out.  Or stick with a single tool and
 pretend the other one, which gives conflicting results, didn't exist.
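Before eyeballing chromatograms, the discordant subset can at least be pulled out programmatically; a sketch with an arbitrary fold-change threshold (the tuple layout is an assumption for illustration, not a TPP export format):

```python
import math

def discordant(ratios, fold=0.5):
    """Flag peptides that XPRESS calls unregulated but ASAPRatio calls
    regulated. `ratios` is a list of (peptide, xpress_ratio, asap_mean)
    tuples with positive ratios; `fold` is a log2 threshold."""
    flagged = []
    for pep, xpress, asap in ratios:
        if abs(math.log2(xpress)) < fold and abs(math.log2(asap)) >= fold:
            flagged.append(pep)
    return flagged
```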

I don't know the answers to your other two questions, not that I
really had an answer to question 1, so hopefully someone else can
address those.

- Jimmy

On Tue, Sep 14, 2010 at 12:42 PM, lukas lukas.rei...@gmail.com wrote:
 Dear list,

 I have three question regarding ASAP and XPRESS:
 1)
 I analyzed a data set generated from a SILAC sample on an Orbitrap.
 The data was searched using Sequest with variable modifications on R
 and K. I used ASAP and XPRESS for quantitation with the following
 command (TPP v4.3 JETSTREAM rev 1, Build 201004270833 (linux)):
 xinteract -dreverse_ -p0.8 -OAlpd -X-m0.1-nK,8.014199-nR,10.008269 -A-
 Z-r0.1-F-C-lKR-mK136.10916R166.109379 *.pep.xml

 When I make the correlation between log2(asap mean) and
 log2(xpress) I can see a large part of the data points nicely
 correlating but also many data points (roughly a quarter) having close
 to zero regulation by XPRESS but non-zero regulation by ASAP.
 I wonder whether I chose any wrong options or what the reason for this
 effect could be? Any hint would be greatly appreciated.

 2)
 ASAP threw the warning message:
 WARNING: Found more than one variable mod on 'K'. Please make sure to
 specify a heavy mass for this residue.
 WARNING: Found more than one variable mod on 'R'. Please make sure to
 specify a heavy mass for this residue.

 Could it be that the reason for this are peptides that were identified
 with one light and one heavy K/R and are most likely false positives?

 3)
 Where can I read about the difference between asap mean and
 asapratio_HL?

 Thanks for any help,
 Lukas







[spctools-discuss] bug in TPP 4.4 XPRESS

2010-09-10 Thread Jimmy Eng
For any folks using XPRESS with the 4.4 release of TPP, I was recently
notified of a bug which I validated is present.  A fix has been made
and is being tested.  I would suggest not using this particular
version of the tool until the next maintenance release is out; revert
to the 4.3.1 binary if you need to quantify data with XPRESS in the
near term (replace XPressPeptideParser.exe binary).  The error
manifests itself in a lot of ratios not being calculated and being
reported as -1.0.

- Jimmy




Re: [spctools-discuss] Re: TPP 4.4.0 on mac os x 10.6.4 problem

2010-08-25 Thread Jimmy Eng
Bjorn,

In the link to the plot-msms.cgi spectrum viewer, add "Debug=1" to
the URL.  This causes the CGI to not remove the intermediate files it
uses including the *.gp gnuplot script and the *.spec? files which are
the data files to plot (peak lists and peak annotations).

Do that once and then run gnuplot manually on the *.gp files and see
if corresponding .png files are created.  If not, the error messages
should help elucidate what some of the problems are that you're
experiencing.  (None of the above addresses the problem of the cgi not
being able to find/execute gnuplot if that problem still exists.)

- Jimmy

On Wed, Aug 25, 2010 at 8:13 AM, Bjorn caveman.bj...@gmail.com wrote:
 adding: sudo ln -s /usr/local/bin/gnuplot /usr/bin/gnuplot
 did not solve it either...
 bjorn







Re: [spctools-discuss] Re: TPP 4.4.0 on mac os x 10.6.4 problem

2010-08-25 Thread Jimmy Eng
You did it all right.  Can you check which version of gnuplot you have
installed? The TPP requires 4.2 and  I suspect you might have an older
version installed.

Type 'gnuplot' on the command line to see what version you have.
'quit' will get you out of the gnuplot prompt.


On Wed, Aug 25, 2010 at 11:26 AM, Bjorn caveman.bj...@gmail.com wrote:
 Jimmy,
 Thanks for the help!

 I hope I did it right (still new to UNIX and the likes).

 I added Debug=1 at the end of the URL and reloaded the webpage.
 (still the error of course)
 In the P01070 folder, more files had been created:
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230-ms.gp
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230-ms.png.MS.spec
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230-ms.png.MS.spec2
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230-ms.png.MS.spec3
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.gp
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.png.spec
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.png.spec2
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.png.spec3
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.png.spec5
 ...

 I cd to the P010170 folder in terminal and type gnuplot
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.gp
 I get this:
 set terminal png
             ^
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.gp, line 1:
 unknown or ambiguous terminal type; type just 'set terminal' for a
 list

 I checked "set terminal" and png is not one of the options. How do I
 get the png option?

 Would that be the only reason to the problem?

 Thanks

 Bjorn







Re: [spctools-discuss] Re: TPP 4.4.0 on mac os x 10.6.4 problem

2010-08-25 Thread Jimmy Eng
Ignore that message regarding the font (which you wouldn't normally see).

A set of Windows fonts are listed to be used if available for the text
in the plots; otherwise whatever default system font gets used for
non-Windows systems.

On Wed, Aug 25, 2010 at 12:56 PM, Bjorn caveman.bj...@gmail.com wrote:
 AHA!!

 I removed the old link and installed gnuplot with sudo port install
 gnuplot.
 then, sudo ln -s /opt/local/bin/gnuplot /usr/bin/gnuplot

 I checked my TPP and I have graphs!!

 Then, I cd to the P010170 folder in terminal and type gnuplot
 100805.LC2.IT2.XX.P01070_2-A.10_01_805.1366991230.gp
 I get this:

 Could not find/open font when opening font arial, using internal non-
 scalable font

 Any idea what that could be? (it's not so important, the images in TPP
 are working, that's all I need.
 I'll redo a whole analysis and see if everything is ok.)

 Thanks all!!

 bjorn







Re: [spctools-discuss] No input spectra met the acceptance criteria

2010-07-20 Thread Jimmy Eng
Tom, possible there are no ms/ms scans in this raw file?
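A quick way to check that, short of opening the file in a viewer, is to count the msLevel attributes in the mzXML text (a rough textual tally, not a schema-aware parse):

```python
import re

def count_scans_by_level(mzxml_text):
    """Tally scans per msLevel by searching the raw text for the attribute;
    a rough sanity check rather than a proper mzXML parse."""
    levels = {}
    for m in re.finditer(r'msLevel="(\d+)"', mzxml_text):
        lvl = int(m.group(1))
        levels[lvl] = levels.get(lvl, 0) + 1
    return levels
```

If the returned dict has no key 2, there are no ms/ms scans for the search engine to work with.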

On Mon, Jul 19, 2010 at 2:01 PM, tommyg chunkeymonkeylo...@gmail.com wrote:
 Hello,

 I'm getting a No input spectra met the acceptance criteria error
 while running the latest version of X!Tandem on a 64-bit Intel(R)
 Core(TM)2 Duo CPU     E6550  @ 2.33GHz running Fedora 11.

 I've successfully run many other .pkl files, but for this
 particular .RAW file (D2.raw), I was not able to convert it to a .pkl.
 I was able to convert it to a D2.mzXML or D2.mzML file using TPP/
 msconvert, but I still get the error when running X!Tandem on these
 files. Additionally, I cannot convert the mzXML to pkl (or any other
 format) using MzXML2Search.

 I was able to convert other .raw files to mzXML, and then to pkl, and
 I was able to run these files successfully on XTandem. So this is
 specific to this one particular D2.raw/D2.mzXML file. It appears that
 there are spectra. Any ideas?
 I can upload the mzXML file if it would help.

 Thanks,
 Tom







Re: [spctools-discuss] mzML to mgf, ms2....

2010-07-07 Thread Jimmy Eng
fwiw, MzXML2Search does support mzML input files too.

On Wed, Jul 7, 2010 at 11:24 AM, Jesse J firstblackphan...@gmail.com wrote:
 mzML is the new format that should be used, but I haven't seen any
 programs out there that will convert mzML into mgf and ms2, like
 MzXML2Search does with mzXML.

 With some help from my professor, I've expanded ms-msrun to support
 mzML and thus be able to convert mzML into mgf or ms2, but I'm
 curious: is there any other program out there that does this?







Re: [spctools-discuss] Re: TPP parsing error during Mascot Search result conversion to pepXML

2010-07-02 Thread Jimmy Eng
Ron,

Here's a compiled Mascot2XML.exe binary for you:

   http://tinyurl.com/24a8jna

Regarding the errors you're seeing yourself, I'd just type 'make' in
the 'src' directory in your msys/mingw window.  I don't know how to
target building just a specific binary but hopefully another developer
can chime in with instructions if possible.

- Jimmy

On Fri, Jul 2, 2010 at 10:13 AM, Ronald W. Finnegan r...@mail.nih.gov wrote:
 I downloaded the MascotConverter.cxx rev 4945 checked in by Jimmy on
 4-15-10 and copied it to this location within the TPP source
 distribution:  C:\TPP431src\tpp\src\Parsers\Algorithm2XML\Mascot2XML\

 I am getting these errors when I run the makefile command:

 lnt...@nimh-lntvm1 /c/tpp431src/tpp/src
 $ makefile mascot2xml
 ./makefile: line 52: /c/tpp431src/tpp/src/: is a directory
 ./makefile: line 53: include: command not found
 ./makefile: line 56: FASTAR_DIR: command not found
 ./makefile: line 57: syntax error near unexpected token `${LGPL},1'
 ./makefile: line 57: `ifeq (${LGPL},1) '

 can someone please provide a suggestion as to how i might get this to
 work?

 thanks

 Ron

 On Jul 1, 8:36 pm, Brian Pratt brian.pr...@insilicos.com wrote:
  Have a look at this:
  http://tools.proteomecenter.org/wiki/index.php?title=TPP:Building_Win...

 On Thu, Jul 1, 2010 at 3:00 PM, Ronald W. Finnegan r...@mail.nih.gov wrote:



  can you recommend a compiler that i can use to run makefile on a 64
  bit windows server?

  On Jul 1, 1:42 pm, Brian Pratt brian.pr...@insilicos.com wrote:
  You'll need to get the entire source tree from
  /sashimi/trunk/trans_proteomic_pipeline/ down, then cd to
  /sashimi/trunk/trans_proteomic_pipeline/src and make _Mascot2XML.

  On Thu, Jul 1, 2010 at 9:58 AM, Ronald W. Finnegan r...@mail.nih.gov 
  wrote:

   I have downloaded the MascotConverter.cxx source file.

   How do I compile the MascotConverter file so that the TPP source file
   dependencies are met?

   On Jul 1, 12:35 pm, Jimmy Eng jke...@gmail.com wrote:
   http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteom...

This just points you to the Mascot2XML source files (which are
dependent on other source files in the TPP project).

On Thu, Jul 1, 2010 at 6:22 AM, Ronald W. Finnegan 
r...@mail.nih.gov wrote:

 Jimmy,

 I am using the Mascot2XML from the last TPP version released
 9-9-2009.  Can you please point me to the trunk where I can download
 rev 4945 from your April 15, 2010 check-in?

 thanks

 Ron

 On Jun 30, 7:02 pm, Jimmy Eng jke...@gmail.com wrote:
 I have faint memories of dealing with this but I'm away from work 
 so
 can't confirm.  This might've been fixed in rev 4945 check-in on 
 April
 15th; my check-in log message for Mascot2XML was parsing update 
 for
 entry with protein n-term modification which doesn't seem to be
 handled by current code.

 Are you able to compile Mascot2XML from trunk and see if the 
 problem
 goes away?  Otherwise, I'll follow-up with this next week when I'm
 back at work.

 On Tue, Jun 29, 2010 at 5:59 AM, Ronald W. Finnegan 
 r...@mail.nih.gov wrote:

  Can anyone tell me if this bug reported back in April 2009 has 
  been
  fixed:

  Thermo Orbitrap RAW file is inputted into tpp - convert to 
  mzxml -
  convert to mgf - run mascot - convert to pepXML

  The problem is as follows:

  When I use the mascot default modification Acetyl (protein 
  N-term),
  the TPP converter that converts the mascot output .DAT file to 
  pepXML
  format creates the following header in the pepXML file:

   <aminoacid_modification aminoacid=" " mass="42.010559"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>
   <aminoacid_modification aminoacid="N" mass="156.053486"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>
   <aminoacid_modification aminoacid="-" mass="42.010559"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>
   <aminoacid_modification aminoacid="t" mass="42.010559"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>
   <aminoacid_modification aminoacid="e" mass="42.010559"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>
   <aminoacid_modification aminoacid="r" mass="42.010559"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>
   <aminoacid_modification aminoacid="m" mass="42.010559"
   massdiff="42.010559" peptide_terminus="n" variable="Y"/>

  If you look at this header closely, the entries are not only
  meaningless, but there is something funny to be seen if you look 
  at
  the aminoacid= entries:

   aminoacid=" "
   aminoacid="N"
   aminoacid="-"
   aminoacid="t"           <= reading down: "N-term"
   aminoacid="e"
   aminoacid="r"
   aminoacid="m"

   as far as I know the aminoacid property should indicate amino acid
   letters and not spell out "N-term". This is obviously a parsing
   error occurring somewhere during the conversion.

  Here

Re: [spctools-discuss] Re: Format of Decoy Databases

2010-07-01 Thread Jimmy Eng
yes, those are the right steps

On Thu, Jul 1, 2010 at 12:04 AM, Simon Michnowicz
simon.michnow...@gmail.com wrote:
 Jimmy,
 thanks. So just to clarify, the process should be:
 -take an existing sequence database
  -create duplicate entries with reversed or randomized amino acid
  sequences, modifying the Accession identifiers with a TAG prefix
  -specify TAG when calling xinteract with the -dtag option

  With Mascot, we have to format the reverse/random entries to have
  the same FASTA header line format, so that Mascot's regexes can parse
  them along with the original entries.

 regards
 Simon
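The decoy-generation step above can be sketched in Python. This is a minimal sketch: the `DECOY_` prefix and the simple FASTA parsing are illustrative assumptions (any consistent tag absent from the target entries works, as Jimmy notes), and descriptions after the accession are dropped for brevity.

```python
def make_reverse_decoys(fasta_text, prefix="DECOY_"):
    """Return target plus reversed-decoy FASTA entries as one string.

    Sequences may wrap across lines; descriptions after the accession
    are dropped for brevity in this sketch.
    """
    entries = []
    for block in fasta_text.strip().split(">"):
        if not block.strip():
            continue
        header, _, seq = block.partition("\n")
        accession = header.split()[0]
        entries.append((accession, seq.replace("\n", "").strip()))

    out = []
    for acc, seq in entries:
        out.append(">%s\n%s" % (acc, seq))                  # target entry
        out.append(">%s%s\n%s" % (prefix, acc, seq[::-1]))  # reversed decoy
    return "\n".join(out) + "\n"
```

The prefix is the value you would then pass to xinteract's -dtag option.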


 On Jun 30, 12:37 pm, Jimmy Eng jke...@gmail.com wrote:
 You can use any prefix as long as it's consistent (and obviously
 unique and not present in the target database entries).  In the tools,
 you will specify what prefix denotes the decoy entries.  Like REV or
 ### or ###REV### ...

 On Tue, Jun 29, 2010 at 7:23 PM, Simon Michnowicz



 simon.michnow...@gmail.com wrote:
  Dear Group,
  I am trying to create a decoy database for use in the TPP pipeline.
  I downloaded a Perl  Script from Matrix Science, and noticed it put
  entries like :
  ###REV###
  ###RND###
   in Accession identifiers. However, whilst looking at ISB slides I
   noticed that the Accession identifiers are mentioned to be of the
   form “REV0” or “REV1”, i.e. prefixed with ‘REV’.

   Is placing ‘REV’ at the start of the protein name the standard way
   of formatting Accession identifiers for use with the ISB tools (i.e.
   ProteinProphet)?

  thanks
  Simon Michnowicz








Re: [spctools-discuss] error message from PeptideProphet

2010-06-29 Thread Jimmy Eng
I would be a little concerned by "my files were transferred as xml
after mascot searching was done."  What generated the xml files, and in
what format are they?  I believe that Mascot-exported pep.xml files
won't run through the TPP correctly (but I could be wrong).

The only advice I can give you is to follow these steps to get
Mascot data analyzed through the TPP:
- take your data and convert to mzXML
- generate mgf files from that mzXML
- search the mgf files through Mascot
- bring back the Mascot search results in .dat format
- rename the .dat files to the same base name as mzXML
- convert the .dat to pep.xml
- use those pep.xml files as input to the PeptideProphet

Any deviations from the above steps will only cause problems.
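A quick way to verify the rename step above is to check that every .dat base name has a matching mzXML base name, since the TPP tools pair files that way. A sketch, with hypothetical file names:

```python
from pathlib import Path

def mismatched_basenames(dat_files, mzxml_files):
    """Return .dat base names that have no matching .mzXML file.

    TPP tools pair files by base name, so 'run1.dat' is expected to
    sit next to 'run1.mzXML'.
    """
    mzxml_bases = {Path(f).stem for f in mzxml_files}
    return sorted(Path(f).stem for f in dat_files
                  if Path(f).stem not in mzxml_bases)
```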

On Mon, Jun 28, 2010 at 6:43 AM, chunchao zc...@hotmail.com wrote:
 It always gives me an error message when I run PeptideProphet. My files
 were transferred as xml after mascot searching was done. The error
 messages are as follows. Could somebody tell me what's going on? Thank
 you in advance!

 # Commands for session HMHX2UYYT on Wed May 26 12:03:21 2010
 # BEGIN COMMAND BLOCK
 ## BEGIN Command Execution ##
 [Wed May 26 12:03:21 2010] EXECUTING: run_in c:/Inetpub/wwwroot/ISB/
 data; c:\Inetpub\tpp-bin\xinteract  -Ninteract.pep.xml -p0.05 -l7 -O
 c:/Inetpub/wwwroot/ISB/data/C1_trypsin_1.xml c:/Inetpub/wwwroot/ISB/
 data/C1_trypsin_2.xml c:/Inetpub/wwwroot/ISB/data/C1_trypsin_3.xml c:/
 Inetpub/wwwroot/ISB/data/C2_trypsin_1.xml
 OUTPUT:

 c:\Inetpub\tpp-bin\xinteract (TPP v4.3 JETSTREAM rev 0, Build
 200908071234 (MinGW))

 running: C:/Inetpub/tpp-bin/InteractParser interact.pep.xml c:/
 Inetpub/wwwroot/ISB/data/C1_trypsin_1.xml c:/Inetpub/wwwroot/ISB/
 data/C1_trypsin_2.xml c:/Inetpub/wwwroot/ISB/data/C1_trypsin_3.xml
 c:/Inetpub/wwwroot/ISB/data/C2_trypsin_1.xml -L7
  file 1: c:/Inetpub/wwwroot/ISB/data/C1_trypsin_1.xml
  file 2: c:/Inetpub/wwwroot/ISB/data/C1_trypsin_2.xml
  file 3: c:/Inetpub/wwwroot/ISB/data/C1_trypsin_3.xml
  file 4: c:/Inetpub/wwwroot/ISB/data/C2_trypsin_1.xml
  processed altogether 8267 results


  results written to file c:/Inetpub/wwwroot/ISB/data/interact.pep.xml

  direct your browser to http://localhost/ISB/data/interact.pep.shtml



 command completed in 18 sec

 running: C:/Inetpub/tpp-bin/PeptideProphetParser interact.pep.xml
 MINPROB=0.05
  (MASCOT)
 error: -1.0 homology score

 command C:/Inetpub/tpp-bin/PeptideProphetParser interact.pep.xml
 MINPROB=0.05 failed: Operation not permitted

 command C:/Inetpub/tpp-bin/PeptideProphetParser interact.pep.xml
 MINPROB=0.05 exited with non-zero exit code: 1
 QUIT - the job is incomplete

 command c:\Inetpub\tpp-bin\xinteract -Ninteract.pep.xml -p0.05 -l7 -O
 c:/Inetpub/wwwroot/ISB/data/C1_trypsin_1.xml c:/Inetpub/wwwroot/ISB/
 data/C1_trypsin_2.xml c:/Inetpub/wwwroot/ISB/data/C1_trypsin_3.xml c:/
 Inetpub/wwwroot/ISB/data/C2_trypsin_1.xml failed: Operation not
 permitted
 END OUTPUT
 RETURN CODE:256
 ## End Command Execution ##
 # All finished at Wed May 26 12:03:42 2010
 # END COMMAND BLOCK







Re: [spctools-discuss] Question on Mascot.dat file and database.fasta file

2010-06-29 Thread Jimmy Eng
There's no surprise that generating mgf files using other tools can
cause Mascot to perform (much) better as those tools apply things like
peak picking which MzXML2Search doesn't do.  MzXML2Search pretty much
just takes the input spectral data and writes it back out into the
chosen output format.

Anyways, what you describe below includes two issues:  lower number of
identifications and max ions cutoff.  There's no real near term
solution to directly address the first issue.  That fix would entail
some developer to spend time implementing a peak picking routine in
the tool that's validated to work well with Mascot.  The second issue
can be mitigated by using the '-Nnum' option in MzXML2Search.  That
command line option specifies the maximum peak count to export for any
given spectrum.  Use a command like the following:

   MzXML2Search -mgf -N100 input.mzXML

This will cause only the 100 most intense m/z values for each spectrum
to be printed out.  I just use 100 as an example.  Because this does
reduce the peak count, it will have some effect on the resulting
Mascot identifications.  And I'm sure there has to be some peak count
value that will give you optimal number of identifications; whether or
not that number of identifications approaches what you get by PLGS
data export is unknown though.
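The effect of '-Nnum' can be illustrated with a few lines of Python; this is a sketch of the idea, not MzXML2Search's actual code:

```python
def top_n_peaks(peaks, n):
    """peaks: list of (mz, intensity) pairs; keep the n most intense."""
    kept = sorted(peaks, key=lambda p: p[1], reverse=True)[:n]
    return sorted(kept)  # back in ascending m/z order
```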

If you're motivated to do so, I would suggest that you generate
SSM411.mgf  using various peak counts (50, 100, 150, 200, 400, etc.)
and run them through Mascot to see which gives the most
identifications and see if the results approach the PLGS results.

- Jimmy

On Tue, Jun 29, 2010 at 4:12 PM, Jing Wang fayfay9...@gmail.com wrote:
 Hi Brian, Jimmy, David,

 Thanks for all the suggestions!

 I did all you suggested: generated .mgf file, re-searched by Mascot, renamed
 .dat file..., and they all worked for the file I have been trying
 (SSM411, as Jimmy pointed out) so far. But, when I tried the different mzXML
 files (since SSM411 is just one of the fractions), I was stuck on Mascot
 searching. I have tried another 4 different mzXML files; they all gave me
 similar error messages:

 Max number of ions is 1. Ignoring ms-ms set starting at line 139813
 [M00031]
 Your search is continuing...
 Warning:
 .(similar warnings with different line numbers)
 ...
 ...
 Max number of ions is 1. Ignoring ms-ms set starting at line 154038
 [M00031]
 Your search is continuing...
 Warning:

 Your search is continuing...

 Sorry, your search could not be performed due to the following mistake
 entering data.
 Missing ion intensity value on line 3857088 of input file [M00430]
 Please press the back button on your browser, correct the fault and retry
 the search.

 Another problem: although the SSM411.mgf file worked for the Mascot search,
 the result is a bit different from what I got earlier when searching the pkl
 files generated by PLGS (Waters). The result from the .mgf (converted by
 MzXML2Search) gives 33 identified proteins, while the one from the .pkl
 (converted by PLGS) gives 39 identified proteins. The result from the .mgf
 also gives a lower number of peptide matches above the identity threshold
 compared to the result from the .pkl file (104 vs. 135).
 I also tried to generate a .pkl file with the MzXML2Search command just out
 of curiosity. It gives a very similar result compared to the search from the
 .mgf file; the only difference is 103 peptide matches above the identity
 threshold instead of 104. The search from the .pkl file (converted by PLGS)
 didn't give any warning message during the searching process, while the .mgf
 and .pkl (converted by MzXML2Search) gave similar warning messages as
 follows:

 Max number of ions is 1. Ignoring ms-ms set starting at line 275395
 [M00031]
 Your search is continuing...

 ... (similar warnings with different line numbers)

 ...

 Your search is continuing...
 Finished uploading search details and file...
 Searching
 Warning:
 Error 31 has been detected 26 times and only the first 10 messages have been
 output [M00999]
 Your search is continuing...

 .20% complete
 ..50% complete

 Any suggestions for fixing?

 Thanks in advance,

 Jing












Re: [spctools-discuss] Libra condition file for TMT6 labels?

2010-06-10 Thread Jimmy Eng
here's one generated using default settings in the Libra condition
file generator that's available in the TPP GUI interface:
http://bit.ly/c7dsrO


On Thu, Jun 10, 2010 at 5:56 AM, hillan...@verizon.net
hillan...@verizon.net wrote:
 List,

 Can anyone point me to a Libra condition.xml file for TMT6plex
 labels?  The TPP 4.3.0 release notes seem to indicate that such a file
 might exist, but after searching spctools-discuss archives and the TPP
 4.3.0 src/Quantitation/Libra directory I failed to find it.

 Thanks for any pointers,
 Andrew.



 --
 You received this message because you are subscribed to the Google Groups 
 spctools-discuss group.
 To post to this group, send email to spctools-disc...@googlegroups.com.
 To unsubscribe from this group, send email to 
 spctools-discuss+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/spctools-discuss?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
spctools-discuss group.
To post to this group, send email to spctools-disc...@googlegroups.com.
To unsubscribe from this group, send email to 
spctools-discuss+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/spctools-discuss?hl=en.



Re: [spctools-discuss] Re: MzXML2Search problem (Agilent msconvert-ed data)

2010-06-10 Thread Jimmy Eng
I just changed it to a really big value.  (99999 was considered big many
years ago!)

On Thu, Jun 10, 2010 at 9:49 AM, Matthew Chambers
matthew.chamb...@vanderbilt.edu wrote:
 Unless there are indeed downstream problems, it would be good to change that
 default upper scan number since others are running into the problem and it's
 a pretty obscure source of error.

 Dave, the peak filtering has been in msconvert for several months, so
 presumably users can now convert straight from MassHunter to MGF with
 msconvert. The remaining issue is if one wanted to convert to mzXML instead
 with both MS1 and MS2 scans in the file. If one only wanted to filter MS2s
 (like you certainly would want to do if you kept MS1s in profile) then
 that's not currently possible. It's a bit of a command-line syntax quagmire
 that we need to address in our tools.

 -Matt

 On 11/17/2009 12:52 PM, Brian Pratt wrote:

 Actually it will write 6 digit scan numbers as is.  The formatting just
 specifies that it won't write any fewer than 5 digits.
 I think the only code change I would want to make is to set that default
 upper limit to the largest possible integer value, but as there's a
 workaround I don't think we need to worry about it.
 And yeah, who knows what happens downstream.  My intuition is that it
 won't be a problem, actually, but that's only a guess.
 Brian
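Brian's point, that the format sets a minimum width of 5 digits rather than a maximum, is easy to demonstrate in Python:

```python
# '%05d'-style padding sets a minimum field width, not a maximum:
assert "%05d" % 42 == "00042"       # short numbers are zero-padded to 5 digits
assert "%05d" % 123456 == "123456"  # 6-digit numbers pass through unchanged
```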

  On Tue, Nov 17, 2009 at 8:44 AM, Dave Trudgian dct...@ccmp.ox.ac.uk wrote:


    Brian,

    I've just had a second look and found the reason the conversion
     stops at scan 99999:

     The default upper scan number in the options struct is set to 99999:

    MzXML2Search.cxx:117
     iLastScan = 99999;

    Then just before the main loop it's used to set the value used to
    terminate the main loop:

    MzXML2Search.cxx:344
      if (iAnalysisLastScan > options.iLastScan)
           iAnalysisLastScan = options.iLastScan;


    If the option default is changed to a higher value then the
     outputMGF(..) function would need to be changed to write more than
     5-digit scan numbers. Don't know whether this would have an effect
     downstream :-(

    If you would still like the mzML file as an example let me know
    where I
    can upload it. It's 1.5GB gzipped.

    Cheers,

    DT


    Brian Pratt wrote:
     Returning to the behavior of mzXML2Search, just eyeballing the
     code I don't see any reason it should fail at scan numbers >
     99999.  Perhaps the problem is actually downstream from
    mzXML2Search?  While mzXML2Search should quite happily emit 6
    digit MGF scan numbers, I can imagine a consumer of MGF might not
    see that coming.
    
     Perhaps you could furnish an example of the msconvert output
    that's giving you trouble?  Eyeballing the code only gets one so
    far, it's ideal to see it actually running in the debugger.
    
     Brian Pratt
    
      On Tue, Nov 17, 2009 at 6:33 AM, Dave Trudgian
     dct...@ccmp.ox.ac.uk wrote:
    
     Matt,
    
     We just apply a (low) absolute threshold. Different values for
    different
     instruments, but it's most critical on the Agilent and Water's
    QTOFs, as
     without any threshold there are 1000s of peaks.
    
     Look forward to seeing the filtering in msconvert.
    
     DT
    
    
     Matt Chambers wrote:
     What kind of thresholding do you do? Enabling that in msconvert is
     overdue - the backend code to support it is already in place.
    
     MzXML2Search is failing because it depends on a strict DTA name
    scheme
     with 5 digits. This is going to break with long LTQ Velos runs
    so it
     needs to be fixed regardless of the Agilent scanId issue. It's
    ironic
     that a Thermo file is the one to break the Thermo-centric
    assumptions. :P
    
     -Matt
    
    
     dctrud wrote:
     MzXML2Search conversion to mgf fails for me on mzXML / mzML
    created
     using msconvert from Agilent 6520 QTOF data.
    
     Trapper numbers spectra using an index starting at 1, whilst
    msconvert
     uses the Agilent scan ID (can be a very large number).
    MzXML2Search
     conversion of the resulting file into mgf then fails once it
    reaches a
      scan ID > 99,999.
    
     Have seen similar problems with other programs and contacted Matt
     Chambers who said that the numbering would stay the same, and that
     it's better if programs which can't cope with the large
    numbers are
     fixed. Is this possible (desirable?) for MzXML2Search and any
    other
     TPP tools that might be affected?
    
     I can't use Trapper as I need to extract Profile MS + Centroid
    MS/MS
     from a dual mode file (msconvert supported). I can't directly
    convert
     to mgf with msconvert as I need to do peak thresholding to get
    file
     sizes down to a reasonable level.
    
     Cheers,
    
     DT
    
    



Re: [spctools-discuss] Trouble Running Tandem2XML

2010-05-05 Thread Jimmy Eng
It's not an error but rather a warning.  The warning by itself doesn't
indicate a critical problem (although I suspect you might have other
problems).  Take a look at your pep.xml file and see if it looks like
it has reasonable content in it.

Some things to note:
- your tandem input file should be the Tandem search results xml and
not be an .mzXML file (which is the raw mass spec data)
- use .pep.xml as the extension to the created pepXML file to stick
with the conventions that the TPP tools expect
- ideally your Tandem search used the mzXML file as input otherwise
you might have compatibility issues with tools in the TPP which expect
inputs to follow certain conventions
- ignore last two bullet points if you're not using rest of TPP tools
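The naming conventions in the bullets above can be summarized in a small helper. The extensions come from the notes; the function itself is a hypothetical sketch:

```python
def tpp_filenames(base):
    """Conventional TPP file names for one run, given its base name."""
    return {
        "spectra": base + ".mzXML",     # raw mass spec data
        "search":  base + ".xtan.xml",  # X!Tandem search results
        "pepxml":  base + ".pep.xml",   # converted pepXML
    }
```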

On Wed, May 5, 2010 at 1:20 PM, Jesse J firstblackphan...@gmail.com wrote:
 I'm trying to run Tandem2XML to convert an X!Tandem output file to
 pepXML, but I get an error every time I try to run it.

 When I run: Tandem2XML /full/path/name/to/tandem_output.mzXML /full/
 path/name/to/test.pepXML
 I get the errors:
 WARNING: Failed to open mzXML file.
         Output will not contain retention times.

 I tried searching for any sort of documentation on Tandem2XML but I
 can't find any, so I can't figure out what the problem even is.

 Notes: If it's of any importance, I'm using Ubuntu and running the
 program under wine. Also, I'm unsure of what the file extension for
 the X!Tandem output is, so I simply gave it the mzXML extension.







Re: [spctools-discuss] Trouble Running Tandem2XML

2010-05-05 Thread Jimmy Eng
oops - I just read your last note.  Name your Tandem search results with
an extension of .xtan.xml or just .xml; using .mzXML for those is asking
for trouble.

On Wed, May 5, 2010 at 2:43 PM, Jimmy Eng jke...@gmail.com wrote:
 It's not an error but rather a warning.  The warning by itself doesn't
 indicate a critical problem (although I suspect you might have other
 problems).  Take a look at your pep.xml file and see if it looks like
 it has reasonable content in it.

 Some things to note:
 - your tandem input file should be the Tandem search results xml and
 not be an .mzXML file (which is the raw mass spec data)
 - use .pep.xml as the extension to the created pepXML file to stick
 with the conventions that the TPP tools expect
 - ideally your Tandem search used the mzXML file as input otherwise
 you might have compatibility issues with tools in the TPP which expect
 inputs to follow certain conventions
 - ignore last two bullet points if you're not using rest of TPP tools

 On Wed, May 5, 2010 at 1:20 PM, Jesse J firstblackphan...@gmail.com wrote:
 I'm trying to run Tandem2XML to convert an X!Tandem output file to
 pepXML, but I get an error every time I try to run it.

 When I run: Tandem2XML /full/path/name/to/tandem_output.mzXML /full/
 path/name/to/test.pepXML
 I get the errors:
 WARNING: Failed to open mzXML file.
         Output will not contain retention times.

 I tried searching for any sort of documentation on Tandem2XML but I
 can't find any, so I can't figure out what the problem even is.

 Notes: If it's of any importance, I'm using Ubuntu and running the
 program under wine. Also, I'm unsure of what the file extension for
 the X!Tandem output is, so I simply gave it the mzXML extension.








Re: [spctools-discuss] Re: Trouble Running Tandem2XML

2010-05-05 Thread Jimmy Eng
The Tandem2XML program will try to read each scan's retention time
from the mzXML and add it to the resulting pep.xml.  If it can't
find the mzXML file, it issues the warning and leaves out the
retention time.

This warning does imply that the standard TPP convention of expected
files and filenames in expected locations isn't being followed.  Not a
horrible thing in itself but there's a decent chance of experiencing
some other problems downstream if this is the case.
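The retention-time lookup Tandem2XML performs can be approximated in Python: read each scan's retentionTime attribute from the mzXML. This is a sketch against a minimal, hypothetical fragment; real mzXML files carry namespaces and many more attributes, which is why the tag match below is by suffix:

```python
import xml.etree.ElementTree as ET

def scan_retention_times(mzxml_text):
    """Map scan number -> retentionTime string from mzXML <scan> elements."""
    root = ET.fromstring(mzxml_text)
    return {s.get("num"): s.get("retentionTime")
            for s in root.iter() if s.tag.endswith("scan")}
```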

On Wed, May 5, 2010 at 3:28 PM, Jesse J firstblackphan...@gmail.com wrote:
 Oh, whoops, silly me. Since it said that it couldn't open the file, I
 assumed it never created the pepXML file. Thank you for those points
 though.

 So then what is it upset about? Is it trying to open the mzXML file in
 addition to the output xml file?







Re: [spctools-discuss] Software question about ReAdW

2010-04-20 Thread Jimmy Eng
No, ReAdW does not have that functionality nor does any other software
that I'm aware of.

On Tue, Apr 20, 2010 at 11:44 AM,  cfourn...@wesleyan.edu wrote:
 Hi
 Is it possible to use ReAdW from SPC to convert mzXML files back to the
 original .RAW files? If not, does anyone know if this is possible? Thank
 you!







Re: [spctools-discuss] Re: Malformed XML generated by Peptide Prophet

2010-04-18 Thread Jimmy Eng
you can get trunk by subversion checkout:

  svn co 
https://sashimi.svn.sourceforge.net/svnroot/sashimi/trunk/trans_proteomic_pipeline
tpp-trunk

or just visit the URL below and select 'Download GNU tarball':

  
http://sashimi.svn.sourceforge.net/viewvc/sashimi/trunk/trans_proteomic_pipeline/


On Sun, Apr 18, 2010 at 7:49 PM, Simon Michnowicz
simon.michnow...@gmail.com wrote:
 Brian
 that is great news! We have a user who is stumped on this, so I'd like
 to get access to the trunk code.
 I can not see it from http://sourceforge.net/projects/sashimi/files/
 Is there a way to get access to it?

 Thanks
 Simon Michnowicz

 On Apr 17, 2:30 am, Brian Pratt brian.pr...@insilicos.com wrote:
 I actually fixed this back in February, but it's in the current (trunk)
 code and not the 4.3 release.

 Brian

 On Thu, Apr 15, 2010 at 5:34 PM, Simon Michnowicz 





 simon.michnow...@gmail.com wrote:
  Brian
  my apologies, the pepXML file was indeed broken. Mascot2XML took a
  FASTA entry like
  tr|A0N7J4|Vbeta6-N-Dbeta-N-Jbeta1.1 protein (Fragment)  Tax_Id=9606
  [Homo sapiens]

  Into
    <search_hit hit_rank="1" peptide="CASSSDR" peptide_prev_aa="-"
  peptide_next_aa="A" protein="tr|A0N7J4|Vbeta6-N-Dbeta-N-Jbeta1.1"
  num_tot_proteins="1" num_matched_ions="3"
  tot_num_ions="12" calc_neutral_pep_mass="926.2242" massdiff="-0.3787"
  num_tol_term="2" num_missed_cleavages="0" calc_pI="5.83"
  is_rejected="0">

  Which breaks the pepXML viewer, and I imagine xinteract.

  We have a version of Mascot2XML from TPP v4.3. Is there a more recent
  version that might have a fix?

  thanks
  Simon Michnowicz

  On Apr 16, 1:09 am, Brian Pratt brian.pr...@insilicos.com wrote:
   The defect is really somewhere upstream - whatever created the pepxml
  files
   that went into xinteract should have escaped those characters.
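Brian's point, in a hedged Python sketch: whatever writes pepXML should escape reserved characters before they land in an attribute value. The description string below is modeled on the problem entry in this thread:

```python
# XML attribute values must have '<', '>' and '&' escaped; a raw <gamma>
# in a protein description is exactly what breaks downstream parsers.
from xml.sax.saxutils import quoteattr

descr = 'homolog protein (Fragment) <gamma> Tax_Id=10118 [Rattus sp]'
print('<alternative_protein protein_descr=%s/>' % quoteattr(descr))
# quoteattr emits ...&lt;gamma&gt;..., keeping the element well-formed.
```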

   On Wed, Apr 14, 2010 at 11:25 PM, Simon Michnowicz 

   simon.michnow...@gmail.com wrote:
Dear Group,
I am running a Peptide Prophet search but xinteract breaks. The error
message is :
Syntax error parsing XML.not well-formed (invalid token)

The offending line (in a peptide-xml file) is

 <alternative_protein protein="tr|Q78E99|MRXR<gamma>"
 protein_descr="homolog protein (Fragment)  Tax_Id=10118 [Rattus sp]"
 num_tol_term="2" peptide_prev_aa="K" peptide_next_aa="D"/>

 Should the gamma entry be escaped as &lt;gamma&gt;?
Is there an easy fix to this problem?
Thanks
Simon Michnowicz










Re: [spctools-discuss] MS1 quantification improvments?

2010-04-07 Thread Jimmy Eng
Sadly the need for manual inspection isn't going to go away soon but
things will get better with peak picking.  I'll look into adding using
median instead of mean as a user option.
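To illustrate why a median option helps (with invented peptide ratios, not real data):

```python
# One bad peptide ratio skews the mean of a protein's ratios; the median
# is robust to it. These numbers are made up for illustration only.
from statistics import mean, median

peptide_ratios = [0.95, 1.02, 1.10, 0.98, 8.50]  # last one is an outlier

print("mean:   %.2f" % mean(peptide_ratios))    # dragged up by the outlier
print("median: %.2f" % median(peptide_ratios))  # stays near 1
```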

On Wed, Apr 7, 2010 at 12:44 AM, Oded oded.kleif...@gmail.com wrote:
 Hi All,
 Are there any planned improvements or development of new tools for MS1
 quantification? I use Xpress and it works pretty well for me but still
 requires a bit of manual inspection. As for ASAPratio, I find it to be
 usually off and to require a lot (too much) of manual inspection and
 correction (which is also not too user-friendly but this is a
 different story).
 More specifically, it would be great to have the ability for Xpress
 to calculate protein ratios using the median of the identified
 peptide ratios rather than the mean (as in MaxQuant). I tried to use Excel
 for such calculation after exporting ProteinProphet output (while
 checking the show peptides option) but currently only the mean ratio
 is being exported and not the specific ratio of each peptide within a
 protein.

 Thanks,
 Oded







Re: [spctools-discuss] How does X!tandem handle the precursor charge in low resolution data

2010-04-02 Thread Jimmy Eng
When a precursor charge is not known, the logic that is used to
analyze the data is as follows.  If there are no peaks above the
precursor m/z then the spectrum is assumed to be 1+ and analyzed as
such.  If there are peaks above the precursor m/z then the precursor
ion is assumed to be multiply charged.  At this point, although the
precursor charge state could be 2+, 3+, 4+ or higher,  it is searched
as 2+ and 3+.

Note that only correctly presumed charge states can result in 'right'
identifications (obviously).  This also means that there are likely
some 4+ and higher precursor ions that are identified 'wrong' because
that precursor charge is never analyzed.  It's all a tradeoff (in
analysis complexity and search times) and for tryptic peptides it's a
generally reasonable one.
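The heuristic above can be sketched as follows (illustrative Python, not the actual X!Tandem code):

```python
# Charge assignment for spectra with unknown precursor charge:
# no fragment peaks above the precursor m/z -> assume 1+;
# otherwise treat as multiply charged and search as both 2+ and 3+.
def charges_to_search(precursor_mz, peaks):
    """peaks is a list of (mz, intensity) pairs."""
    if any(mz > precursor_mz for mz, _ in peaks):
        return [2, 3]  # 4+ and higher are never tried; that's the tradeoff
    return [1]

print(charges_to_search(500.0, [(300.0, 10.0), (750.0, 4.0)]))  # [2, 3]
print(charges_to_search(500.0, [(300.0, 10.0), (450.0, 4.0)]))  # [1]
```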

On Thu, Apr 1, 2010 at 4:41 PM, yinyin li liyinyin1...@gmail.com wrote:
 Hello:
 I collected some data from LTQ XP and converted them to mzXML by ReAdW.
 I noticed there is no precursor charge information, unlike the data from LTQ
 Orbitrap.
 Nevertheless, it seems X!tandem can still get it right. So i become very
 curious how it handles the charge.
 Does anyone know?
 Thanks!
 best
 yinyin






Re: [spctools-discuss] problem to removing spectra from mzXML and to use indexmzXML afterwards

2010-03-23 Thread Jimmy Eng
Andreas,

I don't believe Brian or any other helpful external person can access
the files when you place it in ISB's ftp site.

Anyways, the ramp parser is frail and finicky and expects an mzXML to
be formatted just the way it expects.  Your modified mzXML has a lot
of formatting changes.  The biggest difference I notice is the base64
encoded peak list normally begins right after the closing '>' of the
<peaks ...> tag, but your new file has a newline and whitespace between
that closing '>' and the start of the base64.

I can see why MzXML2Search might die because of this change but
interestingly readmzXML, which uses the same ramp parser, seems fine.  I'm
spending a little time on this just 'cause I find it interesting ...
maybe the files can be posted to a more accessible area if others wish
to look too.

- Jimmy

On Tue, Mar 23, 2010 at 1:57 PM, Andreas Quandt
quandt.andr...@gmail.com wrote:
 hey brian,

 nice to hear from you too and also thanks for picking up on this :-)
 according to xmlwf the modified.mzXML is valid.
 i uploaded both, original.mzXML and modified.mzXML to
 ftp.systemsbiology.net/incoming.
 it would be great if you guys could spare some time to help me with this as
 i am completely clueless about what i am doing wrong.

 cheers,
 andreas

 On Tue, Mar 23, 2010 at 9:25 PM, Brian Pratt brian.pr...@insilicos.com
 wrote:

 Does the file pass general XML validation?  That is, did you possibly
 damage it structurally?

 It's hard to discuss these things without looking at the actual file, can
 you upload to the group's files area?

 Brian

 On Tue, Mar 23, 2010 at 1:20 PM, Andreas Quandt quandt.andr...@gmail.com
 wrote:

 hey luis,

 nice to hear from you and many thanks for your fast answer!
 unfortunately this does not do the trick :-(

 i renumbered the scans starting from 1 (for ms1 and ms2 in ascending
 order),
 ran indexmzXML on the file (successfully) and
 tried 'MzXML2Search -mgf' afterwards.
  and then MzXML2Search fails with a segmentation fault:

 $ /usr/local/apps/tpp/bin/MzXML2Search -mgf  modified.mzXML

 output mode selected: Mascot Generic Format
  MzXML2Search - Mascot Generic Format

  Reading modified.mzXML
  Getting the index offset
  Reading the index
  scan:    34  000%Segmentation fault

 Do you have any further ideas?

 cheers,
 andreas





 On Tue, Mar 23, 2010 at 5:49 PM, Luis Mendoza
 lmend...@systemsbiology.org wrote:

 Hi Andreas,
 You may need to re-number the indices so that the scans start at 1
  and have no gaps between subsequent ones, as well as adjust the total scan
 count in the header of the file.  Then apply the re-indexing.

 Hope this helps,
 --Luis



 On Tue, Mar 23, 2010 at 9:46 AM, Andreas Quandt
 quandt.andr...@gmail.com wrote:
  dear list,
 
  i modified some of my mzXML files by removing ms2 spectra which did
  not
  fulfill certain criteria.
  afterwards i tried to analyze them via xtandem but got error messages
  like
  might be a corrupted file.
  to overcome this problem i used indexmzXML which corrected the index
  by
  generating a new mzXML file.
  unfortunately this did not solve the problem as neither xtandem nor
   mzxml2search were accepting the modified mzXML files as correct
  input.
 
  hence, i was wondering if there is either a better way to remove my
  ms2
  scans from the file or if i am using indexmzXML not in a proper way?
 
  cheers,
  andreas
 
 






Re: [spctools-discuss] problem to removing spectra from mzXML and to use indexmzXML afterwards

2010-03-23 Thread Jimmy Eng
Andreas,

When I modified your file to get rid of all whitespace between the
<peaks> tags, the MzXML2Search program subsequently ran fine.

change:
   <peaks precision="32" byteOrder="network" pairOrder="m/z-int">
  Q0BLmUBez4BDSRBhQQDRN0NN...w==
  </peaks>

to:
   <peaks precision="32" byteOrder="network" pairOrder="m/z-int">Q0BLmUBez4BDSRBhQQDRN0NN...w==</peaks>

and then re-index.
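The same edit can be automated; here is a hedged Python sketch (the regex and the sample snippet are illustrative, not the actual fix applied by hand):

```python
# Collapse whitespace between a <peaks ...> opening tag, its base64
# payload, and </peaks>, so the RAMP parser sees the payload immediately
# after the closing '>'.
import re

def tighten_peaks(mzxml_text):
    # join "<peaks ...>\n  BASE64\n  </peaks>" into one line
    return re.sub(
        r"(<peaks\b[^>]*>)\s*(\S+)\s*(</peaks>)",
        r"\1\2\3",
        mzxml_text,
    )

sample = '<peaks precision="32">\n  Q0BLmUBez4A=\n  </peaks>'
print(tighten_peaks(sample))
# -> <peaks precision="32">Q0BLmUBez4A=</peaks>
```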


On Tue, Mar 23, 2010 at 2:29 PM, Jimmy Eng jke...@gmail.com wrote:
 Andreas,

 I don't believe Brian or any other helpful external person can access
 the files when you place it in ISB's ftp site.

 Anyways, the ramp parser is frail and finicky and expects an mzXML to
 be formatted just the way it expects.  Your modified mzXML has a lot
 of formatting changes.  The biggest difference I notice is the base64
 encoded peak list normally begins right after the closing '>' of the
 <peaks ...> tag, but your new file has a newline and whitespace between
 that closing '>' and the start of the base64.

 I can see why MzXML2Search might die because of this change but
 interestingly readmzXML, which uses the same ramp parser, seems fine.  I'm
 spending a little time on this just 'cause I find it interesting ...
 maybe the files can be posted to a more accessible area if others wish
 to look too.

 - Jimmy

 On Tue, Mar 23, 2010 at 1:57 PM, Andreas Quandt
 quandt.andr...@gmail.com wrote:
 hey brian,

 nice to hear from you too and also thanks for picking up on this :-)
 according to xmlwf the modified.mzXML is valid.
 i uploaded both, original.mzXML and modified.mzXML to
 ftp.systemsbiology.net/incoming.
 it would be great if you guys could spare some time to help me with this as
 i am completely clueless about what i am doing wrong.

 cheers,
 andreas

 On Tue, Mar 23, 2010 at 9:25 PM, Brian Pratt brian.pr...@insilicos.com
 wrote:

 Does the file pass general XML validation?  That is, did you possibly
 damage it structurally?

 It's hard to discuss these things without looking at the actual file, can
 you upload to the group's files area?

 Brian

 On Tue, Mar 23, 2010 at 1:20 PM, Andreas Quandt quandt.andr...@gmail.com
 wrote:

 hey luis,

 nice to hear from you and many thanks for your fast answer!
 unfortunately this does not do the trick :-(

 i renumbered the scans starting from 1 (for ms1 and ms2 in ascending
 order),
 ran indexmzXML on the file (successfully) and
 tried 'MzXML2Search -mgf' afterwards.
  and then MzXML2Search fails with a segmentation fault:

 $ /usr/local/apps/tpp/bin/MzXML2Search -mgf  modified.mzXML

 output mode selected: Mascot Generic Format
  MzXML2Search - Mascot Generic Format

  Reading modified.mzXML
  Getting the index offset
  Reading the index
  scan:    34  000%Segmentation fault

 Do you have any further ideas?

 cheers,
 andreas





 On Tue, Mar 23, 2010 at 5:49 PM, Luis Mendoza
 lmend...@systemsbiology.org wrote:

 Hi Andreas,
 You may need to re-number the indices so that the scans start at 1
  and have no gaps between subsequent ones, as well as adjust the total scan
 count in the header of the file.  Then apply the re-indexing.

 Hope this helps,
 --Luis



 On Tue, Mar 23, 2010 at 9:46 AM, Andreas Quandt
 quandt.andr...@gmail.com wrote:
  dear list,
 
  i modified some of my mzXML files by removing ms2 spectra which did
  not
  fulfill certain criteria.
  afterwards i tried to analyze them via xtandem but got error messages
  like
  might be a corrupted file.
  to overcome this problem i used indexmzXML which corrected the index
  by
  generating a new mzXML file.
  unfortunately this did not solve the problem as neither xtandem nor
   mzxml2search were accepting the modified mzXML files as correct
  input.
 
  hence, i was wondering if there is either a better way to remove my
  ms2
  scans from the file or if i am using indexmzXML not in a proper way?
 
  cheers,
  andreas
 
 





Re: [spctools-discuss] question to pep_dbcount and digestdb

2010-03-15 Thread Jimmy Eng
Here are the output columns.  I just updated the source files to
have the usage statement print these out for digestdb and make it
clearer for pep_dbcount.

digestdb output columns:
- peptide length
- protein reference
- peptide mass
- previous amino acid before peptide
- peptide sequence
- next amino acid after peptide
- peptide start location
- peptide end location
- pI

pep_dbcount output columns:
- peptide
- # proteins peptide appears in
- mass
- comma separated protein list

In order to use pep_dbcount in its intended application, you need to
feed it a peptide-sorted output from digestdb.  Something like:
   digestdb somefile.fasta | sort -k 5,5 > digest.output
   pep_dbcount digest.output
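The digestdb columns listed above map naturally onto a small parser. The tab delimiter and the sample line are assumptions for illustration; check them against your own digestdb output:

```python
# Read digestdb's columnar output using the field order listed above.
import csv, io

COLUMNS = ["length", "protein", "mass", "prev_aa",
           "peptide", "next_aa", "start", "end", "pI"]

sample = "7\tsp|P12345\t830.45\tK\tSAMPLER\tD\t10\t16\t6.1\n"

rows = [dict(zip(COLUMNS, fields))
        for fields in csv.reader(io.StringIO(sample), delimiter="\t")]
print(rows[0]["peptide"], rows[0]["mass"])
```

Sorting on the 5th column (the peptide), as the sort -k 5,5 step does, groups identical peptides together so pep_dbcount can count the proteins each one appears in.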


On Mon, Mar 15, 2010 at 2:58 PM, Andreas Quandt
quandt.andr...@gmail.com wrote:
 dear list,

 i would like to use pep_dbcount and digestdb for some analysis but i am not
 sure about the values of the output files.
 hence, it would be great if one of you could shortly explain me which values
 are displayed there.

 many thanks in advance,
 andreas






Re: [spctools-discuss] xtandem search didn't run through Petunia

2010-03-10 Thread Jimmy Eng
I suggest you start debugging the problem one step at a time.
First confirm that your Tandem search actually ran by looking at
contents of the .tandem file.
Next, check contents of tandem.pep.xml.
Do both of these files look like they have peptide IDs in them?



On Wed, Mar 10, 2010 at 2:04 PM, Lik Wee Lee l...@systemsbiology.org wrote:
 Furthermore, I also did:
 C:\Inetpub\wwwroot\ISB\data\17mixc:\Inetpub\tpp-bin\Tandem2XML
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.pep.xml

 but if I tried to browse file in petunia and click on [PepXML],
 it opens a new window:
 http://localhost/tpp-bin/PepXMLViewer.cgi?xmlFileName=c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.pep.xml

 but with the message:

 error:
 can't open html
 templatec:/Inetpub/wwwroot;c:/Inetpub/wwwroot/ISB/html/PepXMLViewer.html

 I also had problem with xinteract:

 C:\Inetpub\wwwroot\ISB\data\17mix\tandemc:\Inetpub\tpp-bin\xinteract
 -Ninteract.pep.xml -p0.05 -l7 -O
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.pep.xml

 c:\Inetpub\tpp-bin\xinteract (TPP v4.3 JETSTREAM rev 1, Build 200909091257
 (MinGW))

 running: C:/Inetpub/tpp-bin/InteractParser interact.pep.xml
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tand
 em.pep.xml -L7

 command C:/Inetpub/tpp-bin/InteractParser interact.pep.xml
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tande
 m.pep.xml -L7 failed: Unknown error

 command C:/Inetpub/tpp-bin/InteractParser interact.pep.xml
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tande
 m.pep.xml -L7 exited with non-zero exit code: -1
 QUIT - the job is incomplete



 Is this an installation issue? Any ideas?

 Thanks,
 Lik Wee

 On Wed, Mar 10, 2010 at 1:53 PM, Lik Wee Lee l...@systemsbiology.org
 wrote:

 Hi Brian,

 The full path still gives command ... failed: unknown error.

 I tried run_in c: and didn't get any output.

 But if I try run_in c:\; dir, I get:
 command dir failed: Unknown error.

 Could it be I previously had installed Perl 5.10? I uninstalled it
 and then installed Perl 5.8.9 followed by tpp 4.3.1.

 Lik Wee

 On Wed, Mar 10, 2010 at 1:34 PM, Brian Pratt brian.pr...@insilicos.com
 wrote:

 run_in is just a little program that runs one or more commands in the
 indicated directory.  For example run_in c:\foo; bar.exe; baz.exe performs
 cd c:\foo then runs bar.exe then baz.exe.  It exists to help with
 linux-oriented multipart command lines in the GUI, statements like cd
  c:\foo; bar.exe; baz.exe (we just swap run_in for cd on the Windows
 installation).

 Anyway, that's what it's for - why it doesn't work for you is a mystery.
 Perhaps it's a path issue: how does the command line
 c:\Inetpub\tpp-bin\run_in c:/Inetpub/wwwroot/ISB/data/17mix/tandem;
 c:\Inetpub\tpp-bin\tandem
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.params

 work for you (that is, with full path to run_in specified)?

 On Wed, Mar 10, 2010 at 1:21 PM, Lik Wee Lee l...@systemsbiology.org
 wrote:

 Hi,

 When I tried to do a xtandem search in the TPP petunia interface,
 I encountered the problem:

 # Commands for session TYBEEDEHS on Wed Mar 10 13:13:38 2010
 # BEGIN COMMAND BLOCK
 ## BEGIN Command Execution ##

 [Wed Mar 10 13:13:38 2010] EXECUTING: run_in
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem; c:\Inetpub\tpp-bin\tandem
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.params
 OUTPUT:

 command c:\Inetpub\tpp-bin\tandem
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.params
 failed: Unknown error




 END OUTPUT
 RETURN CODE:65280
 ## End Command Execution ##
 # All finished at Wed Mar 10 13:13:38 2010
 # END COMMAND BLOCK

 Does anyone have any ideas why this occur?
 However, the search was performed successfully if I did it by command
 line:

 c:\Inetpub\tpp-bin\tandem
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.params

 If I try

 run_in c:/Inetpub/wwwroot/ISB/data/17mix/tandem;
 c:\Inetpub\tpp-bin\tandem
 c:/Inetpub/wwwroot/ISB/data/17mix/tandem/OR20091211_18mix_01.tandem.params

 it says command failed: Unknown error.

 I have ActivePerl 5.8.9 build 827 and TPP version 4.3.1 installed on
 Windows XP SP3.

 Thanks,
 Lik Wee



Re: [spctools-discuss] Problem with ReAdW

2010-03-09 Thread Jimmy Eng
That's correct for ReAdW right now.  You can use ProteoWizard's msconvert
tool in combination with Thermo's free version of their MSFileReader
though.

On Mon, Mar 8, 2010 at 11:37 PM, Amit Yadav amit007thech...@gmail.com wrote:
 Is there no way to convert RAW files without using XCalibur SDK ?
 Regards,

 Amit Kumar Yadav
 Senior Research Fellow (SRF-CSIR)
 IGIB, New Delhi (India)

 http://masswiz.igib.res.in




 On Sat, Feb 27, 2010 at 4:22 AM, Omoruyi Osula oosu...@gmail.com wrote:

 It worked. Thank you.
 On 2/25/, Jimmy Eng jke...@gmail.com wrote:
  You're calling it wrong (old style).  Type 'ReAdW' without any other
  arguments to find the current usage statement.  Assuming you want to
   centroid and compress (which you should), invoke it as
 
     ReAdW --centroid --compress --mzXML filename.RAW
 
 
  On Thu, Feb 25, 2010 at 8:54 AM, oosula oosu...@gmail.com wrote:
  I'm having difficulty converting my .RAW file to mzXML using the ReAdW
  program. After downloading the program along with all the necessary
  files ( zlib.dll, .RAW file, etc), I then went to the command prompt,
  found my directory which contained the ReAdW and the file of interest,
  and from there tried to do the conversion. I basically typed ReAdW
  filename.RAW c after the path name based on some online directions I
  found, and it just keeps taking me back to instructions that can be
  found in the README text that comes with the program. In other words,
  it won't convert the file at all. Can anyone help me??
 
 
 
 
 
 








Re: [spctools-discuss] Re: Proteome Discoverer

2010-03-04 Thread Jimmy Eng
You should search your Proteome Discoverer box for a file named
'sequest.params' as I don't have access to the Thermo version.  In
fact, I wouldn't even know if sequest.params still exists under
Discoverer.

Here's the current params file we use at the UW (which is similar to
but not directly the same as Thermo's version of this file).  Example
modifications include a +16 variable mod on methionine and a static
mod of +57 on cysteines:
   http://proteomicsresource.washington.edu/sequest_params.php

What we really need is someone to contribute a .msf to pep.xml converter!
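For the modifications mentioned above, the relevant lines in a UW-style sequest.params look roughly like this (parameter names and layout vary between SEQUEST versions; verify against the file your installation ships with):

```
; +15.9949 variable modification on methionine
diff_search_options = 15.994915 M 0.0 X 0.0 X

; +57.0215 static modification on cysteine
add_C_Cysteine = 57.021464
```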

On Thu, Mar 4, 2010 at 3:49 PM, Ping yanpp...@gmail.com wrote:
 Thanks Jimmy!

 Is there any sequest.params sample with modifications I can get?

 Thanks again!

 Ping

 On Mar 3, 5:16 pm, Jimmy Eng jke...@gmail.com wrote:
 unless there's a way to convert .msf to pep.xml, and I'm not aware of
 any tool that does this, you'll have to go the .out route.

 On Mar 3, 2:18 pm, Ping yanpp...@gmail.com wrote:

  Hi All,

  The output of Proteome Discoverer is *.msf. Is there an easy way to
  compute peptide prophet from it? Or I have to run sequest.exe to get
  *.out to do so?

  Thanks!

  Ping







Re: [spctools-discuss] Re: Problem mit den pepXML Dateien

2010-03-03 Thread Jimmy Eng
I'm not familiar with that error message but possibly your search
didn't run?  The .out files should have scores and peptide sequences
in them and not be the same format as the .dta peak lists.  If they
are the same, something is wrong.
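One quick way to tell the two formats apart: a .dta is nothing but numbers (a header line with the precursor (M+H)+ mass and charge, then m/z–intensity pairs), while a genuine .out is a text report with scores and sequences. A minimal sniffer (my own heuristic, not a TPP tool):

```python
def looks_like_dta(text):
    """Heuristic: True if every non-empty line is exactly two numbers,
    which is the shape of a .dta peak list (header + m/z-intensity pairs)."""
    lines = [ln.split() for ln in text.splitlines() if ln.strip()]
    if not lines:
        return False
    for fields in lines:
        if len(fields) != 2:
            return False
        try:
            float(fields[0])
            float(fields[1])
        except ValueError:
            return False
    return True
```

If this returns True for your .out files, the search never actually ran and they are just copies of the peak lists.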

On Wed, Mar 3, 2010 at 7:08 AM, BIackEye philippschre...@web.de wrote:
 My .out data looks just like the .dta I posted.
 But I found a sequest.txt file inside my .out dir with the following
 lines:

 SEQUEST v.28, (c) 1998-2009

 Failed to initialize FLEXnetWrapper

 On 25 Feb., 17:29, Jimmy Eng jke...@gmail.com wrote:
 There is no date/version information within the dta peak lists file.
 The real question is what do your Sequest search result .out files
 look like that were used as input to generate the pep.xml?

 On Thu, Feb 25, 2010 at 4:22 AM, BIackEye philippschre...@web.de wrote:
   I'm sorry,

   didn't realize I was writing in German in an English forum.
   Anyway, maybe there is something wrong with my dta, because there are
   absolutely no stamps (date, version etc.) at the top.
   I read in other posts that the pepXML converter is pretty strict
   about the files it accepts for dta's.
   Maybe there is something buggy with my Sequest search engine.

   with best regards
   Blacky

  On 22 Feb., 18:13, Jimmy Eng jke...@gmail.com wrote:
  Die pepXML Akte ist zu klein. Es gibt keine Suchresultate in der Akte.
  Fokus auf, wie diese Akte erzeugt wurde.

  (I thought it would be cute to reply in German.  The above is a babel
  fish translation of: The pepXML file is too small.  There are no
  search results in the file.  Focus on how that file was generated.)

  On Mon, Feb 22, 2010 at 8:05 AM, BIackEye philippschre...@web.de wrote:
    I have a problem with the pepXML files. Somehow they look too
    small, and the analysis does not start either.
    Here is the error report:
   -
   # Commands for session ZBTP4364S on Mon Feb 22 16:57:49 2010
   # BEGIN COMMAND BLOCK
   ## BEGIN Command Execution ##
   [Mon Feb 22 16:57:49 2010] EXECUTING: run_in c:/Inetpub/wwwroot/ISB/
   data; c:\Inetpub\tpp-bin\xinteract  -Ninteract.pep.xml -p0.05 -l7 -O
   c:/Inetpub/wwwroot/ISB/data/0010-100208NiLeGb01TD.pep.xml
   OUTPUT:

   c:\Inetpub\tpp-bin\xinteract (TPP v4.3 JETSTREAM rev 0, Build
   200908071234 (MinGW))

   running: C:/Inetpub/tpp-bin/InteractParser interact.pep.xml c:/
   Inetpub/wwwroot/ISB/data/0010-100208NiLeGb01TD.pep.xml -L7
    file 1: c:/Inetpub/wwwroot/ISB/data/0010-100208NiLeGb01TD.pep.xml
    processed altogether 0 results

    results written to file c:/Inetpub/wwwroot/ISB/data/interact.pep.xml

     direct your browser to http://localhost/ISB/data/interact.pep.shtml

   command completed in 1 sec

   running: C:/Inetpub/tpp-bin/PeptideProphetParser interact.pep.xml
   MINPROB=0.05
    (SEQUEST) (icat)
   MS Instrument info: Manufacturer: Thermo Scientific, Model: LTQ
   Orbitrap Discovery, Ionization: NSI, Analyzer: FTMS, Detector: unknown

   command C:/Inetpub/tpp-bin/PeptideProphetParser interact.pep.xml
   MINPROB=0.05 failed: Unknown error

   command C:/Inetpub/tpp-bin/PeptideProphetParser interact.pep.xml
   MINPROB=0.05 exited with non-zero exit code: 255
   QUIT - the job is incomplete

   command c:\Inetpub\tpp-bin\xinteract -Ninteract.pep.xml -p0.05 -l7 -O
   c:/Inetpub/wwwroot/ISB/data/0010-100208NiLeGb01TD.pep.xml failed:
   Unknown error
   END OUTPUT
   RETURN CODE:65280
   ## End Command Execution ##
   # All finished at Mon Feb 22 16:57:52 2010
   # END COMMAND BLOCK
   ---
    Here are my pep.xml files and a .dta file:
   [url=http://uploading.com/files/
   b12b2ac4/0010-100208NiLeGb01TD.pep.xml/] Download
   0010-100208NiLeGb01TD.pep.xml for free on uploading.com[/url]
   [url=http://uploading.com/files/
   fm325f39/0010-100208NiLeGb01TD.pep.xml/] Download
   0010-100208NiLeGb01TD.pep.xml for free on uploading.com[/url]
   [url=http://uploading.com/files/61md3e1d/0010-100208NiLeGb01TD.
   00331.00331.3.dta/] Download 0010-100208NiLeGb01TD.00331.00331.3.dta
   for free on uploading.com[/url]




[spctools-discuss] Re: Proteome Discoverer

2010-03-03 Thread Jimmy Eng
unless there's a way to convert .msf to pep.xml, and I'm not aware of
any tool that does this, you'll have to go the .out route.

On Mar 3, 2:18 pm, Ping yanpp...@gmail.com wrote:
 Hi All,

 The output of Proteome Discoverer is *.msf. Is there an easy way to
 compute PeptideProphet results from it? Or do I have to run sequest.exe
 to get *.out files to do so?

 Thanks!

 Ping




[spctools-discuss] Re: peptide identification from ms1

2010-03-03 Thread Jimmy Eng
Look up accurate mass and time tags, peptide mass fingerprinting, and
maybe even a general proteomics data analysis review such as doi:
10.1038/nrm1468

I hope you don't take this too negatively, but the scope of what one
might consider ms1 data is too general (a peptide mass fingerprint
spectrum, an entire LC/MS run, etc.), and the questions you're asking
imply that you need more background on the topic before even
considering doing data analysis.
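To illustrate what peptide mass fingerprinting boils down to: observed MS1 masses are matched against theoretical peptide masses from an in-silico digest within a mass tolerance. A toy sketch (the masses and tolerance below are made up for illustration, not real peptide values):

```python
def ppm_match(observed, theoretical, tol_ppm=10.0):
    """Return (observed_mass, peptide_name) pairs that agree within tol_ppm.

    observed:    list of measured MS1 masses
    theoretical: dict mapping peptide name -> theoretical mass
    """
    hits = []
    for obs in observed:
        for name, theo in theoretical.items():
            # parts-per-million mass error relative to the theoretical mass
            if abs(obs - theo) / theo * 1e6 <= tol_ppm:
                hits.append((obs, name))
    return hits
```

Real PMF tools score the whole set of matches against a protein database rather than reporting individual hits, but the core comparison is this tolerance test.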

On Feb 24, 7:40 am, Fan Zhang fanzha...@gmail.com wrote:
 Hi There,

 I have only Ms level 1 data. Can we identify peptide from ms1 data?

 If we can, which tool to use?

 Thanks.

 Van



