Re: [DISCUSS] Moving to Git (was Re: Moving repository to git)

2015-08-21 Thread Sheryl John
Cool! +1

On Fri, Aug 21, 2015 at 1:17 PM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 you got it!

 You can still use Review Board whenever you want, but yeah, Pull
 Requests would be fine.

 At the same time, we can do Pull Request reviews even with SVN;
 for example, see apache/oodt on GitHub.

 —
 Chris Mattmann
 chris.mattm...@gmail.com






 -Original Message-
 From: Sheryl John shery...@gmail.com
 Reply-To: dev@oodt.apache.org
 Date: Friday, August 21, 2015 at 12:58 PM
 To: dev@oodt.apache.org dev@oodt.apache.org
 Subject: Re: [DISCUSS] Moving to Git (was Re: Moving repository to git)

 +1
 
So instead of Review Board, will it be pull request reviews from GitHub?
 
 On Fri, Aug 21, 2015 at 12:50 PM, Mattmann, Chris A (3980) 
 chris.a.mattm...@jpl.nasa.gov wrote:
 
  I would be fully supportive of this.
 
  What do others in the community think? If there are no objections
  over the next week I’d like to call a VOTE.
 
  Note, Radu, that this wouldn’t mean “moving to Github”. Github !=
  Git != Apache. Apache has writeable Git repos (that run on ASF
  hardware) and separately, Git mirrors that mirror and interact with
  Github from the Apache gold source. Everyone should check out:
  http://git.apache.org for more detail.
 
  Thoughts, please?
 
  Cheers,
  Chris
 
  ++
  Chris Mattmann, Ph.D.
  Chief Architect
  Instrument Software and Science Data Systems Section (398)
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 168-519, Mailstop: 168-527
  Email: chris.a.mattm...@nasa.gov
  WWW:  http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Associate Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
 
 
  -Original Message-
  From: Radu Manole manole.v.r...@gmail.com
  Reply-To: dev@oodt.apache.org dev@oodt.apache.org
  Date: Friday, August 21, 2015 at 11:51 AM
  To: oodt dev@oodt.apache.org
  Subject: Moving repository to git
 
  Hello,
  
  Have you considered moving from svn to git as the main repository for the
  project?
  Since most of the pull requests come from GitHub, converting a diff to svn
  feels a bit unnecessary.
  
  It would certainly have made my life easier, especially since I wouldn't
  have to convert git diffs to svn diffs in order to post reviews on Review
  Board.
  
  Thanks,
  Radu.
 
 
 
 
 --
 -Sheryl





-- 
-Sheryl


Re: [DISCUSS] Moving to Git (was Re: Moving repository to git)

2015-08-21 Thread Sheryl John
+1

So instead of Review Board, will it be pull request reviews from GitHub?

On Fri, Aug 21, 2015 at 12:50 PM, Mattmann, Chris A (3980) 
chris.a.mattm...@jpl.nasa.gov wrote:

 I would be fully supportive of this.

 What do others in the community think? If there are no objections
 over the next week I’d like to call a VOTE.

 Note, Radu, that this wouldn’t mean “moving to Github”. Github !=
 Git != Apache. Apache has writeable Git repos (that run on ASF
 hardware) and separately, Git mirrors that mirror and interact with
 Github from the Apache gold source. Everyone should check out:
 http://git.apache.org for more detail.

 Thoughts, please?

 Cheers,
 Chris

 ++
 Chris Mattmann, Ph.D.
 Chief Architect
 Instrument Software and Science Data Systems Section (398)
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 168-519, Mailstop: 168-527
 Email: chris.a.mattm...@nasa.gov
 WWW:  http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Associate Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++





 -Original Message-
 From: Radu Manole manole.v.r...@gmail.com
 Reply-To: dev@oodt.apache.org dev@oodt.apache.org
 Date: Friday, August 21, 2015 at 11:51 AM
 To: oodt dev@oodt.apache.org
 Subject: Moving repository to git

 Hello,
 
 Have you considered moving from svn to git as the main repository for the
 project?
 Since most of the pull requests come from GitHub, converting a diff to svn
 feels a bit unnecessary.
 
 It would certainly have made my life easier, especially since I wouldn't
 have to convert git diffs to svn diffs in order to post reviews on Review
 Board.
 
 Thanks,
 Radu.




-- 
-Sheryl


Re: Git(hub) Mirrors are outdated?

2013-08-29 Thread Sheryl John
Hi,

The other branches have later commits [1]. The 'trunk' branch has commits from 4
months ago. But it doesn't have a 0.6 branch or tag, so it still seems
behind SVN.

[1] https://github.com/apache/oodt/branches
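
For anyone who wants to check the mirror locally, something like this should
work (a rough sketch; it assumes git is installed and that the mirror keeps
SVN trunk in a branch named 'trunk', as the GitHub page suggests):

$ git clone https://github.com/apache/oodt.git
$ cd oodt
$ git branch -r              # list the mirrored branches
$ git log -1 origin/trunk    # date of the last commit mirrored to trunk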


On Thu, Aug 29, 2013 at 4:33 PM, Michael Joyce jo...@apache.org wrote:

 I noticed that the Last Updated dates of the files in the OODT Github
 Mirror [1] all read 3 Years Ago. Is the github/git mirror this far behind
 our SVN?

 [1] https://github.com/apache/oodt

 -- Joyce




-- 
-Sheryl


Re: Requested Read-only Git Mirror for Apache OODT

2013-01-12 Thread Sheryl John
Cool! Thanks!


On Sat, Jan 12, 2013 at 10:04 AM, Mattmann, Chris A (388J) 
chris.a.mattm...@jpl.nasa.gov wrote:

 Hey Guys,

 FYI https://issues.apache.org/jira/browse/INFRA-5759

 I requested a GitHub read-only Git mirror for Apache OODT. Yay!

 I'm not proposing we move to Git for our CM system :) I just wanted folks
 on Github to be able to send us pull requests with improvements if they
 have any.

 Cheers,
 Chris




-- 
-Sheryl


Re: CSS issue with JIRA?

2012-08-10 Thread Sheryl John
Yep, it does look like the CSS files are missing. The screenshot didn't come through, though.

On Thu, Aug 9, 2012 at 11:06 PM, Cameron Goodale good...@apache.org wrote:

 Hey OODTers,

 I have been trying to access JIRA and I can, but the css files seem to be
 missing.  Does anyone else see this same problem with:
 https://issues.apache.org/jira/browse/OODT-328 ?


 Screenshot of what I see is attached.

 Thanks.


 -Cameron




-- 
-Sheryl


Re: Problem happened when I tried to run the script crawler_launcher

2012-08-10 Thread Sheryl John
The crawler_launcher CLI has the '-actionIds' (or '-ais') option, and you can see these
listed in the action-beans.xml. The same applies for the crawler-beans and
the preconditions.

2012/8/10 Sheryl John shery...@gmail.com:
  Hi Yunhee,
 
  What are the error messages you get while running the crawler?
 
  I've faced similar issues with the crawler when I tried it out the first
  time too. I went through the crawler user guide to understand the
  architecture, and only really understood how it worked after running the
  crawler several times to ingest files.
  I agree we need to update the guide, and if you want to know about the
  MetExtractorProductCrawler and AutoDetectProductCrawler, the wiki page
  that I mentioned before will give you an idea of how to get them working
  (it describes the config files that you need to write for those two crawlers).
 
 
 
  On Thu, Aug 9, 2012 at 6:27 AM, YunHee Kang yunh.k...@gmail.com wrote:
 
  Hi Chris,
 
  I got a bunch of error messages when running the crawler_launcher
 script.
  First off, I think I need to understand how a crawler works.
  Can I get some materials to help me write configuration files for
  crawler_launcher?
 
  Honestly, I am not familiar with the Crawler.
  But I will try to file a JIRA issue to update the Crawler user guide.
 
  Thanks,
  Yunhee
 
 
 
  2012/8/9 Mattmann, Chris A (388J) chris.a.mattm...@jpl.nasa.gov:
   Hi YunHee,
  
   Sorry, we need to update the docs, that is for sure. Can you help
   us remember by filing a JIRA issue to update the Crawler user
   guide and to fix the URL there?
  
   As for crawlerId, yes it's obsolete, you can find the modern
   0.4 and 0.5-trunk options by running ./crawler_launcher -h
  
   Cheers,
   Chris
  
   On Aug 7, 2012, at 7:03 AM, YunHee Kang wrote:
  
   Hi Chris and Sheryl,
  
   I understood my mistake after fixing the wrong URL with the trailing /.
   But that wrong URL is used as an option of crawler_launcher on the Apache
   OODT homepage (http://oodt.apache.org/components/maven/crawler/user/):
   --filemgrUrl http://localhost:9000/ \
   So it confused me.
  
   I tried to run the command mentioned below  according to  the home
   page of apache oodt.
   $ ./crawler_launcher --crawlerId MetExtractorProductCrawler
   ERROR: Invalid option: 'crawlerId'
  
   But the error described above occurred.
   Is the option 'crawlerId' obsolete?
  
   Thanks,
   Yunhee
  
  
   2012/8/7 Mattmann, Chris A (388J) chris.a.mattm...@jpl.nasa.gov:
   Perfect, Sheryl, my thoughts exactly.
  
   Cheers,
   Chris
  
   On Aug 6, 2012, at 10:01 AM, Sheryl John wrote:
  
   Hi Yunhee,
  
   Check out this OODT wiki for crawler :
   https://cwiki.apache.org/confluence/display/OODT/OODT+Crawler+Help
  
   Did you try giving 'http://localhost:8000' without the / in the
  end?
   Also, specify
  'org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory'
   for  'clientTransferer' option.
  
  
   On Mon, Aug 6, 2012 at 9:46 AM, YunHee Kang yunh.k...@gmail.com
  wrote:
  
   Hi Chris,
  
   I got an error message when I tried to run crawler_launcher by
 using
  a
   shell script. The error message may be caused by a  wrong URL of
   filemgr.
   $ ./crawler_launcher.sh
   ERROR: Validation Failures: - Value 'http://localhost:8000/' is
 not
   allowed for option
   [longOption='filemgrUrl',shortOption='fm',description='File
 Manager
   URL'] - Allowed values = [http://.*:\d*]
  
   The following is the shell script that I wrote:
   $ cat crawler_launcher.sh
   #!/bin/sh
   export
 STAGE_AREA=/home/yhkang/oodt-0.5/cas-pushpull/staging/TESL2CO2
   ./crawler_launcher \
-op --launchStdCrawler \
--productPath $STAGE_AREA\
--filemgrUrl http://localhost:8000/\
--failureDir /tmp \
--actionIds DeleteDataFile MoveDataFileToFailureDir Unique \
--metFileExtension tmp \
--clientTransferer
   org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferer
  
   I am wondering if there is a problem in the URL of the filemgr or
  elsewhere
  
   Thanks,
   Yunhee
  
  
  
  
   --
   -Sheryl
  
  
   ++
   Chris Mattmann, Ph.D.
   Senior Computer Scientist
   NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
   Office: 171-266B, Mailstop: 171-246
   Email: chris.a.mattm...@nasa.gov
   WWW:   http://sunset.usc.edu/~mattmann/
   ++
   Adjunct Assistant Professor, Computer Science Department
   University of Southern California, Los Angeles, CA 90089 USA
   ++
  
  
  
   ++
   Chris Mattmann, Ph.D.
   Senior Computer Scientist
   NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
   Office: 171-266B, Mailstop: 171-246
   Email: chris.a.mattm...@nasa.gov
   WWW:   http://sunset.usc.edu/~mattmann

Re: Problem happened when I tried to run the script crawler_launcher

2012-08-09 Thread Sheryl John
Hi Yunhee,

What are the error messages you get while running the crawler?

I've faced similar issues with the crawler when I tried it out the first time too.
I went through the crawler user guide to understand the architecture, and
only really understood how it worked after running the crawler several times
to ingest files.
I agree we need to update the guide, and if you want to know about the
MetExtractorProductCrawler and AutoDetectProductCrawler, the wiki page that
I mentioned before will give you an idea of how to get them working (it
describes the config files that you need to write for those two crawlers).



On Thu, Aug 9, 2012 at 6:27 AM, YunHee Kang yunh.k...@gmail.com wrote:

 Hi Chris,

 I got a bunch of error messages when running the crawler_launcher script.
 First off, I think I need to understand how a crawler works.
 Can I get some materials to help me write configuration files for
 crawler_launcher?

 Honestly, I am not familiar with the Crawler.
 But I will try to file a JIRA issue to update the Crawler user guide.

 Thanks,
 Yunhee



 2012/8/9 Mattmann, Chris A (388J) chris.a.mattm...@jpl.nasa.gov:
  Hi YunHee,
 
  Sorry, we need to update the docs, that is for sure. Can you help
  us remember by filing a JIRA issue to update the Crawler user
  guide and to fix the URL there?
 
  As for crawlerId, yes it's obsolete, you can find the modern
  0.4 and 0.5-trunk options by running ./crawler_launcher -h
 
  Cheers,
  Chris
 
  On Aug 7, 2012, at 7:03 AM, YunHee Kang wrote:
 
  Hi Chris and Sheryl,
 
  I understood my mistake after fixing the wrong URL with the trailing /.
  But that wrong URL is used as an option of crawler_launcher on the Apache
  OODT homepage (http://oodt.apache.org/components/maven/crawler/user/):
  --filemgrUrl http://localhost:9000/ \
  So it confused me.
 
  I tried to run the command mentioned below  according to  the home
  page of apache oodt.
  $ ./crawler_launcher --crawlerId MetExtractorProductCrawler
  ERROR: Invalid option: 'crawlerId'
 
  But the error described above occurred.
  Is the option 'crawlerId' obsolete?
 
  Thanks,
  Yunhee
 
 
  2012/8/7 Mattmann, Chris A (388J) chris.a.mattm...@jpl.nasa.gov:
  Perfect, Sheryl, my thoughts exactly.
 
  Cheers,
  Chris
 
  On Aug 6, 2012, at 10:01 AM, Sheryl John wrote:
 
  Hi Yunhee,
 
  Check out this OODT wiki for crawler :
  https://cwiki.apache.org/confluence/display/OODT/OODT+Crawler+Help
 
  Did you try giving 'http://localhost:8000' without the / in the
 end?
  Also, specify
 'org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory'
  for  'clientTransferer' option.
 
 
  On Mon, Aug 6, 2012 at 9:46 AM, YunHee Kang yunh.k...@gmail.com
 wrote:
 
  Hi Chris,
 
  I got an error message when I tried to run crawler_launcher by using
 a
  shell script. The error message may be caused by a  wrong URL of
  filemgr.
  $ ./crawler_launcher.sh
  ERROR: Validation Failures: - Value 'http://localhost:8000/' is not
  allowed for option
  [longOption='filemgrUrl',shortOption='fm',description='File Manager
  URL'] - Allowed values = [http://.*:\d*]
 
  The following is the shell script that I wrote:
  $ cat crawler_launcher.sh
  #!/bin/sh
  export STAGE_AREA=/home/yhkang/oodt-0.5/cas-pushpull/staging/TESL2CO2
  ./crawler_launcher \
   -op --launchStdCrawler \
   --productPath $STAGE_AREA\
   --filemgrUrl http://localhost:8000/\
   --failureDir /tmp \
   --actionIds DeleteDataFile MoveDataFileToFailureDir Unique \
   --metFileExtension tmp \
   --clientTransferer
  org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferer
 
  I am wondering if there is a problem in the URL of the filemgr or
 elsewhere
 
  Thanks,
  Yunhee
 
 
 
 
  --
  -Sheryl
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 




-- 
-Sheryl


Re: Problem happened when I tried to run the script crawler_launcher

2012-08-07 Thread Sheryl John
Hi Yunhee,

I'm sorry for the confusion caused by the guide on the OODT crawler
homepage. The new command-line options were introduced recently, and hence
the option 'crawlerId' is now obsolete. It has been replaced by
--launchStdCrawler (or -stdPC).
If you run ./crawler_launcher -h, you should see the new CLI options menu
added by Brian.
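
In other words, roughly (a sketch pieced together from this thread; the
remaining options are elided):

# old option from the (now outdated) crawler user guide; rejected by the new CLI:
$ ./crawler_launcher --crawlerId MetExtractorProductCrawler
ERROR: Invalid option: 'crawlerId'

# new-style launch of the standard crawler (run ./crawler_launcher -h for the full menu):
$ ./crawler_launcher -op --launchStdCrawler ...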



On Tue, Aug 7, 2012 at 7:03 AM, YunHee Kang yunh.k...@gmail.com wrote:

 Hi Chris and Sheryl,

 I understood my mistake after fixing the wrong URL with the trailing /.
 But that wrong URL is used as an option of crawler_launcher on the Apache
 OODT homepage (http://oodt.apache.org/components/maven/crawler/user/):
  --filemgrUrl http://localhost:9000/ \
 So it confused me.

 I tried to run the command mentioned below  according to  the home
 page of apache oodt.
 $ ./crawler_launcher --crawlerId MetExtractorProductCrawler
 ERROR: Invalid option: 'crawlerId'

 But the error described above occurred.
 Is the option 'crawlerId' obsolete?

 Thanks,
 Yunhee


 2012/8/7 Mattmann, Chris A (388J) chris.a.mattm...@jpl.nasa.gov:
  Perfect, Sheryl, my thoughts exactly.
 
  Cheers,
  Chris
 
  On Aug 6, 2012, at 10:01 AM, Sheryl John wrote:
 
  Hi Yunhee,
 
  Check out this OODT wiki for crawler :
  https://cwiki.apache.org/confluence/display/OODT/OODT+Crawler+Help
 
  Did you try giving 'http://localhost:8000' without the / in the end?
  Also, specify
 'org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory'
  for  'clientTransferer' option.
 
 
  On Mon, Aug 6, 2012 at 9:46 AM, YunHee Kang yunh.k...@gmail.com
 wrote:
 
  Hi Chris,
 
  I got an error message when I tried to run crawler_launcher by using a
  shell script. The error message may be caused by a  wrong URL of
  filemgr.
  $ ./crawler_launcher.sh
  ERROR: Validation Failures: - Value 'http://localhost:8000/' is not
  allowed for option
  [longOption='filemgrUrl',shortOption='fm',description='File Manager
  URL'] - Allowed values = [http://.*:\d*]
 
  The following is the shell script that I wrote:
  $ cat crawler_launcher.sh
  #!/bin/sh
  export STAGE_AREA=/home/yhkang/oodt-0.5/cas-pushpull/staging/TESL2CO2
  ./crawler_launcher \
-op --launchStdCrawler \
--productPath $STAGE_AREA\
--filemgrUrl http://localhost:8000/\
--failureDir /tmp \
--actionIds DeleteDataFile MoveDataFileToFailureDir Unique \
--metFileExtension tmp \
--clientTransferer
  org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferer
 
  I am wondering if there is a problem in the URL of the filemgr or
 elsewhere
 
  Thanks,
  Yunhee
 
 
 
 
  --
  -Sheryl
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 




-- 
-Sheryl


Re: Review Request: Synchronous and Asynchronous LocalEngineRunnerFactory

2012-08-07 Thread Sheryl John


 On Aug. 7, 2012, 5:52 p.m., brian Foster wrote:
  i've got this fix in https://reviews.apache.org/r/6407/ ... i'll check it 
  in today

Awesome! Got the latest commit. Thanks!


- Sheryl


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6416/#review9976
---


On Aug. 7, 2012, 3:36 a.m., Sheryl John wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/6416/
 ---
 
 (Updated Aug. 7, 2012, 3:36 a.m.)
 
 
 Review request for oodt, Chris Mattmann, brian Foster, Ricky Nguyen, Paul 
 Ramirez, and Thomas Bennett.
 
 
 Description
 ---
 
 TestXmlRpcWorkflowManager JUnit was not passing. When I tried running the 
 wmgr (./wmgr) the following exception was thrown:
 Exception in thread main java.lang.ClassCastException: 
 org.apache.oodt.cas.workflow.engine.SynchronousLocalEngineRunner cannot be 
 cast to org.apache.oodt.cas.workflow.engine.EngineRunnerFactory
   at 
 org.apache.oodt.cas.workflow.util.GenericWorkflowObjectFactory.getEngineRunnerFromClassName(GenericWorkflowObjectFactory.java:82)
   at 
 org.apache.oodt.cas.workflow.system.XmlRpcWorkflowManager.getEngineRunnerFromProperty(XmlRpcWorkflowManager.java:636)
   at 
 org.apache.oodt.cas.workflow.system.XmlRpcWorkflowManager.init(XmlRpcWorkflowManager.java:96)
   at 
 org.apache.oodt.cas.workflow.system.XmlRpcWorkflowManager.main(XmlRpcWorkflowManager.java:609)
 
 
 This addresses bug OODT-310.
 https://issues.apache.org/jira/browse/OODT-310
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/oodt/trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/engine/AsynchronousLocalEngineRunnerFactory.java
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/engine/SynchronousLocalEngineRunner.java
  1370102 
   
 http://svn.apache.org/repos/asf/oodt/trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/engine/SynchronousLocalEngineRunnerFactory.java
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/system/XmlRpcWorkflowManager.java
  1370102 
 
 Diff: https://reviews.apache.org/r/6416/diff/
 
 
 Testing
 ---
 
 The TestXmlRpcWorkflowManager JUnit test passes. Also able to kick off wmgr with no 
 exceptions.
 
 
 Thanks,
 
 Sheryl John
 




Re: Problem happened when I tried to run the script crawler_launcher

2012-08-06 Thread Sheryl John
Hi Yunhee,

Check out this OODT wiki for the crawler:
https://cwiki.apache.org/confluence/display/OODT/OODT+Crawler+Help

Did you try giving 'http://localhost:8000' without the / at the end?
Also, specify
'org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory'
for the 'clientTransferer' option.
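
Something like this is what I mean; just a sketch based on your script (it
assumes your File Manager really is listening on port 8000):

#!/bin/sh
# Changes from the original: no trailing / on --filemgrUrl, and the
# LocalDataTransferFactory class for --clientTransferer.
export STAGE_AREA=/home/yhkang/oodt-0.5/cas-pushpull/staging/TESL2CO2
./crawler_launcher \
   -op --launchStdCrawler \
   --productPath $STAGE_AREA \
   --filemgrUrl http://localhost:8000 \
   --failureDir /tmp \
   --actionIds DeleteDataFile MoveDataFileToFailureDir Unique \
   --metFileExtension tmp \
   --clientTransferer org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory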


On Mon, Aug 6, 2012 at 9:46 AM, YunHee Kang yunh.k...@gmail.com wrote:

 Hi Chris,

 I got an error message when I tried to run crawler_launcher by using a
 shell script. The error message may be caused by a  wrong URL of
 filemgr.
  $ ./crawler_launcher.sh
 ERROR: Validation Failures: - Value 'http://localhost:8000/' is not
 allowed for option
 [longOption='filemgrUrl',shortOption='fm',description='File Manager
 URL'] - Allowed values = [http://.*:\d*]

 The following is the shell script that I wrote:
 $ cat crawler_launcher.sh
 #!/bin/sh
 export STAGE_AREA=/home/yhkang/oodt-0.5/cas-pushpull/staging/TESL2CO2
 ./crawler_launcher \
-op --launchStdCrawler \
--productPath $STAGE_AREA\
--filemgrUrl http://localhost:8000/\
--failureDir /tmp \
--actionIds DeleteDataFile MoveDataFileToFailureDir Unique \
--metFileExtension tmp \
--clientTransferer
 org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferer

 I am wondering if there is a problem in the URL of the filemgr or elsewhere

 Thanks,
 Yunhee




-- 
-Sheryl


Re: [jira] [Commented] (OODT-310) Port WEngine to trunk

2012-08-06 Thread Sheryl John
Thanks Cameron!
It worked after changing the URL to http.

On Mon, Aug 6, 2012 at 7:43 PM, Cameron Goodale sigep...@gmail.com wrote:

 Sheryl,

 I get the same error on another project using review board. The issue is
 caused by filling out too much or too little of the base repository url.

 If/when you get the right combination to work, make note of it or grab a
 screen shot. It'll help keep your sanity.

 Cameron
 On Aug 6, 2012 11:34 AM, Brian Foster (JIRA) j...@apache.org wrote:

 
  [
 
 https://issues.apache.org/jira/browse/OODT-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13429325#comment-13429325
 ]
 
  Brian Foster commented on OODT-310:
  ---
 
  EngineRunnerFactory additions fixed in r1369914
 
   Port WEngine to trunk
   -
  
   Key: OODT-310
   URL: https://issues.apache.org/jira/browse/OODT-310
   Project: OODT
Issue Type: Sub-task
Components: workflow manager
  Reporter: Chris A. Mattmann
  Assignee: Chris A. Mattmann
   Fix For: 0.5
  
   Attachments: OODT-310.2012-08-03.bfoster.patch.txt,
  OODT-310.Mattmann.082311.patch.txt, OODT-310.Mattmann.100911.patch.txt,
  OODT-310.sherylj.101711.patch.txt
  
  
   This issue will track the porting of the wengine-branch WorkflowEngine
  interface and associated classes to trunk. This involves changing to the
  Processor model (Sequential, Condition, etc.) which has already begun (as
  of OODT-70), and also the actual Engine implementation itself, that
 doesn't
  block and that includes a queue-based model developed by [~bfoster].
 
  --
  This message is automatically generated by JIRA.
  If you think it was sent incorrectly, please contact your JIRA
  administrators:
  https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
  For more information on JIRA, see:
 http://www.atlassian.com/software/jira
 
 
 




-- 
-Sheryl


Re: Introducing myself

2012-08-04 Thread Sheryl John
Hi Mike!
Welcome to the community! :)

On Fri, Aug 3, 2012 at 11:53 PM, Michael Joyce mltjo...@gmail.com wrote:

 Hi all,

 I'm a new intern in Chris/Cameron/Pauls' section at JPL. Just wanted to
 send a quick email to introduce myself and say hello.

 Look forward to helping out with the project!
 
 Mike




-- 
-Sheryl


Re: [ANNOUNCE] Welcome Michael Cayanan as Apache OODT PMC + committer

2012-07-06 Thread Sheryl John
Welcome aboard, Mike! Bring on the OODT spirit! :)

On Fri, Jul 6, 2012 at 10:02 AM, Cayanan, Michael D (388J) 
michael.d.caya...@jpl.nasa.gov wrote:

 Hi all,

 I've been a Software Engineer at JPL for almost 11 years now and have been
 fortunate to have gotten a chance to work with some great people like
 Chris Mattmann, Paul Ramirez, and Cameron Goodale.
 It's great that we're building the OODT community and my hope is that
 we'll continue to build on this, help each other out and work together so
 that we can show that OODT rocks! :D

 -Mike

 On 7/3/12 1:57 PM, Mattmann, Chris A (388J)
 chris.a.mattm...@jpl.nasa.gov wrote:

 Hey Folks,
 
 The Apache OODT PMC has elected to add Michael Cayanan to our ranks as a
 PMC
 member and committer. Welcome Mike!
 
 Feel free to say a bit about yourself.
 
 Cheers,
 Chris
 
 ++
 Chris Mattmann, Ph.D.
 Senior Computer Scientist
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 171-266B, Mailstop: 171-246
 Email: chris.a.mattm...@nasa.gov
 WWW:   http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Assistant Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++
 




-- 
-Sheryl


Re: OODT SoCal Meetup/Hackathon

2012-05-25 Thread Sheryl John
Out of town on the 9th weekend.

On Fri, May 25, 2012 at 8:28 AM, Paul Vee paul@gmail.com wrote:

 Can't do 9th

 Sent from my iPhone

 On May 25, 2012, at 8:21 AM, Ramirez, Paul M (388J) 
 paul.m.rami...@jpl.nasa.gov wrote:

  Hey Guys,
 
  So thoughts on June 16th or June 9th? Chris the prospect is your house so
  either of those days work for you?
 
  --Paul
 
  On 5/21/12 10:03 AM, Mattmann, Chris A (388J)
  chris.a.mattm...@jpl.nasa.gov wrote:
 
  Hey Tom,
 
  Awesome, well we will plan on having an OODT meetup in November, that's
  for sure
  and I again volunteer my house :)
 
  Cheers,
  Chris
 
  On May 21, 2012, at 6:27 AM, Thomas Bennett wrote:
 
  Hey,
 
  Totally awesome idea. Wish I could be there to join in ;). If this
  becomes
  a more regular event - keep me posted. I'll be in the US in November -
  it
  would be great way to meet the team.
 
  Cheers,
  Tom
 
  On 18 May 2012 07:16, Cameron Goodale good...@apache.org wrote:
 
  I think the pool/house idea sounds great.  If we start early enough,
  then
  we can hack for like 4 - 6 hours, then use the pool as a reward for a
  job
  well done.
 
  My hope is that we can get some long stretches of time to work on
 OODT,
  without email, twitter, IM, or facebook distractions...well the
  exception
  being the Google Hangout (of course).
 
  -Cam
 
  On Thu, May 17, 2012 at 7:06 PM, Ramirez, Paul M (388J) 
  paul.m.rami...@jpl.nasa.gov wrote:
 
  Your place sounds good to me. Now we just need to lock down a date.
  1st
  or
   2nd Saturday of June work?
 
  +1 to Google hangout.
 
  Probably best if people do a SVN checkout before we meetup.
 
  --Paul
 
  On May 17, 2012, at 10:45 AM, Mattmann, Chris A (388J) 
  chris.a.mattm...@jpl.nasa.gov wrote:
 
  Now *that* is a great idea bfost. We can set up a Google Hangout for
  this.
 
  OK, then I am going to throw out my pool/house for the meetup
  location,
  depending on what people think!
 
  Cheers,
  Chris
 
  On May 16, 2012, at 11:32 PM, Brian Foster wrote:
 
  google+ hangout 4 those out of state!
 
  On May 16, 2012, at 11:54 AM, Mattmann, Chris A (388J) 
  chris.a.mattm...@jpl.nasa.gov wrote:
 
  Hey Paul,
 
  This sounds like a great idea and I'm in (of course!)
 
  One thing we'll need to do is make sure that we do a really good
  job
  of trying
  to include everyone, even folks that are not in the SoCal area.
  That
  means
  potentially using the Wiki, and other forms of communication
 (JIRA,
  ReviewBoard, etc.)
  to make sure everyone (including those not able to attend) feels
  included.
 
  Great idea!
 
  Cheers,
  Chris
 
  On May 15, 2012, at 2:21 PM, Ramirez, Paul M (388J) wrote:
 
  Hi All,
 
  Anyone interested in doing a meetup/hackathon at the beginning of
  June. The goal would be to get some updates to our code base but
  really
  focused on getting 0.4 out the door. If there is enough interest we
  would
  need a place to host it. Anyone and everyone would be welcome as it
  would
  be a good way to understand the ways to contribute which go well
  beyond
  just code. If enough people seem interested  I will work out the
  details.
  My initial thought is this would be on a weekend.
 
  --Paul
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 




-- 
-Sheryl


Re: OODT SoCal Meetup/Hackathon

2012-05-15 Thread Sheryl John
Weekend sounds great. Count me in.

On May 15, 2012, at 5:21 PM, Ramirez, Paul M (388J) 
paul.m.rami...@jpl.nasa.gov wrote:

 Hi All,
 
 Anyone interested in doing a meetup/hackathon at the beginning of June? The 
 goal would be to get some updates to our code base, but really focused on 
 getting 0.4 out the door. If there is enough interest we would need a place 
 to host it. Anyone and everyone would be welcome as it would be a good way to 
 understand the ways to contribute which go well beyond just code. If enough 
 people seem interested  I will work out the details. My initial thought is 
 this would be on a weekend. 
 
 --Paul


Re: Resource Manager client question

2012-05-07 Thread Sheryl John
Hi Mike,

Yup, you can run your Python scripts, Java programs, etc. from CAS-PGE, which
is used with the Workflow Manager. Check out this cas-pge guide [1] and the
other wiki pages related to workflow.

You can use the Resource Manager to run tasks sent from the Workflow Manager.
I've recently started testing this, but there are others on the list who can
give you more guidance on the Resource Manager.

HTH!

Sheryl

[1] https://cwiki.apache.org/OODT/cas-pge-learn-by-example.html


On Mon, May 7, 2012 at 3:43 PM, Iwunze, Michael C (GSFC-4700)[NOAA-JPSS] 
michael.iwu...@nasa.gov wrote:


 I have two questions, I am able to run the Resource Manager with no
 issues. I have some python scripts and possibly some other programs  I
 would like to run using the Resource Manager. From what I know so far I
 believe the cas-pge component needs to be used in conjunction with the
 Resource Manager and is used as a wrapper program for running my scripts.
 Can someone give me more information on how this can be accomplished or are
 there any examples to view?

  I would also like to be able to utilize the Job Scheduler, Monitor and
 Job queue classes that are part of the Resource Manager. I can't find any
 examples of how they are used anywhere. And if examples do exist can
 someone point me in the right direction or give me more information on this?

 Thanks

 Mike




-- 
-Sheryl


Re: Review Request: Wengine Task Querier Thread: OODT-310

2012-05-03 Thread Sheryl John


 On 2012-05-02 19:20:13, brian Foster wrote:
  ./trunk/workflow/src/test/org/apache/oodt/cas/workflow/engine/TestTaskQuerier.java,
   line 79
  https://reviews.apache.org/r/4961/diff/1/?file=105932#file105932line79
 
  When Success/done is passed in, the processor created is incorrect 
  since it has a sub-processor in Queued/waiting
 
 Chris Mattmann wrote:
 Thoughts on how to fix?

If 'anydoneStates'/done is passed, then don't add taskProcessor and return the 
'done' processor?


- Sheryl


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/4961/#review7482
---


On 2012-05-02 05:08:45, Chris Mattmann wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/4961/
 ---
 
 (Updated 2012-05-02 05:08:45)
 
 
 Review request for oodt, brian Foster, Ricky Nguyen, Paul Ramirez, Sheryl 
 John, and Thomas Bennett.
 
 
 Summary
 ---
 
 Task Querier thread for OODT-310. See javadocs on: 
 https://builds.apache.org/job/oodt-trunk/javadoc/org/apache/oodt/cas/workflow/engine/TaskQuerier.html
 
 
 This addresses bug OODT-310.
 https://issues.apache.org/jira/browse/OODT-310
 
 
 Diffs
 -
 
   
 ./trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/engine/TaskQuerier.java
  1332505 
   
 ./trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/engine/WorkflowProcessor.java
  1331866 
   
 ./trunk/workflow/src/test/org/apache/oodt/cas/workflow/engine/TestTaskQuerier.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/4961/diff
 
 
 Testing
 ---
 
 Includes unit test, that currently isn't passing. I think I know why 
 (something up with my threading logic and synchronized keywords) but wanted 
 to throw it up for review. I'll likely be working on this tomorrow or the 
 following evening.
 
 
 Thanks,
 
 Chris
 




Re: Staging CAS-PGE config file...

2012-05-01 Thread Sheryl John
Hi Brian,

I'm still going through the wmgr-2 / wengine-branch CAS-PGE, but are you
suggesting we always use the Resource Manager to support the configuration?

Actually, we've not yet used the Resource Manager with the Workflow Manager and
CAS-PGE in the VPICU project (we're currently in the process of learning and
setting it up), so I have little knowledge of how the Resource Manager can
support this. But if it does, does that imply that I have to use PGEs and
workflows with the Resource Manager?

I agree that setting the *ConfigFilePath* to the temporary dir where the
task executes, from within the config itself, is kind of confusing; the
configuration should be available before the task runs.

On Tue, May 1, 2012 at 5:20 PM, Brian Foster holeno...@mac.com wrote:

 hey guys,

 in the wengine-branched CAS-PGE, it supported staging the CAS-PGE XML
 config file to a tmp directory so it could be parsed and then processed, and
 then the staged config file was copied to CAS-PGE's working directory (it had
 to be copied later since the working-directory information is in the config
 file).  I think this is something the resource manager should instead
 support... staging the job binaries and config that are needed to run the
 jobs would be a cleaner implementation than what wengine CAS-PGE does...
 CAS-PGE would still stage Products and ingest them itself (that is a
 CAS-PGE-specific task), however the knowledge of getting CAS-PGE's
 configuration file, which configures it, should already be there when it
 runs... otherwise you kinda need configuration for the CAS-PGE configuration
 (chicken and egg problem)... what do you guys think?

 -brian




-- 
-Sheryl


Re: Registering a custom ProductCrawler with cas-crawler

2012-04-27 Thread Sheryl John
Ah, interesting.
Thanks for sharing, Rishi!

On Fri, Apr 27, 2012 at 11:37 AM, Verma, Rishi (388J) 
rishi.ve...@jpl.nasa.gov wrote:

 Hey All,

 Chris and I had a lively discussion over IM about whether
 to write a custom crawler or use actionId/precondId-based extension
 points.

 We thought it would be useful to share, so I've made it available on the
 OODT wiki:
 https://cwiki.apache.org/confluence/display/OODT/2012/04/27/Custom+crawling+-+when+to+or+when+not+to+write+your+own+ProductCrawler


 Thanks!
 rishi

 On 4/26/12 1:25 PM, Verma, Rishi (388J) rishi.ve...@jpl.nasa.gov
 wrote:

 Per Chris' suggestion, I'm looking at making a custom pre-ingest action or
 pre-ingest comparator instead of creating a full new ProductCrawler. This
 might be a more lightweight solution.
 
 However, thanks for the tips in any case Brian and Chris!
 
 rishi
 
 On 4/26/12 2:06 AM, Brian Foster holeno...@me.com wrote:
 
 Nevermind... Looks like you are using 0.3 instead of the trunk... what I
 added applies to trunk crawler
 
 -Brian
 
 On Apr 25, 2012, at 4:36 PM, Verma, Rishi (388J)
 rishi.ve...@jpl.nasa.gov wrote:
 
  Hi all,
 
  I wrote a custom cas-crawler ProductCrawler, but I'm having some
 difficulty registering my custom product crawler with cas-crawler.
 
  I created a product crawler by extending StdProductCrawler, and I've
 added this product-crawler name to crawler config files (following the
 example of StdProductCrawler):
  * crawler/policy/crawler-beans.xml
  * crawler/policy/cmd-line-option-beans.xml
 
  However, after running the below command, I can clearly see my custom
 product crawler (called LabCASProductCrawler) is not available. A
 crawler ingest try also tells me that there is no bean by the name of
 my LabCASProductCrawler available:
  bash-3.2$ ./crawler_launcher --printSupportedCrawlers
  ProductCrawlers:
   Id: StdProductCrawler
   Id: MetExtractorProductCrawler
   Id: AutoDetectProductCrawler
 
  ./crawler_launcher --crawlerId LabCASProductCrawler --filemgrUrl
 http://localhost:9000 --productPath /data/staging/HGHAGA9 --failureDir
 /tmp/failed_ingest --metFileExtension met --clientTransferer
 org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory
  Failed to parse options : No bean named 'LabCASProductCrawler' is
 defined
 
  I noticed in files like crawler-config.xml and
 cmd-line-option-beans.xml, there were references made to crawler config
 files stored in the cas-crawler JAR. Looking more into this, it seems to
 me that crawler is pre-loading config files directly from that JAR and
 overshadowing any of my config changes:
  * crawler/lib/cas-crawler-0.3.jar:org/apache/oodt/cas/crawl/crawler-beans.xml
  * crawler/lib/cas-crawler-0.3.jar:org/apache/oodt/cas/crawl/crawler-config.xml
 
  So two questions:
  1. Am I editing the correct policy files, in order to register my
 custom product crawler with cas-crawler?
  2. It seems the cas-crawler JAR contains crawler config files that take
 greater precedence than the ones available for editing under
 crawler/policy. Is there a way around this?
 
  Thanks!
  rishi
 




-- 
-Sheryl


Review Request: Rollback capability for workflows

2012-04-18 Thread Sheryl John

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/4790/
---

Review request for oodt, Chris Mattmann, brian Foster, Ricky Nguyen, Paul 
Ramirez, and Thomas Bennett.


Summary
---

First draft of the 'least dramatic' option suggested by Chris to support the
OODT-212 feature.
Not sure what other expended resources should be cleaned up here, and maybe
there's a better way of doing this.


Diffs
-

  
https://svn.apache.org/repos/asf/oodt/trunk/workflow/src/main/java/org/apache/oodt/cas/workflow/structs/RollbackableWorkflowTaskInstance.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/4790/diff


Testing
---

none


Thanks,

Sheryl



Re: workflow task/condition question

2012-04-10 Thread Sheryl John
Hi Ryan,

You can specify properties for your conditions in the conditions.xml.
There's an example with properties for urn:oodt:CheckForMetadataKeys in
the /policy/conditions.xml. Also check out the other examples.
So if you're defining a new condition class, you'll have to add it to the
conditions.xml and include the properties for that condition.

Is that what you were looking for?



On Tue, Apr 10, 2012 at 12:35 PM, Gerard, Ryan S. (GSFC-586.0)[COLUMBUS
TECHNOLOGIES AND SERVICES INC] ryan.s.ger...@nasa.gov wrote:

 Hello,

 We have a question regarding our workflow tasks. We are configuring our
 tasks.xml file and need to define some properties for our conditions. We
 would like to create a general condition and reuse it for many tasks. Is
 there a way to do this?

  <task id="urn:oodt:HelloWorld" name="Hello World"
        class="org.apache.oodt.cas.workflow.examples.HelloWorld">
      <conditions>
          <condition id="urn:oodt:TrueCondition"
              IS THERE A WAY TO DEFINE A PROPERTY IN THE CONDITION HERE
          />
      </conditions>
      <configuration>
          <property name="Person" value="Chris"/>
      </configuration>
  </task>

 Thanks,
 Ryan Gerard




-- 
-Sheryl


Re: Review Request: OODT-413: filemgr query throws NPE when some products have undefined metadata values

2012-03-15 Thread Sheryl John

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/4374/#review6019
---

Ship it!


Unit test looks good too!

- Sheryl


On 2012-03-16 03:18:23, Ricky Nguyen wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/4374/
 ---
 
 (Updated 2012-03-16 03:18:23)
 
 
 Review request for oodt, Chris Mattmann, brian Foster, Paul Ramirez, Sheryl 
 John, and Thomas Bennett.
 
 
 Summary
 ---
 
 filemgr query throws NPE when some products have undefined metadata values
 
 
 This addresses bug OODT-413.
 https://issues.apache.org/jira/browse/OODT-413
 
 
 Diffs
 -
 
   
 trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/catalog/LuceneCatalog.java
  1301315 
   
 trunk/filemgr/src/test/org/apache/oodt/cas/filemgr/catalog/TestLuceneCatalog.java
  1301315 
 
 Diff: https://reviews.apache.org/r/4374/diff
 
 
 Testing
 ---
 
 tested at CHLA with command:
 
 ./filemgr-client -u $FILEMGR_URL -op -sql -of '$VpsEpisodeStartTime' -q 
 SELECT VpsEpisodeStartTime FROM CernerEvents
 
 
 Thanks,
 
 Ricky
 




Re: [ANNOUNCE] Welcome Bruce Barkstrom as Apache OODT PMC member + committer

2012-03-12 Thread Sheryl John
Welcome aboard Sir!
We're glad to have you on the team. :)

On Mon, Mar 12, 2012 at 10:37 AM, Bruce Barkstrom brbarkst...@gmail.comwrote:

 OK - I'm a retired civil servant with about 25 years with NASA and 2 with
 NOAA.
 I was effectively PI for NASA's Earth Radiation Budget Experiment (ERBE)
 that
 measured the reflected sunlight and emitted terrestrial flux, determining
 that
 clouds cool the current climate.  I did the system architecture for
 the data production
 system that eventually had about 1/4 million lines of FORTRAN.  After
 ten years of
 that activity, including leading a science team with about twenty
 major Co-I's, I became
 a PI (with Bruce Wielicki) on the NASA EOS investigation of Clouds and
 the Earth's
 Radiant Energy System (CERES).  The team was about the same size as ERBE's,
 but the code (for which I was again the system architect) ran to about
 2/3 million
 SLOC.  I spent five years as Head of the LaRC Atmospheric Sciences Data
 Center
 and had participated in all of the major design reviews for the Earth
 Observing System
 Data and Information System (EOSDIS).  At NOAA's National Climatic Data
 Center
 (NCDC), I became the Data Stewardship Project Manager for a couple of
 years.
 Since retiring from the federal government, I've participated heavily
 in the Earth Science
 Information Partner (ESIP) Federation's Working Group on Data Preservation
 (and
 related topics).  That's probably more than enough about me.

 Bruce B.

 On Mon, Mar 12, 2012 at 11:23 AM, Mattmann, Chris A (388J)
 chris.a.mattm...@jpl.nasa.gov wrote:
  Hi All,
 
  We just elected to add Bruce Barkstrom to our ranks as an Apache OODT
 PMC member and committer.
  Welcome Bruce! Feel free to say a bit about yourself.
 
  Cheers,
  Chris
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 




-- 
-Sheryl


Filemgr query not returning list of values

2012-03-09 Thread Sheryl John
Hi,

I'm working with the 0.4-SNAPSHOT (not the latest build), and when querying the
File Manager on a multi-valued key, it's not returning the expected list of
values; it returns only the first value. I tested the query with
filemgr-client and query_tool, and both return only the first value.

I checked the filemgr catalog using the Lucene Luke tool and I can see all
the expected values for that key, so the key does have more than one
metadata value. I tried the same query on 0.3, and that returns the
concatenated list of values for the key.

Have I missed something, or has anyone else come across this before with
the 0.4-SNAPSHOT? I haven't checked the code yet.
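
For reference, the kind of query I mean looks roughly like this (just a
sketch following the -sql form used elsewhere on this list; the key and
product type names here are placeholders for our own):

$ ./filemgr-client -u $FILEMGR_URL -op -sql -of '$SomeMultiValuedKey' \
    -q "SELECT SomeMultiValuedKey FROM SomeProductType"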

Thanks
Sheryl


Re: Adding Apache OODT to CodeMaps

2012-03-01 Thread Sheryl John
Doesn't look like CodeMaps has the search capability like OpenGrok. And, I
agree that the code-search tool would be nice to have too.

On Thu, Mar 1, 2012 at 9:27 AM, Verma, Rishi (388J) 
rishi.ve...@jpl.nasa.gov wrote:

 Hi Sheryl, Chris

 CodeMaps looks nice. Thanks for finding that. Does it have searching
 capability though?

 Come to think of it, a fast code search-engine on the OODT site itself
 would be a good idea.. perhaps using OpenGrok [1] or some equivalent.

 I can file a JIRA for this.

 --
 [1] http://hub.opensolaris.org/bin/view/Project+opengrok/

 On 2/29/12 10:28 PM, Mattmann, Chris A (388J)
 chris.a.mattm...@jpl.nasa.gov wrote:

 +1 to add OODT to CodeMaps -- looks useful...
 
 Cheers,
 Chris
 
 On Feb 29, 2012, at 9:26 PM, Sheryl John wrote:
 
  Hi all,
 
  I just found out about CodeMaps ( http://www.codemaps.org/) and they've
  added Tika.
 
  Shall we add OODT to CodeMaps?
 
  I've asked them what to do to add OODT  and Vineet just replied:
 
  -- Forwarded message --
  From: Vineet Sinha vin...@architexa.com
  Date: Wed, Feb 29, 2012 at 9:21 PM
  Subject: Re: Adding Apache OODT to CodeMaps
  To: shery...@gmail.com
  Cc: Dev @ Architexa d...@architexa.com
 
 
  Sheryl,
  Glad to hear. To add Apache OODT to CodeMaps just login to the site and
  click add project. It will get added to the queue.
 
  If you can promise to spend 30 minutes providing some basic information
 for
  the site - I am sure I can convince my team to get it done first thing
 in
  the morning. :-)
 
  Regards,
  Vineet
  --
  President  CTO, Architexa - www.architexa.com
  Understand  Document Code In Seconds
  vin...@architexa.com :: 617.818.0548
 
 
 
  On Thu, Mar 1, 2012 at 12:18 AM, Sheryl John sher...@usc.edu wrote:
 
  Hi Vineet,
 
  I've just seen CodeMaps and the Tika documentation and looks awesome!
 
  How can we add Apache OODT (http://oodt.apache.org/) to CodeMaps?
 
 
  Thanks!
  --
  Sheryl
 
 
 
 
 
  --
  -Sheryl
 
 
 ++
 Chris Mattmann, Ph.D.
 Senior Computer Scientist
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 171-266B, Mailstop: 171-246
 Email: chris.a.mattm...@nasa.gov
 WWW:   http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Assistant Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++
 




-- 
-Sheryl


Re: Continuing a workflow after restarting the workflow manager

2012-02-28 Thread Sheryl John
Yes, you're right. Sorry for the confusion.
The repo will have all the workflow instances executed so far, and you can
even access the instances' metadata (there's a new command-line option for
this in 0.4).
But, as you've seen, once you restart the wmgr it doesn't track the
previous instances.

I think it would be nice to have that functionality in the Workflow Manager
too. I'm not aware of any other way to do this.
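
For example, something along these lines (a rough sketch; ./wmgr-client --help
is the authoritative list of options, and the --url/--operation form and the
port here are assumptions based on the other CAS clients):

# list the workflow instances the running Workflow Manager is tracking
$ ./wmgr-client --url http://localhost:9001 --operation --getWorkflowInsts

# full menu of instance operations (pause/resume/stop, etc.)
$ ./wmgr-client --help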

On Tue, Feb 28, 2012 at 2:23 PM, Keith Cummings kcumm...@nrao.edu wrote:

 Hi Sheryl.
 I tried using the wmgr-client command line options to pause/resume/stop
 workflow instances as you suggested.  It worked great; thanks for pointing
 me there.  FYI, I'm using v0.3, if that matters.

 As for the repo getting wiped out when the Workflow Manager is restarted,
 that's not what I see.  I'm using the CAS Workflow Manager Monitor Web App
 to view what's going on.  It continues to show the partially completed
 workflow after multiple restarts of the Workflow Manager.  So the repo is
 not getting wiped out, but that old workflows don't seem to be accessible.
  If I try to pause/resume/stop this workflow from the command line, I get
 the following message:

 WARNING: WorkflowEngine: Attempt to resume workflow instance id:
 119a9ebd-625a-11e1-b564-9bb1991d21af, however, this engine is not
 tracking its execution

 So it's still in the repo, but the current manager isn't tracking it, so
 can't modify it.

 The use case I'm concerned about is losing partially completed workflows
 if the server crashes or is rebooted.  There are other ways to protect
 against this, but it would be nice if the Workflow Manager would simply
 start back up where it left off.

 Thanks,
 Keith




 Sheryl John wrote:

 Hi Keith,

 First of all welcome to the OODT world!

 If you restart your Workflow Manager, a new repo is created and all previous
 workflow instances are wiped out, so the engine does not track previous
 tasks in your old workflow instance.
 You can check this using the command-line option --getWorkflowInsts, or
 try ./wmgr-client --help for other cmd-line options. If you're using
 0.4-SNAPSHOT you'll see the latest command-line menu.

 If you haven't restarted your wmgr, you can start, pause and resume your
 workflow instance (and this should complete the tasks in that workflow)
 with the cmd-line options.
 I usually restart the wmgr after I've modified policy files or changed
 environment variables/locations.

 Hope this helps!

 Sheryl

 On Tue, Feb 28, 2012 at 8:37 AM, Keith Cummings kcumm...@nrao.edu
 wrote:



 Hello,
 I'm at NRAO and we are considering using OODT in the near future and I
 just started playing around with it, specifically the Workflow
 Manager.

 I was wondering if the Workflow Manager is able to restart workflows that
 are partially complete.  For example, I started a workflow that has four
 tasks.  During the processing of the second task, I stopped the Workflow
 Manager, then restarted it.  The Workflow Manager does NOT continue
 processing this task/workflow.  Is there a way to have it pick up where
 it
 left off?

 Thanks,
 Keith Cummings
 NRAO
 Socorro, NM











-- 
-Sheryl


Re: Issue with query_tool when all product types are not populated

2012-02-23 Thread Sheryl John
Tried single, double, and no quotes. In all cases SELECT * did not work, but
it works if I specify the element names or the product type.
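
To be concrete, this is roughly what I tried (the commands are the same ones
from my other mail on this issue; CernerLocations is just a product type from
our own catalog):

# fails regardless of how the * is quoted:
$ ./filemgr-client --url http://localhost:9000 -op -sql -query 'SELECT * FROM CernerLocations'

# works once the elements are spelled out:
$ ./filemgr-client --url http://localhost:9000 -op -sql -query 'SELECT CAS.ProductName,PID FROM CernerLocations'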

On Thu, Feb 23, 2012 at 11:47 AM, Brian Foster holeno...@me.com wrote:

 Hey Sean,

 Maybe try using single quotes instead of double... i.e. ... -query 'SELECT
 * FROM *'... your shell is prob expanding your *'s

 -Brian

 Hardman, Sean H (388J) sean.h.hard...@jpl.nasa.gov wrote:

 Hey Chris,
 
 I would love to prove that theory correct but I am not using variables to
 specify my policy directories. I am using absolute paths. Your reply did
 prompt me to run through the tests again and I did discover something in
 the filemgr-client script. In April of last year, I created an issue [1]
 regarding the query_tool script and its lack of support for quoted
 parameters. The resolution was to replace $* in the script with $@. Now
 that the filemgr-client script needs to support the same arguments that
 the query_tool script supports, I am thinking it needs the same patch as
 well. I made that change locally and although the query still fails as
 documented in [2], the error below that you reference no longer is
 generated by the server. I am not entirely clear on the connection between
 this patch and the original error (maybe you or Brian could enlighten me),
 but it does appear worthy of a new JIRA issue.
 
 Sean
 
 [1] https://issues.apache.org/jira/browse/OODT-185
 [2] https://issues.apache.org/jira/browse/OODT-384
 
 On 2/23/12 6:21 AM, Mattmann, Chris A (388J)
 chris.a.mattm...@jpl.nasa.gov wrote:
 
 Hey Sean,
 
 Thanks, the below helped, as I think I know now what your problem is.
 Fast forward to here:
 
 On Feb 21, 2012, at 11:01 AM, Hardman, Sean H (388J) wrote:
 
  And generates the following on the server side:
 
  Feb 21, 2012 9:32:29 AM
  org.apache.oodt.cas.filemgr.repository.XMLRepositoryManager
  getProductTypeByName
  WARNING: XMLRepositoryManager: Unable to find product type:
 [convert_map
  filemgr filemgr-client migrate_xml_policy query_tool], returning null
  java.lang.NullPointerException
 
 This looks like an incorrect ENV var issue, where you assumed in your
 product type policy that a particular environment variable was present
 when you started file manager, but in the end, it wasn't and you
 defaulted
 to the current directory (which looks like bin) where you started file
 manager
 from as your policy directory.
 
 Can you please confirm that your product type repository path for these
 three references an environment variable that is actually present when
 you
 started the file manager? One way to do this is to simply make sure it's
 there
 explicitly in the session (via export or setenv) and then ./filemgr
 restart.
 
 HTH,
 Chris
 
 
  On 2/20/12 11:45 PM, Brian Foster holeno...@me.com wrote:
 
  Hey Sean,
 
  Try using the new SqlQuery action in 0.4 filemgr
 
  -Brian
 
  On Feb 20, 2012, at 6:30 PM, Hardman, Sean H (388J)
  sean.h.hard...@jpl.nasa.gov wrote:
 
  I first noticed this behavior in release 0.3 and just reaffirmed it
 in
  a latest and greatest build of 0.4-SNAPSHOT. To the best of my
  knowledge, this was not the case in previous versions. When querying
 a
  File Manager instance with a Lucene Catalog on the back end, the
  query_tool will throw an exception unless there is a product ingested
  for each product type listed in the policy (see the stack trace
 below).
 
  I assume I am not the first person to notice this behavior. Is this
  worthy of a JIRA issue?
 
  Thanks,
  Sean
 
 
  bash-3.2$ ./query_tool --url ${FILEMGR_URL} --sql -query SELECT *
 FROM
  *
  Feb 20, 2012 5:52:26 PM
  org.apache.oodt.cas.filemgr.catalog.LuceneCatalog paginateQuery
  WARNING: Query: [q=] for Product Type: [urn:pds:CatalogObject]
 returned
  no results
  java.lang.NullPointerException
  at org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.complexQuery(XmlRpcFileManager.java:602)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
  at org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
  at org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
  at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:139)
  at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:125)
  at org.apache.xmlrpc.WebServer$Connection.run(WebServer.java:761)
  at org.apache.xmlrpc.WebServer$Runner.run(WebServer.java:642)
  at java.lang.Thread.run(Thread.java:680)
  org.apache.xmlrpc.XmlRpcException: java.lang.Exception:
  org.apache.oodt.cas.filemgr.structs.exceptions.CatalogException: Failed
  to perform complex query : null
  at org.apache.xmlrpc.XmlRpcClientResponseProcessor.decodeException(XmlRpcCli

Re: Issue with query_tool when all product types are not populated

2012-02-22 Thread Sheryl John
Hi,

I've experienced the same today while working with the 0.4-SNAPSHOT (not the
latest build), and the query does not seem to accept '*' for the elements.
For the following use case:

./filemgr-client --url http://localhost:9000 -op -sql -query SELECT * FROM
CernerLocations

The stack trace:

Feb 22, 2012 2:36:52 PM
org.apache.oodt.cas.filemgr.system.XmlRpcFileManager complexQuery
INFO: Query returned 2 results
org.apache.xmlrpc.XmlRpcException: org.apache.xmlrpc.XmlRpcException: null
values not supported by XML-RPC
at org.apache.xmlrpc.XmlRpcClientResponseProcessor.decodeException(XmlRpcClientResponseProcessor.java:104)
at org.apache.xmlrpc.XmlRpcClientResponseProcessor.decodeResponse(XmlRpcClientResponseProcessor.java:71)
at org.apache.xmlrpc.XmlRpcClientWorker.execute(XmlRpcClientWorker.java:73)
at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:194)
at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:185)
at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:178)
at org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient.complexQuery(XmlRpcFileManagerClient.java:974)
at org.apache.oodt.cas.filemgr.cli.action.AbstractQueryCliAction.execute(AbstractQueryCliAction.java:75)
at org.apache.oodt.cas.cli.CmdLineUtility.execute(CmdLineUtility.java:296)
at org.apache.oodt.cas.cli.CmdLineUtility.run(CmdLineUtility.java:179)
at org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient.main(XmlRpcFileManagerClient.java:1307)
ERROR: Failed to perform sql query : sortBy 'null', outputFormat 'null',
and delimiter '
', filterAlgor 'null', startDateTimeMetKey 'null', endDateTimeMetKey
'null', priorityMetKey 'null', null' : org.apache.xmlrpc.XmlRpcException:
null values not supported by XML-RPC


But if I add the element names in the query, I get the desired query results.

./filemgr-client --url http://palmer:9000 -op -sql -query SELECT CAS.ProductName,PID FROM CernerLocations

Feb 22, 2012 2:45:22 PM org.apache.oodt.cas.filemgr.system.XmlRpcFileManager complexQuery
INFO: Query returned 2 results
blah_xx.csv,
blah_yy.csv,




On Tue, Feb 21, 2012 at 1:38 PM, Mattmann, Chris A (388J) 
chris.a.mattm...@jpl.nasa.gov wrote:

 Hey Sean,

 Thanks for the FYI. OK, so it's not a regression, but it sounds worthy of a JIRA
 issue. I'd say go for it and let's investigate the fix, even if it means porting it
 out of wengine.

 Cheers,
 Chris

 On Feb 21, 2012, at 10:23 AM, Hardman, Sean H (388J) wrote:

  Hey Chris,
 
  Yes, the identical behavior can be seen in 0.3; I am just slow to complain
  about it. I haven't tried 0.2, but this query does work successfully in
  the wengine-branch version of the FM. For clarification, the --lucene
  query option [1] and --sql query option [2], where I limit the query to the
  product type that I ingested with, work fine.
 
  Thanks,
  Sean
 
  [1] query_tool --url ${FILEMGR_URL} --lucene -query ProductType:CatalogFile
  [2] query_tool --url ${FILEMGR_URL} --sql -query SELECT * FROM CatalogFile
 
  On 2/20/12 11:28 PM, Mattmann, Chris A (388J)
  chris.a.mattm...@jpl.nasa.gov wrote:
 
  Hey Sean,
 
  Interesting. Can you try with 0.3 and see if it gives you a different
  behavior? If it's a regression I'm happy to file an issue
  and/or create a unit test for it.
 
  Cheers,
  Chris
 
  On Feb 20, 2012, at 6:30 PM, Hardman, Sean H (388J) wrote:
 
  I first noticed this behavior in release 0.3 and just reaffirmed it in
  a latest and greatest build of 0.4-SNAPSHOT. To the best of my
  knowledge, this was not the case in previous versions. When querying a
  File Manager instance with a Lucene Catalog on the back end, the
  query_tool will throw an exception unless there is a product ingested
  for each product type listed in the policy (see the stack trace below).
 
  I assume I am not the first person to notice this behavior. Is this
  worthy of a JIRA issue?
 
  Thanks,
  Sean
 
 
  bash-3.2$ ./query_tool --url ${FILEMGR_URL} --sql -query SELECT * FROM *
  Feb 20, 2012 5:52:26 PM org.apache.oodt.cas.filemgr.catalog.LuceneCatalog paginateQuery
  WARNING: Query: [q=] for Product Type: [urn:pds:CatalogObject] returned no results
  java.lang.NullPointerException
  at org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.complexQuery(XmlRpcFileManager.java:602)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
  at org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
  at org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
  at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:139)
  at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:125)
  at org.apache.xmlrpc.WebServer$Connection.run(WebServer.java:761)
 

Re: Web-Grid

2012-02-02 Thread Sheryl John
Hi BW,

Can we see the class diagram that you're referring to?

Thanks,
Sheryl

On Thu, Feb 2, 2012 at 12:26 PM, BW b.du...@gmail.com wrote:



 BW

 On Feb 2, 2012, at 11:28 AM, Mattmann, Chris A (388J) 
 chris.a.mattm...@jpl.nasa.gov wrote:

  Hi BW,
 
  On Feb 1, 2012, at 9:20 PM, B W wrote:
 
  Hi Chris.
 
   I was looking at the Product and Profile packages where the QueryHandler is
   located. So, the deployed web-grid is decoupled from the Prod/Prof
   packages?
 
  Yep it sure is. Profile/Product handlers exist here:
 
  http://svn.apache.org/repos/asf/oodt/trunk/profile
  http://svn.apache.org/repos/asf/oodt/trunk/product
 
  and Web-Grid exists here:
 
  http://svn.apache.org/repos/asf/oodt/trunk/grid
 
 
  But, the packages are used by an adopting organization to architect the
  definition and integration of profile and product components and the
  associated metadata?
 
  Yep they typically simply depend on the jar files produced as part of
  an independent, Maven-based module build.
 
 
  I'm just trying to evaluate and document the Web-Grid architectural aspect
  and I'm not sure if I need to include those 2 packages. Trying to keep it
  constrained given my schedule.
 
  NP, as it stands, web-grid must depend on those 2 packages, to at least
  define the interfaces (profile and product) for its subsequent ProfileHandlers
  and QueryHandlers that it exposes.
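 
  To make that concrete, a bare-bones handler against those interfaces could
  look like the sketch below; the class name and no-op body are purely
  illustrative (not code from the grid module), and it assumes the product
  module's QueryHandler and XMLQuery classes are on the classpath:
 
  import org.apache.oodt.product.ProductException;
  import org.apache.oodt.product.QueryHandler;
  import org.apache.oodt.xmlquery.XMLQuery;
 
  // Illustrative no-op handler: web-grid would invoke query(), and a real
  // implementation would inspect the incoming query and append matching
  // results before handing it back.
  public class EchoQueryHandler implements QueryHandler {
      public XMLQuery query(XMLQuery q) throws ProductException {
          return q; // echo the query back unchanged
      }
  }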

 ok. Thx. Is it sufficient to only show the relation as I did in
 the class diagram, or should I expand and document it further?
  HTH,
  Chris
 
 
  On Wed, Feb 1, 2012 at 6:43 PM, Mattmann, Chris A (388J) 
  chris.a.mattm...@jpl.nasa.gov wrote:
 
  Hey BW,
 
  At one point it was, yes. In the current Apache implementation, we opted
  for more interactivity, and it serves simply as the REST endpoint and handles
  the marshaling of queries and results and/or profiles back to the user.
 
  Cheers,
  Chris
 
  On Feb 1, 2012, at 8:37 AM, BW wrote:
 
  Is the QueryServlet responsible for aggregating the query results?
 
  BW
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
 
  ++
  Chris Mattmann, Ph.D.
  Senior Computer Scientist
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 171-266B, Mailstop: 171-246
  Email: chris.a.mattm...@nasa.gov
  WWW:   http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Assistant Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 




-- 
-Sheryl


Removing Product Types from Repository Manager

2011-04-02 Thread Sheryl John
Hi,

I am curious about the removeProductType() method in the following class:
https://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/repository/DataSourceRepositoryManager.java

Since each Product is a member of a single Product Type, don't we have to delete
all the Products of the deleted Product Types too (i.e. the reference and
metadata tables)?

I read the TODO comment and I am just curious as to why the deletion was not
done here or rather, the reason behind the assumption in the comment.


Thanks,
-- 
Sheryl


Re: Removing Product Types from Repository Manager

2011-04-02 Thread Sheryl John
Oh Ok. Got it.

It makes sense now. I thought the ProductType ID was a key constraint for the
Products table too. I checked the SQL files and realized that disabling or
removing the Product Types wouldn't create a problem here.

Thanks!

On Sat, Apr 2, 2011 at 6:38 PM, Mattmann, Chris A (388J) 
chris.a.mattm...@jpl.nasa.gov wrote:

 Hi Sheryl,

 I think the thinking there was that a user may in fact use
 removeProductType to simply disable a product type, but may later
 somehow want to undo that.

 Ideally the function should have an option as to whether or not to actually
 remove the met and refs tables, but that's not there, yet... ^_^ Patches
 welcome!
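 
 Just to sketch what such an option might look like (a hypothetical helper, not
 existing OODT code; it only assumes the RepositoryManager interface's
 removeProductType(ProductType) method), a patch could start from something like:
 
 import org.apache.oodt.cas.filemgr.repository.RepositoryManager;
 import org.apache.oodt.cas.filemgr.structs.ProductType;
 import org.apache.oodt.cas.filemgr.structs.exceptions.RepositoryManagerException;
 
 // Hypothetical sketch: lets the caller decide whether products go too.
 public class ProductTypeRemover {
     private final RepositoryManager repo;
 
     public ProductTypeRemover(RepositoryManager repo) {
         this.repo = repo;
     }
 
     public void remove(ProductType type, boolean removeProducts)
             throws RepositoryManagerException {
         if (removeProducts) {
             // A real patch would also ask the Catalog for every product of
             // this type and remove its references and metadata here.
         }
         repo.removeProductType(type); // current behavior: drop the type only
     }
 }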

 Cheers,
 Chris

 On Apr 2, 2011, at 4:26 PM, Sheryl John wrote:

  Hi,
 
  I am curious about the removeProductType() method in the following class:
  https://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/repository/DataSourceRepositoryManager.java
 
  Since each Product is a member of a single Product Type, don't we have to delete
  all the Products of the deleted Product Types too (i.e. the reference and
  metadata tables)?
 
  I read the TODO comment and I am just curious as to why the deletion was
 not
  done here or rather, the reason behind the assumption in the comment.
 
 
  Thanks,
  --
  Sheryl


 ++
 Chris Mattmann, Ph.D.
 Senior Computer Scientist
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 171-266B, Mailstop: 171-246
 Email: chris.a.mattm...@nasa.gov
 WWW:   http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Assistant Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++




-- 
Sheryl


Precondition Comparator in Crawler Framework

2011-03-19 Thread Sheryl John
Hi,


Under the Comparator package in the OODT CAS Crawler Framework, there is a
class FilemgrUniqunessCheckComparator (with a performCheck method), and though
this check has to be satisfied before metadata extraction and ingestion, I don't
see any class depending on this class or calling the performCheck method.

However, under the package there is another class FilemgrUniquenessChecker
whose method performAction is called in CrawlerAction.

Have I missed any class that depends on the FilemgrUniqunessCheckComparator?

-Sheryl


Re: Using CDA to analyze OODT

2011-03-17 Thread Sheryl John
Thanks Krittaya!

I'm able to do that for the file manager workset.
Have you checked all the OODT component worksets?

2011/3/17 krittaya chunhaviriyakul krittaya_c...@hotmail.com


 Hi Sheryl,
 Thank you for your reply. From the example on the webpage, after you open
 the CDA.ws in the CDA tool, you can choose a class under a package and then
 choose the 'dependencies' tab next to 'overview'. Here, right-click on a
 package and choose 'Track dependency to ...  ..  '; the dependency
 graph will be displayed. But it depends on the classes, though... Try some of them
 out. So far, I haven't got anything like that for OODT yet.
 Hope that helps!
 Krittaya
  Date: Thu, 17 Mar 2011 08:18:18 -0700
  Subject: Re: Using CDA to analyze OODT
  From: shery...@gmail.com
  To: dev@oodt.apache.org
 
  Hi,
 
   And, I have not yet figured out how to get the dependency graph from one
   class to another, like how it's shown on the
   http://www.dependency-analyzer.org/ site.
 
  On Thu, Mar 17, 2011 at 8:01 AM, Sheryl John shery...@gmail.com wrote:
 
   Hi Tom and Krittaya,
  
   Are you both looking for the dependency path graphs?
   If yes, do the following:
    Select and right-click the respective classes. You will then get a
    drop-down menu; select Show dependency graph for selected element.
  
  
  
   2011/3/17 krittaya chunhaviriyakul krittaya_c...@hotmail.com
  
  
   Hi Tom,
  
    I still haven't figured out how to get dependency paths between classes.
    But I can get all the dependencies that affect one class by choosing
    track the dependencies...
  
   Cheers,
   Krittaya
  
  
  
   Date: Thu, 17 Mar 2011 01:06:40 -0700
   Subject: Using CDA to analyze OODT
   From: tomamund...@gmail.com
   To: dev@oodt.apache.org
  
   Hi All,
  
  
    Has anyone figured out how to use CDA to get dependency paths from one
    class to another? I'm attaching an image of what I'm talking about, which is
    shown at www.dependency-analyzer.org. Perhaps I am just incompetent, but
    I cannot find this functionality through any menu or mouse interactions.
  
  
   Thanks,
   Tom
  
  
  
  
   --
   Sheryl
  
 
 
 
  --
  Sheryl





-- 
Sheryl