RE: (to Michael Starch) Problems Using Serializable Metadata after Loading Validation Layer

2015-08-31 Thread Michael Starch
Valerie,

My issue was actually using "System.setProperties(Properties properties)"
in my code, as this clobbers all existing system properties in your current
JVM.  The solution was to set each specific property I needed using
"System.setProperty(String name, String value)", as this doesn't clobber
other values.
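For illustration, a minimal Java sketch of the difference (the property
value shown here is just a placeholder, not taken from this thread):

    import java.util.Properties;

    public class SystemPropertyDemo {
        public static void main(String[] args) {
            // Replaces the ENTIRE set of system properties: anything the JVM
            // or other code set earlier is lost.
            Properties mine = new Properties();
            mine.setProperty("org.apache.oodt.cas.filemgr.validation.dirs",
                    "file:///tmp/policy");
            System.setProperties(mine);

            // Adds or updates a single key and leaves everything else intact.
            System.setProperty("org.apache.oodt.cas.filemgr.repositorymgr.dirs",
                    "file:///tmp/policy");
        }
    }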

If you are not calling these things programmatically, then it is a different
issue. I will happily look at any errors if you supply them, but a quick
note: FILEMGR_HOME may not be set. Is this environment variable defined in
your environment?

Michael
On Sep 1, 2015 4:23 AM, "Mallder, Valerie" <valerie.mall...@jhuapl.edu>
wrote:

> Hi Michael,
>
> Could you explain your fix to this issue with more detail?  I am having
> the same problem. The default filemgr.properties file sets these two values
> to directories that do not exist (so I have to set them to 'something'
> valid). Here are the default settings:
>
> # XML repository manager configuration
> org.apache.oodt.cas.filemgr.repositorymgr.dirs=file:///dir1,file:///dir2
>
> # XML validation layer configuration
> org.apache.oodt.cas.filemgr.validation.dirs=file:///dir1,file:///dir2
>
>
> And this is what I set them to:
>
> # XML repository manager configuration
>
> org.apache.oodt.cas.filemgr.repositorymgr.dirs=file://[FILEMGR_HOME]/policy/core
>
> # XML validation layer configuration
>
> org.apache.oodt.cas.filemgr.validation.dirs=file://[FILEMGR_HOME]/policy/core
>
> And I still get the error.  Could you explain more about how I can work
> around this issue?
>
> Thanks,
> Valerie
>
>
> > -Original Message-
> > From: mdsta...@gmail.com [mailto:mdsta...@gmail.com] On Behalf Of
> Michael
> > Starch
> > Sent: Friday, July 31, 2015 12:20 PM
> > To: dev@oodt.apache.org
> > Subject: Re: Problems Using Serializable Metadata after Loading
> Validation Layer
> >
> > All,
> >
> > I found the issue.
> >
> > Using "System.setProperties()" and filling it from properties read from
> > filemanager.properties clears out other properties setup by the system
> which was
> > needed in the XML calls for SerializableMetadata. I did the above call
> to setup
> > properties needed by the XMLValidationLayer
> >
> > To fix set only the properties you need individually.  This adds to the
> System
> > properties, not erasing them.
> >
> > System.setProperty("org.apache.oodt.cas.filemgr.repositorymgr.dirs",
> ...);
> >
> > System.setProperty("org.apache.oodt.cas.filemgr.validation.dirs", ...);
> >
> > -Michael
> >
> >
> > On Thu, Jul 30, 2015 at 4:20 PM, Michael Starch <starc...@umich.edu>
> wrote:
> >
> > > Here is the stack trace, but this only happens after a completely
> > > unrelated piece of the process loads the XML Validation Layer.
> > >
> > > -Michael
> > >
> > > java.lang.NullPointerException
> > > at com.sun.org.apache.xml.internal.serializer.ToStream.<init>(ToStream.java:143)
> > > at com.sun.org.apache.xml.internal.serializer.ToXMLStream.<init>(ToXMLStream.java:67)
> > > at com.sun.org.apache.xml.internal.serializer.ToUnknownStream.<init>(ToUnknownStream.java:143)
> > > at com.sun.org.apache.xalan.internal.xsltc.runtime.output.TransletOutputHandlerFactory.getSerializationHandler(TransletOutputHandlerFactory.java:160)
> > > at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:461)
> > > at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:344)
> > > at org.apache.oodt.cas.metadata.SerializableMetadata.writeMetadataToXmlStream(SerializableMetadata.java:157)
> > >
> > >
> > > On Thu, Jul 30, 2015 at 4:16 PM, Chris Mattmann
> > > <chris.mattm...@gmail.com>
> > > wrote:
> > >
> > >> Mike can you give some specific line numbers? I can help look
> > >>
> > >> —
> > >> Chris Mattmann
> > >> chris.mattm...@gmail.com
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> -Original Message-
> > >> From: <mdsta...@gmail.com> on behalf of Michael Starch <
> > >> starc...@umich.edu>
> > >> Reply-To: <dev@oodt.apache.org>
> > >> Date: Thursday, July 30, 2015 at 4:03 PM
> > >> To: <dev@oodt.apache.org>
> > >> Subject: Problems Using Serializable Metadata after Loading
> > >> Validation Layer
> > >>
> > >> >All,
> > >> >
> > >> >I am getting a NullPointerException deep in the XML library if I try
> > >> >to use the SerializableMetadata's write to xml function after I load
> > >> >in the XML Validation Layer from the filemanager. However, if I
> > >> >remove the call to load in the XML Validation Layer, everything
> > >> >works fine.  Any ideas as to what might cause this issue?
> > >> >
> > >> >Thanks,
> > >> >
> > >> >Michael
> > >>
> > >>
> > >>
> > >
>


Re: Organising the Poms

2015-08-06 Thread Michael Starch
+1
Paul R's recommendation is cleaner.  This is more likely to be adopted and
used.

I do have one concern.  We banished the streaming components to their own
submodule to keep their dependencies from mucking with the build.  Will
pulling these back to the top level reintroduce this issue?

Also, we should get in the habit of upgrading top-level versions, as
overriding in a child can lead to multiple versions of a jar on the
classpath in some situations, which in turn causes random runtime exceptions.

Michael
Nice +1

On Thursday, August 6, 2015, Ramirez, Paul M (398M) 
paul.m.rami...@jpl.nasa.gov wrote:

 I see what you're saying. Below is an example of what I'm saying. You have
 them as a dependency in the master pom now versus just a set of properties
 in the master pom. Both would accomplish the same thing. I agree mvn
 dependency:tree should not be affected in the child pom area but would list
 all those dependencies under the master too.


 master pom:

 <properties>
   <org.mockito.mockito-all.version>1.9.5</org.mockito.mockito-all.version>
 </properties>



 child pom:

 <dependency>
   <groupId>org.mockito</groupId>
   <artifactId>mockito-all</artifactId>
   <version>${org.mockito.mockito-all.version}</version>
   <scope>test</scope>
 </dependency>

 My personal preference is this, but since you already created the patch for
 the other version, I'm +1 on it unless the above convinces you it's better.


 --Paul

 
 Paul Ramirez, M.S.
 Technical Group Supervisor
 Computer Science for Data Intensive Applications (398M)
 Instrument Software and Science Data Systems Section (398)
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 158-264, Mailstop: 158-242
 Email: paul.m.rami...@jpl.nasa.gov
 Office: 818-354-1015
 Cell: 818-395-8194
 

 On Aug 6, 2015, at 7:48 AM, Tom Barber tom.bar...@meteorite.bi wrote:

 Hi Paul

 I'm not at my desk so I can't check dependency:tree but I wouldn't expect
 a different output.

 You also shouldn't lose track of module dependency requirements; the
 dependency is still listed in the child pom, it's just missing its version
 attribute. Parameterization seems like a lot of overkill and maintenance
 that would get ignored pretty quickly and gains you little.

 Tom
 On 6 Aug 2015 14:42, Ramirez, Paul M (398M) paul.m.rami...@jpl.nasa.gov
 wrote:

 Tom,

 An alternate approach would be to leave the dependencies as is but manage
 the versions as properties in the top level pom. With this patch we lose
 traceability of what dependencies are required where. This alternate
 approach would make overrides easier for people too as it would stand as a
 placeholder for folks to substitute out a property reference with a
 version.

 With this we lose the utility of mvn dependency:tree

 I'd align the property name with the fully qualified artifact name so that
 there is a clear mapping. I think this would accomplish what you were
 looking to do.

 Thoughts?

 --Paul

 ~~
 Paul Ramirez - Group Supervisor
 Computer Science for Data Intensive Applications
 Jet Propulsion Laboratory
 4800 Oak Grove Dr.
 Pasadena, CA 91109
 Office: 818-354-1015
 Cell: 818-395-8194
 ~~

 Sent from my iPhone

 On Aug 6, 2015, at 5:18 AM, Tom Barber tom.bar...@meteorite.bi wrote:

 Hello folks,

 I sent a pull request last night but it's also worth discussing on here.

 When StarchMD and I were having a chat in Austin, we wanted to sort out
 some of the build process and locations.

 Personally, one of my issues when using OODT is the sheer number of
 dependencies. Clearly most of these are required, but keeping track of the
 versions across modules is a pain. The pull request you see here:
 https://github.com/apache/oodt/pull/25 addresses that by moving the
 versions from the submodules up to OODT Core, so when a version is changed
 it is changed in all the submodules. This removes a lot of the duplication
 and I believe it makes it easier to see which version is being used.

 If there is a requirement to override a specific version of a dependency
 in a submodule this can still be done, but it would also be nicer, in my
 opinion, to just upgrade the main dependency so that all modules rely on
 the same version, which makes integration a whole lot easier.

 Let me know your thoughts.

 Thanks

 Tom





Re: Problems Using Serializable Metadata after Loading Validation Layer

2015-07-31 Thread Michael Starch
All,

I found the issue.

Using System.setProperties() and filling it from properties read from
filemanager.properties clears out other properties set up by the system,
which were needed in the XML calls for SerializableMetadata. I did the above
call to set up properties needed by the XMLValidationLayer.

To fix, set only the properties you need individually.  This adds to the
System properties instead of erasing them.

System.setProperty("org.apache.oodt.cas.filemgr.repositorymgr.dirs", ...);

System.setProperty("org.apache.oodt.cas.filemgr.validation.dirs", ...);

-Michael


On Thu, Jul 30, 2015 at 4:20 PM, Michael Starch starc...@umich.edu wrote:

 Here is the stack trace, but this only happens after a completely
 unrelated piece of the process loads the XML Validation Layer.

 -Michael

 java.lang.NullPointerException
 at com.sun.org.apache.xml.internal.serializer.ToStream.<init>(ToStream.java:143)
 at com.sun.org.apache.xml.internal.serializer.ToXMLStream.<init>(ToXMLStream.java:67)
 at com.sun.org.apache.xml.internal.serializer.ToUnknownStream.<init>(ToUnknownStream.java:143)
 at com.sun.org.apache.xalan.internal.xsltc.runtime.output.TransletOutputHandlerFactory.getSerializationHandler(TransletOutputHandlerFactory.java:160)
 at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:461)
 at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:344)
 at org.apache.oodt.cas.metadata.SerializableMetadata.writeMetadataToXmlStream(SerializableMetadata.java:157)


 On Thu, Jul 30, 2015 at 4:16 PM, Chris Mattmann chris.mattm...@gmail.com
 wrote:

 Mike can you give some specific line numbers? I can help look

 —
 Chris Mattmann
 chris.mattm...@gmail.com






 -Original Message-
 From: mdsta...@gmail.com on behalf of Michael Starch 
 starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Thursday, July 30, 2015 at 4:03 PM
 To: dev@oodt.apache.org
 Subject: Problems Using Serializable Metadata after Loading Validation
 Layer

 All,
 
 I am getting a NullPointerException deep in the XML library if I try to
 use
 the SerializableMetadata's write to xml function after I load in the XML
 Validation Layer from the filemanager. However, if I remove the call to
 load in the XML Validation Layer, everything works fine.  Any ideas as to
 what might cause this issue?
 
 Thanks,
 
 Michael






Re: Problems Using Serializable Metadata after Loading Validation Layer

2015-07-30 Thread Michael Starch
Here is the stack trace, but this only happens after a completely unrelated
piece of the process loads the XML Validation Layer.

-Michael

java.lang.NullPointerException
at com.sun.org.apache.xml.internal.serializer.ToStream.<init>(ToStream.java:143)
at com.sun.org.apache.xml.internal.serializer.ToXMLStream.<init>(ToXMLStream.java:67)
at com.sun.org.apache.xml.internal.serializer.ToUnknownStream.<init>(ToUnknownStream.java:143)
at com.sun.org.apache.xalan.internal.xsltc.runtime.output.TransletOutputHandlerFactory.getSerializationHandler(TransletOutputHandlerFactory.java:160)
at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:461)
at com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:344)
at org.apache.oodt.cas.metadata.SerializableMetadata.writeMetadataToXmlStream(SerializableMetadata.java:157)


On Thu, Jul 30, 2015 at 4:16 PM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 Mike can you give some specific line numbers? I can help look

 —
 Chris Mattmann
 chris.mattm...@gmail.com






 -Original Message-
 From: mdsta...@gmail.com on behalf of Michael Starch starc...@umich.edu
 
 Reply-To: dev@oodt.apache.org
 Date: Thursday, July 30, 2015 at 4:03 PM
 To: dev@oodt.apache.org
 Subject: Problems Using Serializable Metadata after Loading Validation
 Layer

 All,
 
 I am getting a NullPointerException deep in the XML library if I try to
 use
 the SerializableMetadata's write to xml function after I load in the XML
 Validation Layer from the filemanager. However, if I remove the call to
 load in the XML Validation Layer, everything works fine.  Any ideas as to
 what might cause this issue?
 
 Thanks,
 
 Michael





Problems Using Serializable Metadata after Loading Validation Layer

2015-07-30 Thread Michael Starch
All,

I am getting a NullPointerException deep in the XML library if I try to use
the SerializableMetadata's write to xml function after I load in the XML
Validation Layer from the filemanager. However, if I remove the call to
load in the XML Validation Layer, everything works fine.  Any ideas as to
what might cause this issue?
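For reference, a rough sketch of the call in question (the constructor and
stream argument here are assumptions based on the class named in the stack
trace, not code from this thread; check SerializableMetadata for the exact
signatures):

    import org.apache.oodt.cas.metadata.Metadata;
    import org.apache.oodt.cas.metadata.SerializableMetadata;

    public class WriteMetadataSketch {
        public static void main(String[] args) throws Exception {
            Metadata met = new Metadata();
            met.addMetadata("ProductName", "test.txt");

            // Serialize the metadata as XML to stdout; this is the call path
            // that fails with the NullPointerException described above.
            SerializableMetadata serMet = new SerializableMetadata(met);
            serMet.writeMetadataToXmlStream(System.out);
        }
    }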

Thanks,

Michael


Re: OODT book proposal

2015-05-20 Thread Michael Starch
I am definitely up for this.  Are we going to push for a polished OODT-1.0
release before we write a book?  There seem to be a lot of very nice
features in the pipe, which will affect the content of a book.

Regardless, I believe the following may be helpful for the book.

- Setting Up OODT
- Major OODT Components
- Extending OODT
- OODT in Operations

-Michael

On Wed, May 20, 2015 at 9:24 AM, Freeborn, Dana J (398G) 
dana.j.freeb...@jpl.nasa.gov wrote:

 Hi Tom,

 This is great.  I would be happy to contribute to the book if my level of
 experience with OODT (which is high) is helpful.  It's not clear to me your
 focus for the book, so you'll just have to tap me if I can contribute.
 Paul and Chris will know if I do.  Otherwise, I will cheer on your
 efforts.  :-)

 Thanks,
 Dana


 From: Tom Barber tom.bar...@meteorite.bi
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, May 20, 2015 2:02 AM
 To: dev@oodt.apache.org
 Subject: OODT book proposal

 Hi guys,

 I was on the phone to Manning today about an OODT book that me, Chris and
 Mike Starch mulled over in Texas.

 I've been tasked with coming up with a TOC as the proposal so I'll knock
 something together and share it later in the week, but on this topic.

 a) Does anyone have anything they'd especially like to see in an OODT book?
 b) Does anyone want to help donate some hours writing a bit of the book?
 (It won't be any time soon and takes months, so don't worry about it, and I
 won't hold you to it; just looking for interest.)


 Tom




Talks For ApacheCon Europe

2015-04-29 Thread Michael Starch
All,

A few of us working on the streaming components of OODT will submit talks
to ApacheCon Europe.

Don't forget to submit your own talks by July 1st.

-Michael


Re: [GitHub] oodt pull request: Moved scala plugin from core pom to streaming p...

2015-04-28 Thread Michael Starch
Yeah I noticed.  Thanks!

On Tue, Apr 28, 2015 at 10:22 AM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 Done already! :)

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Tuesday, April 28, 2015 at 10:21 AM
 To: dev@oodt.apache.org
 Subject: Re: [GitHub] oodt pull request: Moved scala plugin from core pom
 to streaming p...

 All,
 
 I will check this in.
 
 -Michael
 
 On Tue, Apr 28, 2015 at 9:18 AM, r4space g...@git.apache.org wrote:
 
  GitHub user r4space opened a pull request:
 
  https://github.com/apache/oodt/pull/20
 
  Moved scala plugin from core pom to streaming pom only
 
 
 
  You can merge this pull request into a Git repository by running:
 
  $ git pull https://github.com/r4space/oodt trunk
 
  Alternatively you can review and apply these changes as the patch at:
 
  https://github.com/apache/oodt/pull/20.patch
 
  To close this pull request, make a commit to your master/trunk branch
  with (at least) the following in the commit message:
 
  This closes #20
 
  
  commit 01afbe89953ef621379f906bb4e186a5eccf1e8e
  Author: Jane Wyngaard wynga...@jpl.nasa.gov
  Date:   2015-04-28T16:14:45Z
 
  Moved scala plugin from core pom to streaming pom only
 
  
 
 
  ---
  If your project is set up for it, you can reply to this email and have
 your
  reply appear on GitHub as well. If your project does not have this
 feature
  enabled and wishes so, or if the feature is enabled but not working,
 please
  contact infrastructure at infrastruct...@apache.org or file a JIRA
 ticket
  with INFRA.
  ---
 





Re: [GitHub] oodt pull request: Moved scala plugin from core pom to streaming p...

2015-04-28 Thread Michael Starch
All,

I will check this in.

-Michael

On Tue, Apr 28, 2015 at 9:18 AM, r4space g...@git.apache.org wrote:

 GitHub user r4space opened a pull request:

 https://github.com/apache/oodt/pull/20

 Moved scala plugin from core pom to streaming pom only



 You can merge this pull request into a Git repository by running:

 $ git pull https://github.com/r4space/oodt trunk

 Alternatively you can review and apply these changes as the patch at:

 https://github.com/apache/oodt/pull/20.patch

 To close this pull request, make a commit to your master/trunk branch
 with (at least) the following in the commit message:

 This closes #20

 
 commit 01afbe89953ef621379f906bb4e186a5eccf1e8e
 Author: Jane Wyngaard wynga...@jpl.nasa.gov
 Date:   2015-04-28T16:14:45Z

 Moved scala plugin from core pom to streaming pom only

 


 ---
 If your project is set up for it, you can reply to this email and have your
 reply appear on GitHub as well. If your project does not have this feature
 enabled and wishes so, or if the feature is enabled but not working, please
 contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
 with INFRA.
 ---



Re: Scala Maven plugin in oodt-core?

2015-04-27 Thread Michael Starch
Currently, we should be able to put it in the streaming component only.

...that is assuming OODT developers don't want to switch to Scala

I will make a patch later today.

-Michael

On Sun, Apr 26, 2015 at 6:03 PM, Mattmann, Chris A (3980) 
chris.a.mattm...@jpl.nasa.gov wrote:

 Hey MikeS,

 Do we really need the Scala Maven plugin in OODT core? Or can we
 push it to the streaming resmgr now? I don’t think we need it in
 core, right?

 Cheers,
 Chris

 ++
 Chris Mattmann, Ph.D.
 Chief Architect
 Instrument Software and Science Data Systems Section (398)
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 168-519, Mailstop: 168-527
 Email: chris.a.mattm...@nasa.gov
 WWW:  http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Associate Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++







Re: commit error 403 in response MKACTIVITY request

2015-04-23 Thread Michael Starch
Valerie,

Is your SVN repo checked out over http or https? To find out, use "svn
info" and look at the URL. If it is http, you need to update it to the
https repo.  Try using the following command (filling in the URLs from "svn
info"):

   svn switch --relocate http:// https://

After this it should ask for a user name and password.

-Michael

On Thu, Apr 23, 2015 at 7:25 AM, Mallder, Valerie 
valerie.mall...@jhuapl.edu wrote:

 Hi Chris,

 I am trying to commit my patch for OODT-826 that adds an extern
 precondition class and I'm getting an error when I try to commit the
 files.  I was wondering if anyone has any insight into what's happening.
 Is there some authentication information that I need to set in one of the
 configuration files in the .subversion/ folder?  The error messages are
 below.

 Thanks,
 Val


 svn: Commit failed (details follow):
 svn: Server sent unexpected return value (403 Forbidden) in response to
 MKACTIVITY request for
 '/repos/asf/!svn/act/061500e6-9577-467b-a5be-fd3972636cdc'
 svn: Your commit message was left in a temporary file:
 svn:'/project/oodt/dev/oodt/oodt-trunk/svn-commit.2.tmp'
 slothrop[dev/oodt/oodt-trunk]




 Valerie A. Mallder

 New Horizons Deputy Mission System Engineer
 The Johns Hopkins University/Applied Physics Laboratory
 11100 Johns Hopkins Rd (MS 23-282), Laurel, MD 20723
 240-228-7846 (Office) 410-504-2233 (Blackberry)




Re: Schema Evolution within OODT CAS

2015-04-22 Thread Michael Starch
Lewis,

In my experience, if the underlying catalog allows schema evolution, so
does OODT.

Using the DataSourceCatalog meant our DBA could apply a schema update to
the database, which was a form of schema evolution.

Does this answer your question?

Michael
 On Apr 22, 2015 7:40 AM, Lewis John Mcgibbney lewis.mcgibb...@gmail.com
wrote:

 hi Folks,
 Very simple question... Is there any built in support for schema evolution
 in OODT CAS?
 Say I have product MD and the fundamental structure of the MD changes (over
 and above arbitrary changes in MD values) do I need to re catalog all of my
 products?
 Right now I think the answer is yes but would be pleasantly surprised to
 hear otherwise.
 Thanks
 Lewis


 --
 *Lewis*



Re: Schema Evolution within OODT CAS

2015-04-22 Thread Michael Starch
There were proposals for an OODT Metadata update tool, which would fix this
problem. Does anyone know what happened to this work?
On Apr 22, 2015 7:52 AM, Lewis John Mcgibbney lewis.mcgibb...@gmail.com
wrote:

 I get it... I've never used that backend however.
 For every Catalog extending the base Catalog interface it is implementation
 specific.
 Lucene catalog is OK, Solr is not, etc.

 On Wednesday, April 22, 2015, Michael Starch starc...@umich.edu wrote:

  Lewis,
 
  In my experience, if the underlying catalog allows schema evolution, so
  does OODT.
 
  Using the DataSourceCatalog meant our DBA could apply a schema update to
  the database which was a form of schema evolution.
 
  Does this answer your question?
 
  Michael
    On Apr 22, 2015 7:40 AM, Lewis John Mcgibbney lewis.mcgibb...@gmail.com
   wrote:
 
   hi Folks,
   Very simple question... Is there any built in support for schema
  evolution
   in OODT CAS?
   Say I have product MD and the fundamental structure of the MD changes
  (over
   and above arbitrary changes in MD values) do I need to re catalog all
 of
  my
   products?
   Right now I think the answer is yes but would be pleasantly surprised
 to
   hear otherwise.
   Thanks
   Lewis
  
  
   --
   *Lewis*
  
 


 --
 *Lewis*



Re: FW: [jira] [Apache Infrastructure] Github Integration

2015-04-16 Thread Michael Starch
Should I re-make the pull request?

-Michael

On Thu, Apr 16, 2015 at 3:31 PM, Tom Barber tom.bar...@meteorite.bi wrote:

 Dunno where the trail is with this, but infra told me it was done; however,
 Starch's pull request won't get picked up because it doesn't replay
 history.
 On 14 Apr 2015 15:49, Mattmann, Chris A (3980) 
 chris.a.mattm...@jpl.nasa.gov wrote:

 
 
  ++
  Chris Mattmann, Ph.D.
  Chief Architect
  Instrument Software and Science Data Systems Section (398)
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 168-519, Mailstop: 168-527
  Email: chris.a.mattm...@nasa.gov
  WWW:  http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Associate Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
 
 
 
  -Original Message-
  From: Chris A. Mattmann j...@apache.org
  Date: Tuesday, April 14, 2015 at 3:37 PM
  To: Chris Mattmann chris.a.mattm...@jpl.nasa.gov
  Subject: [jira] [Apache Infrastructure] Github Integration
 
   New request Github Integration with key INFRA-9441 has been created.
   Apache Infrastructure - Github Integration
   https://issues.apache.org/jira/servicedesk/customer/portal/1
   Reference: INFRA-9441
   https://issues.apache.org/jira/servicedesk/customer/portal/1/INFRA-9441
   Github Integration - Waiting for Infra
   You can view the full request at:
   https://issues.apache.org/jira/servicedesk/customer/portal/1/INFRA-9441
   Details:
   Git Repository Name: oodt
   Github Integration: Enable
   Git Notification Mailing list: ...@oodt.apache.org
   Github Integration - Triggers: Create, Issues, Pull Request, Pull Request
   Comment, and Push
   This message is automatically generated by JIRA Service Desk.
   If you think it was sent incorrectly, please contact your JIRA
   administrators.
   For more information on JIRA Service Desk, see:
   http://www.atlassian.com/software/jira/service-desk
 
 



Assign Permissions in issues.apache.org

2015-04-16 Thread Michael Starch
Anyone know how I get permission to assign JIRA Issues to people, or is
this power not available to committers?

-Michael


OODT-837

2015-04-16 Thread Michael Starch
All,

Please look at: https://issues.apache.org/jira/browse/OODT-837 and comment
on whether the listed components are essential or non-essential to the base OODT.

I was aggressive when writing the issue, so as a community we need to
establish a consensus.

-Michael


Python Standard

2015-04-15 Thread Michael Starch
Dev,

Have we standardized on a python version for OODT python code?

Specifically, do we use Python 3.x or Python 2.x? From a brief search
Agile OODT uses 2.x.  Is this the recommendation?

-Michael


Needed OODT Improvements

2015-04-15 Thread Michael Starch
Dev,

I've been talking with Tom and we've noted a list of things that should be
improved on OODT.

  - pom/maven:
    - fix dependency circles in radix
    - make extra components with a large number of dependencies optional
      (pending pull request)
    - move dependency versions to the top-level pom, making versions consistent
    - update dependency versions

  - replace the xml-rpc development-only server
  - separate REST interfaces and client code for ops ui and cas-curator

  - cleanup:
    - deprecate unused or superseded sub-components
    - spin off non-essential functionality into sibling projects
    - documentation

Any thoughts?

-Michael


Re: Amazon S3 data transfer for filemanager

2015-03-13 Thread Michael Starch
John,

Did you remove the code at the top of the script that sets JAVA_EXT_JARS?
Are you still running the Oracle Java?

If both are true, can you send me the latest error?

-Michael



On Thu, Mar 12, 2015 at 5:14 PM, John Reynolds jreyno...@vpicu.net wrote:

 Thanks Michael, this works for me (filemanager starts), however i’m still
 getting the same datatransferer error

  On Mar 12, 2015, at 1:52 PM, Michael Starch starc...@umich.edu wrote:
 
  John,
 
  I should be more verbose.  The java classpath traditionally did not pick
 up
  multiple jars, so it was super labor intensive to setup.  These days you
  can use * inside the classpath to pick up multiple jarsbut it must be
  in   because otherwise the shell will glob the * if it is outside of
  quotes.
 
  If this doesn't work, try the other recommendations in:
 
 
 
 http://stackoverflow.com/questions/219585/setting-multiple-jars-in-java-classpath
 
  -Michael
 
 
  On Thu, Mar 12, 2015 at 1:48 PM, Michael Starch starc...@umich.edu
 wrote:
 
  John,
 
  Change:
 -classpath $FILEMGR_HOME/lib \
  To:
 -classpath $FILEMGR_HOME/lib/* \
 
  -Michael
 
 
  On Thu, Mar 12, 2015 at 1:37 PM, John Reynolds jreyno...@vpicu.net
  wrote:
 
  Thanks Michael, if i modify the filemgr-client to look like this (at
 the
  end)
  $_RUNJAVA $JAVA_OPTS $OODT_OPTS \
   -classpath $FILEMGR_HOME/lib \
 
 
 -Dorg.apache.oodt.cas.filemgr.properties=$FILEMGR_HOME/etc/filemgr.properties
  \
 
 -Djava.util.logging.config.file=$FILEMGR_HOME/etc/logging.properties \
 
 
 -Dorg.apache.oodt.cas.cli.action.spring.config=file:$FILEMGR_HOME/policy/cmd-line-actions.xml
  \
 
 
 -Dorg.apache.oodt.cas.cli.option.spring.config=file:$FILEMGR_HOME/policy/cmd-line-options.xml
  \
   org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient $@“
 
  (replacing ext jars with -classpath) then i get
  Error: Could not find or load main class
  org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient
  i assume i’m doing something wrong with classpath but not sure what
 
  On Mar 12, 2015, at 11:31 AM, Michael Starch starc...@umich.edu
  wrote:
 
  John,
 
  Can you open filemgr-client sh script?  It may set the JAVA_EXT_JARS
  there.  If so, it is clobbering the default path for extension jars,
  and
  your java encryption jars are not being picked up. If it does set
  JAVA_EXT_JARS you have two options:
 
  1. Move all your encryption jars into FILEMGR_HOME/lib/
  2. update filemgr-client script to us classpath to specify the jars in
  the
  FILEMGRHOME/lib directory and remove the use of JAVA_EXT_JARS
 
 
  -Michael
 
 
  On Thu, Mar 12, 2015 at 11:12 AM, John Reynolds jreyno...@vpicu.net
  wrote:
 
  Hi Michael
  yeah it’s openjdk 1.7 (“1.7.0_75)
  i did download the the unlimited encryption jar from oracle and
  replaced
  the local_policy / us_export_policy jars in javahome/jre/lib/security
  more i read, maybe limited by jce.jar
 
  i dont have anything special set for extension jars
 
 
 
  On Mar 12, 2015, at 10:35 AM, Michael Starch starc...@umich.edu
  wrote:
 
  John,
 
  What version of the JDK are you running, and what is your extension
  jars
  environment variable set to.  Do you have the java cryptology jar
  included
  (Oracle JDK usually has this, I don't know if Open JDK does).
 
  Algorithm HmacSHA1 not available is usually thrown when Java
 cannot
  find
  the java crypto jar used to calculate the given hash.
 
  -Michael
 
  On Thu, Mar 12, 2015 at 9:06 AM, John Reynolds jreyno...@vpicu.net
 
  wrote:
 
  Hi Lewis,
  using the latest docker buggtb/oodt image, which i assume is .8
  here’s the command i’m running to test the upload
 
  filemgr-client --url http://localhost:9000 --operation
  --ingestProduct
  --productName test --productStructure Flat --productTypeName
  GenericFile
  --metadataFile file:///root/test.txt.met --refs
 file:///root/test.txt
 
  i verified that i can upload to the path using the s3 tools on the
  box /
  with same credentials i put in the properties file
 
  here’s the full exception returned:
 
 
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
 
 org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt
   at
 
 
 
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:768)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at
 
 
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at
 
 
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
   at
  org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
   at
  org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
   at
  org.apache.xmlrpc.XmlRpcServer.execute

Re: Data processing pipeline workflow management

2015-03-12 Thread Michael Starch
Yes.  Our batch processing back-end for the Resource manager now can take
advantage of the Mesos Cluster manager.

This enables OODT to farm batch processing out to a mesos cluster.

-Michael

On Thu, Mar 12, 2015 at 7:58 AM, BW bw...@mysoftcloud.com wrote:

 Any thoughts on integrating a plug in service with Marathon first then
 layer Mesos on top?

 On Wednesday, March 11, 2015, Mattmann, Chris A (3980) 
 chris.a.mattm...@jpl.nasa.gov wrote:

  Apache OODT now has a workflow plugin that connects to Mesos:
 
  http://oodt.apache.org/
 
  Cross posting this to dev@oodt.apache.org javascript:; so people like
  Mike Starch can chime in.
 
  ++
  Chris Mattmann, Ph.D.
  Chief Architect
  Instrument Software and Science Data Systems Section (398)
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 168-519, Mailstop: 168-527
  Email: chris.a.mattm...@nasa.gov javascript:;
  WWW:  http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Associate Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
 
 
 
  -Original Message-
  From: Zameer Manji zma...@apache.org javascript:;
  Reply-To: d...@aurora.incubator.apache.org javascript:;
  d...@aurora.incubator.apache.org javascript:;
  Date: Wednesday, March 11, 2015 at 3:21 PM
  To: d...@aurora.incubator.apache.org javascript:; 
  d...@aurora.incubator.apache.org javascript:;
  Subject: Re: Data processing pipeline workflow management
 
  Hey,
  
  This is a great question. See my comments inline below.
  
  On Tue, Mar 10, 2015 at 8:28 AM, Lars Albertsson
  lars.alberts...@gmail.com javascript:;
  wrote:
  
   We are evaluating Aurora as a workflow management tool for batch
   processing pipelines. We basically need a tool that regularly runs
   batch processes that are connected as producers/consumers of data,
   typically stored in HDFS or S3.
  
   The alternative tools would be Azkaban, Luigi, and Oozie, but I am
   hoping that building something built on Aurora would result in a
   better solution.
  
   Does anyone have experience with building workflows with Aurora? How
   is Twitter handling batch pipelines? Would the approach below make
   sense, or are there better suggestions? Is there anything related to
   this in the roadmap or available inside Twitter only?
  
  
  As far as I know, you are the first person to consider Aurora for
 workflow
  management for batch processing. Currently Twitter does not use Aurora
 for
  batch pipelines.
  I'm not aware of the specifics of the design, but at Twitter there is an
  internal solution for pipelines built upon Hadoop/YARN.
  Currently Aurora is designed around being a service scheduler and I'm
 not
  aware of any future plans to support workflows or batch computation.
  
  
   In our case, the batch processes will be a mix of cluster
   computation's with Spark, and single-node computations. We want the
   latter to also be scheduled on a farm, and this is why we are
   attracted to Mesos. In the text below, I'll call each part of a
   pipeline a 'step', in order to avoid confusion with Aurora jobs and
   tasks.
  
   My unordered wishlist is:
   * Data pipelines consist of DAGs, where steps take one or more inputs,
   and generate one or more outputs.
  
   * Independent steps in the DAG execute in parallel, constrained by
   resources.
  
   * Steps can be written in different languages and frameworks, some
   clustered.
  
   * The developer code/test/debug cycle is quick, and all functional
   tests can execute on a laptop.
  
   * Developers can test integrated data pipelines, consisting of
   multiple steps, on laptops.
  
   * Steps and their intputs and outputs are parameterised, e.g. by date.
   A parameterised step is typically independent from other instances of
   the same step, e.g. join one day's impressions log with user
   demographics. In some cases, steps depend on yesterday's results, e.g.
   apply one day's user management operation log to the user dataset from
   the day before.
  
   * Data pipelines are specified in embedded DSL files (e.g. aurora
   files), kept close to the business logic code.
  
   * Batch steps should be started soon after the input files become
   available.
  
   * Steps should gracefully avoid recomputation when output files exist.
  
   * Backfilling a window back in time, e.g. 30 days, should happen
   automatically if some earlier steps have failed, or if output files
   have been deleted manually.
  
   * Continuous deployment in the sense that steps are automatically
   deployed and scheduled after 'git push'.
  
   * Step owners can get an overview of step status and history, and
   debug step execution, e.g. by accessing log files.
  
  
   I am aware that no framework will give us 

Re: Amazon S3 data transfer for filemanager

2015-03-12 Thread Michael Starch
John,

What version of the JDK are you running, and what is your extension jars
environment variable set to?  Do you have the Java cryptography jar included
(the Oracle JDK usually has this; I don't know if OpenJDK does).

"Algorithm HmacSHA1 not available" is usually thrown when Java cannot find
the Java crypto jar used to calculate the given hash.
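As a quick way to check whether that algorithm is visible to the JVM you are
launching with (a small diagnostic sketch, not part of the original reply):

    import javax.crypto.Mac;

    public class HmacCheck {
        public static void main(String[] args) {
            try {
                // Succeeds only if a provider supplying HmacSHA1 is visible to
                // this JVM -- the same lookup the AWS SDK performs internally.
                Mac.getInstance("HmacSHA1");
                System.out.println("HmacSHA1 is available");
            } catch (java.security.NoSuchAlgorithmException e) {
                System.out.println("HmacSHA1 is NOT available: " + e);
            }
        }
    }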

-Michael

On Thu, Mar 12, 2015 at 9:06 AM, John Reynolds jreyno...@vpicu.net wrote:

 Hi Lewis,
 using the latest docker buggtb/oodt image, which i assume is .8
 here’s the command i’m running to test the upload

 filemgr-client --url http://localhost:9000 --operation --ingestProduct
 --productName test --productStructure Flat --productTypeName GenericFile
 --metadataFile file:///root/test.txt.met --refs file:///root/test.txt

 i verified that i can upload to the path using the s3 tools on the box /
 with same credentials i put in the properties file

 here’s the full exception returned:

 org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
 org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
 Failed to upload product reference /root/test.txt to S3 at
 usr/src/oodt/data/archive/test/test.txt
 at
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:768)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
 at
 org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
 at org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
 at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:139)
 at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:125)
 at org.apache.xmlrpc.WebServer$Connection.run(WebServer.java:761)
 at org.apache.xmlrpc.WebServer$Runner.run(WebServer.java:642)
 at java.lang.Thread.run(Thread.java:745)
 Caused by:
 org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
 Failed to upload product reference /root/test.txt to S3 at
 usr/src/oodt/data/archive/test/test.txt
 at
 org.apache.oodt.cas.filemgr.datatransfer.S3DataTransferer.transferProduct(S3DataTransferer.java:78)
 at
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:752)
 ... 12 more
 Caused by: com.amazonaws.AmazonClientException: Unable to calculate a
 request signature: Unable to calculate a request signature: Algorithm
 HmacSHA1 not available
 at
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:71)
 at
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:57)
 at
 com.amazonaws.services.s3.internal.S3Signer.sign(S3Signer.java:128)
 at
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:330)
 at
 com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
 at
 com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
 at
 com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1393)
 at
 org.apache.oodt.cas.filemgr.datatransfer.S3DataTransferer.transferProduct(S3DataTransferer.java:76)
 ... 13 more
 Caused by: com.amazonaws.AmazonClientException: Unable to calculate a
 request signature: Algorithm HmacSHA1 not available
 at
 com.amazonaws.auth.AbstractAWSSigner.sign(AbstractAWSSigner.java:90)
 at
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:68)
 ... 20 more
 Caused by: java.security.NoSuchAlgorithmException: Algorithm HmacSHA1 not
 available
 at javax.crypto.Mac.getInstance(Mac.java:176)
 at
 com.amazonaws.auth.AbstractAWSSigner.sign(AbstractAWSSigner.java:86)
 ... 21 more
 org.apache.xmlrpc.XmlRpcException: java.lang.Exception:
 org.apache.oodt.cas.filemgr.structs.exceptions.CatalogException: Error
 ingesting product [org.apache.oodt.cas.filemgr.structs.Product@6454bbe1]
 : org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
 Failed to upload product reference /root/test.txt to S3 at
 usr/src/oodt/data/archive/test/test.txt
 at
 org.apache.xmlrpc.XmlRpcClientResponseProcessor.decodeException(XmlRpcClientResponseProcessor.java:104)
 at
 org.apache.xmlrpc.XmlRpcClientResponseProcessor.decodeResponse(XmlRpcClientResponseProcessor.java:71)
 at
 org.apache.xmlrpc.XmlRpcClientWorker.execute(XmlRpcClientWorker.java:73)
 at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:194)
 at org.apache.xmlrpc.XmlRpcClient.execute(XmlRpcClient.java:185)
 at 

Re: Amazon S3 data transfer for filemanager

2015-03-12 Thread Michael Starch
John,

Can you open the filemgr-client sh script?  It may set JAVA_EXT_JARS
there.  If so, it is clobbering the default path for extension jars, and
your Java encryption jars are not being picked up. If it does set
JAVA_EXT_JARS, you have two options:

1. Move all your encryption jars into FILEMGR_HOME/lib/
2. Update the filemgr-client script to use -classpath to specify the jars in
the FILEMGR_HOME/lib directory and remove the use of JAVA_EXT_JARS


-Michael


On Thu, Mar 12, 2015 at 11:12 AM, John Reynolds jreyno...@vpicu.net wrote:

 Hi Michael
 yeah it’s openjdk 1.7 (“1.7.0_75)
 i did download the the unlimited encryption jar from oracle and replaced
 the local_policy / us_export_policy jars in javahome/jre/lib/security
 more i read, maybe limited by jce.jar

 i dont have anything special set for extension jars



  On Mar 12, 2015, at 10:35 AM, Michael Starch starc...@umich.edu wrote:
 
  John,
 
  What version of the JDK are you running, and what is your extension jars
  environment variable set to.  Do you have the java cryptology jar
 included
  (Oracle JDK usually has this, I don't know if Open JDK does).
 
  Algorithm HmacSHA1 not available is usually thrown when Java cannot
 find
  the java crypto jar used to calculate the given hash.
 
  -Michael
 
  On Thu, Mar 12, 2015 at 9:06 AM, John Reynolds jreyno...@vpicu.net
 wrote:
 
  Hi Lewis,
  using the latest docker buggtb/oodt image, which i assume is .8
  here’s the command i’m running to test the upload
 
  filemgr-client --url http://localhost:9000 --operation --ingestProduct
  --productName test --productStructure Flat --productTypeName GenericFile
  --metadataFile file:///root/test.txt.met --refs file:///root/test.txt
 
  i verified that i can upload to the path using the s3 tools on the box /
  with same credentials i put in the properties file
 
  here’s the full exception returned:
 
  rg.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt
 at
 
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:768)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
 at
  org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
 at org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
 at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:139)
 at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:125)
 at org.apache.xmlrpc.WebServer$Connection.run(WebServer.java:761)
 at org.apache.xmlrpc.WebServer$Runner.run(WebServer.java:642)
 at java.lang.Thread.run(Thread.java:745)
  Caused by:
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt
 at
 
 org.apache.oodt.cas.filemgr.datatransfer.S3DataTransferer.transferProduct(S3DataTransferer.java:78)
 at
 
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:752)
 ... 12 more
  Caused by: com.amazonaws.AmazonClientException: Unable to calculate a
  request signature: Unable to calculate a request signature: Algorithm
  HmacSHA1 not available
 at
 
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:71)
 at
 
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:57)
 at
  com.amazonaws.services.s3.internal.S3Signer.sign(S3Signer.java:128)
 at
 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:330)
 at
  com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
 at
 
 com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
 at
 
 com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1393)
 at
 
 org.apache.oodt.cas.filemgr.datatransfer.S3DataTransferer.transferProduct(S3DataTransferer.java:76)
 ... 13 more
  Caused by: com.amazonaws.AmazonClientException: Unable to calculate a
  request signature: Algorithm HmacSHA1 not available
 at
  com.amazonaws.auth.AbstractAWSSigner.sign(AbstractAWSSigner.java:90)
 at
 
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:68)
 ... 20 more
  Caused by: java.security.NoSuchAlgorithmException: Algorithm HmacSHA1
 not
  available
 at javax.crypto.Mac.getInstance(Mac.java:176

Re: Amazon S3 data transfer for filemanager

2015-03-12 Thread Michael Starch
John,

Change:
-classpath $FILEMGR_HOME/lib \
To:
-classpath "$FILEMGR_HOME/lib/*" \

-Michael

On Thu, Mar 12, 2015 at 1:37 PM, John Reynolds jreyno...@vpicu.net wrote:

 Thanks Michael, if i modify the filemgr-client to look like this (at the
 end)
 $_RUNJAVA $JAVA_OPTS $OODT_OPTS \
   -classpath $FILEMGR_HOME/lib \

 -Dorg.apache.oodt.cas.filemgr.properties=$FILEMGR_HOME/etc/filemgr.properties
 \
   -Djava.util.logging.config.file=$FILEMGR_HOME/etc/logging.properties \

 -Dorg.apache.oodt.cas.cli.action.spring.config=file:$FILEMGR_HOME/policy/cmd-line-actions.xml
 \

 -Dorg.apache.oodt.cas.cli.option.spring.config=file:$FILEMGR_HOME/policy/cmd-line-options.xml
 \
   org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient $@“

 (replacing ext jars with -classpath) then i get
 Error: Could not find or load main class
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient
 i assume i’m doing something wrong with classpath but not sure what

  On Mar 12, 2015, at 11:31 AM, Michael Starch starc...@umich.edu wrote:
 
  John,
 
  Can you open filemgr-client sh script?  It may set the JAVA_EXT_JARS
  there.  If so, it is clobbering the default path for extension jars,
 and
  your java encryption jars are not being picked up. If it does set
  JAVA_EXT_JARS you have two options:
 
  1. Move all your encryption jars into FILEMGR_HOME/lib/
  2. update filemgr-client script to us classpath to specify the jars in
 the
  FILEMGRHOME/lib directory and remove the use of JAVA_EXT_JARS
 
 
  -Michael
 
 
  On Thu, Mar 12, 2015 at 11:12 AM, John Reynolds jreyno...@vpicu.net
 wrote:
 
  Hi Michael
  yeah it’s openjdk 1.7 (“1.7.0_75)
  i did download the the unlimited encryption jar from oracle and replaced
  the local_policy / us_export_policy jars in javahome/jre/lib/security
  more i read, maybe limited by jce.jar
 
  i dont have anything special set for extension jars
 
 
 
  On Mar 12, 2015, at 10:35 AM, Michael Starch starc...@umich.edu
 wrote:
 
  John,
 
  What version of the JDK are you running, and what is your extension
 jars
  environment variable set to.  Do you have the java cryptology jar
  included
  (Oracle JDK usually has this, I don't know if Open JDK does).
 
  Algorithm HmacSHA1 not available is usually thrown when Java cannot
  find
  the java crypto jar used to calculate the given hash.
 
  -Michael
 
  On Thu, Mar 12, 2015 at 9:06 AM, John Reynolds jreyno...@vpicu.net
  wrote:
 
  Hi Lewis,
  using the latest docker buggtb/oodt image, which i assume is .8
  here’s the command i’m running to test the upload
 
  filemgr-client --url http://localhost:9000 --operation
 --ingestProduct
  --productName test --productStructure Flat --productTypeName
 GenericFile
  --metadataFile file:///root/test.txt.met --refs file:///root/test.txt
 
  i verified that i can upload to the path using the s3 tools on the
 box /
  with same credentials i put in the properties file
 
  here’s the full exception returned:
 
  rg.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt
at
 
 
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:768)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
 
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
 
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
at
  org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
at org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:139)
at org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:125)
at
 org.apache.xmlrpc.WebServer$Connection.run(WebServer.java:761)
at org.apache.xmlrpc.WebServer$Runner.run(WebServer.java:642)
at java.lang.Thread.run(Thread.java:745)
  Caused by:
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt
at
 
 
 org.apache.oodt.cas.filemgr.datatransfer.S3DataTransferer.transferProduct(S3DataTransferer.java:78)
at
 
 
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:752)
... 12 more
  Caused by: com.amazonaws.AmazonClientException: Unable to calculate a
  request signature: Unable to calculate a request signature: Algorithm
  HmacSHA1 not available
at
 
 
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode(AbstractAWSSigner.java:71)
at
 
 
 com.amazonaws.auth.AbstractAWSSigner.signAndBase64Encode

Re: Amazon S3 data transfer for filemanager

2015-03-12 Thread Michael Starch
John,

I should be more verbose.  The Java classpath traditionally did not pick up
multiple jars, so it was super labor intensive to set up.  These days you
can use * inside the classpath to pick up multiple jars, but it must be
in "" because otherwise the shell will glob the * if it is outside of
quotes.

If this doesn't work, try the other recommendations in:


http://stackoverflow.com/questions/219585/setting-multiple-jars-in-java-classpath

-Michael


On Thu, Mar 12, 2015 at 1:48 PM, Michael Starch starc...@umich.edu wrote:

 John,

 Change:
 -classpath $FILEMGR_HOME/lib \
 To:
 -classpath $FILEMGR_HOME/lib/* \

 -Michael


 On Thu, Mar 12, 2015 at 1:37 PM, John Reynolds jreyno...@vpicu.net
 wrote:

 Thanks Michael, if i modify the filemgr-client to look like this (at the
 end)
 $_RUNJAVA $JAVA_OPTS $OODT_OPTS \
   -classpath $FILEMGR_HOME/lib \

 -Dorg.apache.oodt.cas.filemgr.properties=$FILEMGR_HOME/etc/filemgr.properties
 \
   -Djava.util.logging.config.file=$FILEMGR_HOME/etc/logging.properties \

 -Dorg.apache.oodt.cas.cli.action.spring.config=file:$FILEMGR_HOME/policy/cmd-line-actions.xml
 \

 -Dorg.apache.oodt.cas.cli.option.spring.config=file:$FILEMGR_HOME/policy/cmd-line-options.xml
 \
   org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient $@“

 (replacing ext jars with -classpath) then i get
 Error: Could not find or load main class
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManagerClient
 i assume i’m doing something wrong with classpath but not sure what

  On Mar 12, 2015, at 11:31 AM, Michael Starch starc...@umich.edu
 wrote:
 
  John,
 
  Can you open filemgr-client sh script?  It may set the JAVA_EXT_JARS
  there.  If so, it is clobbering the default path for extension jars,
 and
  your java encryption jars are not being picked up. If it does set
  JAVA_EXT_JARS you have two options:
 
  1. Move all your encryption jars into FILEMGR_HOME/lib/
  2. update filemgr-client script to us classpath to specify the jars in
 the
  FILEMGRHOME/lib directory and remove the use of JAVA_EXT_JARS
 
 
  -Michael
 
 
  On Thu, Mar 12, 2015 at 11:12 AM, John Reynolds jreyno...@vpicu.net
 wrote:
 
  Hi Michael
  yeah it’s openjdk 1.7 (“1.7.0_75)
  i did download the the unlimited encryption jar from oracle and
 replaced
  the local_policy / us_export_policy jars in javahome/jre/lib/security
  more i read, maybe limited by jce.jar
 
  i dont have anything special set for extension jars
 
 
 
  On Mar 12, 2015, at 10:35 AM, Michael Starch starc...@umich.edu
 wrote:
 
  John,
 
  What version of the JDK are you running, and what is your extension
 jars
  environment variable set to.  Do you have the java cryptology jar
  included
  (Oracle JDK usually has this, I don't know if Open JDK does).
 
  Algorithm HmacSHA1 not available is usually thrown when Java cannot
  find
  the java crypto jar used to calculate the given hash.
 
  -Michael
 
  On Thu, Mar 12, 2015 at 9:06 AM, John Reynolds jreyno...@vpicu.net
  wrote:
 
  Hi Lewis,
  using the latest docker buggtb/oodt image, which i assume is .8
  here’s the command i’m running to test the upload
 
  filemgr-client --url http://localhost:9000 --operation
 --ingestProduct
  --productName test --productStructure Flat --productTypeName
 GenericFile
  --metadataFile file:///root/test.txt.met --refs file:///root/test.txt
 
  i verified that i can upload to the path using the s3 tools on the
 box /
  with same credentials i put in the properties file
 
  here’s the full exception returned:
 
  rg.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt
at
 
 
 org.apache.oodt.cas.filemgr.system.XmlRpcFileManager.ingestProduct(XmlRpcFileManager.java:768)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
 
 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
 
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.xmlrpc.Invoker.execute(Invoker.java:130)
at
  org.apache.xmlrpc.XmlRpcWorker.invokeHandler(XmlRpcWorker.java:84)
at
 org.apache.xmlrpc.XmlRpcWorker.execute(XmlRpcWorker.java:146)
at
 org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:139)
at
 org.apache.xmlrpc.XmlRpcServer.execute(XmlRpcServer.java:125)
at
 org.apache.xmlrpc.WebServer$Connection.run(WebServer.java:761)
at org.apache.xmlrpc.WebServer$Runner.run(WebServer.java:642)
at java.lang.Thread.run(Thread.java:745)
  Caused by:
  org.apache.oodt.cas.filemgr.structs.exceptions.DataTransferException:
  Failed to upload product reference /root/test.txt to S3 at
  usr/src/oodt/data/archive/test/test.txt

Make Streaming Dependencies Optional

2015-03-02 Thread Michael Starch
Chris, et al,

We briefly had an impromptu discussion about the vast ocean of jars
required by the streaming components, and how to mitigate this effect.

Here is my recommendation:

   - Leave the filemgr STREAMING type as-is.
   - Pull all specific streaming implementations into a new top-level
     project that is not built by default.
   - (Optional) Put profiles in standard components' mvn builds that
     include the streaming components.

I recommend leaving the STREAMING type as-is.  This modification
requires no extra jars and is a very minimal change in the
filemanager.  Thus, keeping it has negligible impact, while moving it would
require users to completely replace filemanager code with a streaming
version, rather than just add on extra jars.

All of the other modifications (all specific streaming implementations)
were designed to be added on as extension points in OODT.  Therefore, all
of these can be built separately when needed and placed in the lib directory
in order to be used.  Thus the ocean of jars would only be included when
desired.

Profiles can be added that build and automatically include these components
if desired.

Any thoughts on this? If not, has a JIRA issue been created or may I create
one?

-Michael


Re: OODT ./filemgr start gives me an error -- Can't load log handler "java.util.logging.FileHandler"

2015-02-16 Thread Michael Starch
Aditya,

Did you create a logs directory?  The directory is missing, so the handler cannot
create your log files.  Otherwise, it looks ok.  It seems to have started;
any other issues?
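
For anyone hitting the same error, a minimal sketch of what is going on (not
OODT code; the log file pattern below is only illustrative):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.logging.FileHandler;

    public class LogsDirCheck {
        public static void main(String[] args) throws IOException {
            // FileHandler throws an IOException if the directory in its pattern
            // does not exist, which the logging framework reports as
            // "Can't load log handler java.util.logging.FileHandler".
            Files.createDirectories(Paths.get("logs"));   // same effect as "mkdir logs"
            FileHandler handler = new FileHandler("logs/cas_filemgr%g.log");
            handler.close();
        }
    }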

Michael


On Mon, Feb 16, 2015 at 9:30 PM, Aditya Dhulipala adhul...@usc.edu wrote:

 Hi,

 I'm trying to run the filemgr component of OODT

 I'm running the steps from this guide -
 https://cwiki.apache.org/confluence/display/OODT/OODT+Filemgr+User+Guide

 On executing ./filemgr start I get the following error:-
 http://pastebin.com/cudQPmAa

 Can anybody take a look and help me out?

 Thanks!
 --
 Aditya


 adi



Re: OODT Tests

2015-01-23 Thread Michael Starch
Yeah, good work on that one Tom.  I was scratching my head as to why it was
failing and couldn't get it to reproduce locally.

Thanks for the fix.

Michael
On Jan 23, 2015 8:46 AM, Tom Barber tom.bar...@meteorite.bi wrote:

 The only real problem with the build is that the Jersey Client 1.x POM has
 been knackered for as long as I can remember working with REST stuff, and
 the Hadoop Client jar in the Resource Manager has it hardcoded as a
 dependency. So the other night I forced it to use a newer version; it failed
 a few times and I stepped through the required upgraded jars.

 Apart from that and a few minor tweaks I think it's alright.

 Tom

 On 23/01/15 16:41, Lewis John Mcgibbney wrote:

 Hi Folks,
 Builds have been dodgy for a while now.
 Anyone have a clue what happened?
 I just looked at our Jenkins build record for trunk and quite frankly it's
 kinda appalling.
 I'm building RADiX for a customer right now then I'm going to take some
 time looking at tests again.
 Lewis




 --
 *Tom Barber* | Technical Director

 meteorite bi
 *T:* +44 20 8133 3730
 *W:* www.meteorite.bi | *Skype:* meteorite.consulting
 *A:* Surrey Technology Centre, Surrey Research Park, Guildford, GU2 7YG, UK



Re: Review Request 28916: Resource Manager To Mesos Cluster Manager Integration

2014-12-19 Thread Michael Starch


 On Dec. 11, 2014, 1:37 a.m., Chris Mattmann wrote:
 
 
 Michael Starch wrote:
 Same thing with Spark, I put the mesos stuff in its own package to keep
 it separate. I can easily sort the components into the other packages, if
 needed.
 
 Chris Mattmann wrote:
 yep please do so thanks Mike

It is done.


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28916/#review64660
---


On Dec. 18, 2014, 3:43 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28916/
 ---
 
 (Updated Dec. 18, 2014, 3:43 p.m.)
 
 
 Review request for oodt and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This patch integrates the mesos cluster manager and OODT resource manager.  
  It allows resource manager jobs to run on a mesos-controlled cluster. It is a
  solution to: https://issues.apache.org/jira/browse/OODT-699 and includes:
 https://reviews.apache.org/r/27773/
 
 
 Diffs
 -
 
   trunk/resource/pom.xml 1646471 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManager.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManagerFactory.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/ResourceExecutor.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitor.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitorFactory.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/ResourceMesosScheduler.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/ResourceMesosSchedulerFactory.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/JobSpecSerializer.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/exceptions/MesosFrameworkException.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/util/MesosUtilities.java
  PRE-CREATION 
   
 trunk/resource/src/test/org/apache/oodt/cas/resource/util/TestMesosUtilities.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/28916/diff/
 
 
 Testing
 ---
 
 Basic unit testing.
 
  Integration/prototype testing has also been done. It starts and runs.
 
 
 Thanks,
 
 Michael Starch
 




Shipping Code Changes.txt and 0.8 Snapshot

2014-12-18 Thread Michael Starch
All,

With the pending 0.8 Snapshot, should I be editing CHANGES.txt to reflect
0.9 development as I prepare to ship code? Has a 0.8 branch/tag been
created?  Or should I hold off on new changes until a later point?

-Michael


Re: Review Request 28916: Resource Manager To Mesos Cluster Manager Integration

2014-12-18 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28916/
---

(Updated Dec. 18, 2014, 3:43 p.m.)


Review request for oodt and Chris Mattmann.


Changes
---

Updating the diff to sort components.


Repository: oodt


Description
---

This patch integrates the mesos cluster manager and OODT resource manager.  It 
allows resource manager jobs to run on a mesos-controlled cluster. It is a
solution to: https://issues.apache.org/jira/browse/OODT-699 and includes: 
https://reviews.apache.org/r/27773/


Diffs (updated)
-

  trunk/resource/pom.xml 1646471 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManagerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/ResourceExecutor.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitor.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitorFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/ResourceMesosScheduler.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/ResourceMesosSchedulerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/JobSpecSerializer.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/exceptions/MesosFrameworkException.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/util/MesosUtilities.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/util/TestMesosUtilities.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/28916/diff/


Testing
---

Basic unit testing.

Integration/prototype testing has also been done. It starts and runs.


Thanks,

Michael Starch



Re: Review Request 28917: Spark Backend to Resource Manager

2014-12-17 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28917/
---

(Updated Dec. 17, 2014, 7:13 p.m.)


Review request for oodt and Chris Mattmann.


Changes
---

Sorted packages


Repository: oodt


Description
---

This change allows Spark jobs to be run in the resource manager.  It integrates 
with a generic Spark daemon, so spark can be run locally, as a test spark 
cluster, and in mesos (not affiliated with the resource manager mesos 
integration).

This is a solution to: https://issues.apache.org/jira/browse/OODT-780


Diffs (updated)
-

  trunk/core/pom.xml 1644527 
  trunk/resource/pom.xml 1644527 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/NoSparkFilePalindromeExample.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/PalindromeUtils.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/SparkFilePalindromeExample.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/StreamingPalindromeExample.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/SparkScheduler.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/SparkSchedulerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/SparkInstance.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/StreamingInstance.java
 PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exPalindrome.xml PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exSparkJob.xml PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exSparkPalindrome.xml 
PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exStreamingPalindrome.xml 
PRE-CREATION 
  trunk/resource/src/main/resources/resource.properties 1644527 
  
trunk/resource/src/main/scala/org/apache/oodt/cas/resource/examples/ScalaHelloWorld.scala
 PRE-CREATION 

Diff: https://reviews.apache.org/r/28917/diff/


Testing
---

Demonstration created, demonstration jobs run and benchmarks taken.


Thanks,

Michael Starch



Re: Review Request 28917: Spark Backend to Resource Manager

2014-12-17 Thread Michael Starch


 On Dec. 11, 2014, 1:39 a.m., Chris Mattmann wrote:
  trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/StreamingInstance.java,
   line 1
  https://reviews.apache.org/r/28917/diff/1/?file=788649#file788649line1
 
  batchmgr pkg

Put in structs like SparkInstance.  This is where it belongs, as it is a struct.


 On Dec. 11, 2014, 1:39 a.m., Chris Mattmann wrote:
  trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/PalindromeUtils.java,
   line 1
  https://reviews.apache.org/r/28917/diff/1/?file=788651#file788651line1
 
  o.a.oodt.cas.resource.util

Placed in Examples, as it is shared code for the examples.  It is only intended 
to support examples.  If the class needs to be renamed, that can be done.


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28917/#review64662
---


On Dec. 17, 2014, 7:13 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28917/
 ---
 
 (Updated Dec. 17, 2014, 7:13 p.m.)
 
 
 Review request for oodt and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This change allows Spark jobs to be run in the resource manager.  It 
 integrates with a generic Spark daemon, so spark can be run locally, as a 
  test spark cluster, and in mesos (not affiliated with the resource manager 
 mesos integration).
 
 This is a solution to: https://issues.apache.org/jira/browse/OODT-780
 
 
 Diffs
 -
 
   trunk/core/pom.xml 1644527 
   trunk/resource/pom.xml 1644527 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/NoSparkFilePalindromeExample.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/PalindromeUtils.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/SparkFilePalindromeExample.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/examples/StreamingPalindromeExample.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/SparkScheduler.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/SparkSchedulerFactory.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/SparkInstance.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/StreamingInstance.java
  PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exPalindrome.xml 
 PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exSparkJob.xml PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exSparkPalindrome.xml 
 PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exStreamingPalindrome.xml 
 PRE-CREATION 
   trunk/resource/src/main/resources/resource.properties 1644527 
   
 trunk/resource/src/main/scala/org/apache/oodt/cas/resource/examples/ScalaHelloWorld.scala
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/28917/diff/
 
 
 Testing
 ---
 
 Demonstration created, demonstration jobs run and benchmarks taken.
 
 
 Thanks,
 
 Michael Starch
 




Re: Review Request 28917: Spark Backend to Resource Manager

2014-12-11 Thread Michael Starch


 On Dec. 11, 2014, 1:39 a.m., Chris Mattmann wrote:
 

I put all of the Spark stuff in a spark package so that it is separated from 
the rest of the resource manager. It is an inter-dependent set of code. Do you 
really want me to sort its components out into the various other 
subdirectories?  If so, I can.


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28917/#review64662
---


On Dec. 10, 2014, 9:38 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/28917/
 ---
 
 (Updated Dec. 10, 2014, 9:38 p.m.)
 
 
 Review request for oodt and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This change allows Spark jobs to be run in the resource manager.  It 
 integrates with a generic Spark daemon, so spark can be run locally, as a 
  test spark cluster, and in mesos (not affiliated with the resource manager 
 mesos integration).
 
 This is a solution to: https://issues.apache.org/jira/browse/OODT-780
 
 
 Diffs
 -
 
   trunk/core/pom.xml 1644524 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/SparkInstance.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/SparkScheduler.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/SparkSchedulerFactory.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/StreamingInstance.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/NoSparkFilePalindromeExample.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/PalindromeUtils.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/SparkFilePalindromeExample.java
  PRE-CREATION 
   
 trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/StreamingPalindromeExample.java
  PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exPalindrome.xml 
 PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exSparkJob.xml PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exSparkPalindrome.xml 
 PRE-CREATION 
   trunk/resource/src/main/resources/examples/jobs/exStreamingPalindrome.xml 
 PRE-CREATION 
   trunk/resource/src/main/resources/resource.properties 1644524 
   
 trunk/resource/src/main/scala/org/apache/oodt/cas/resource/examples/ScalaHelloWorld.scala
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/28917/diff/
 
 
 Testing
 ---
 
 Demonstration created, demonstration jobs run and benchmarks taken.
 
 
 Thanks,
 
 Michael Starch
 




Review Request 28917: Spark Backend to Resource Manager

2014-12-10 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28917/
---

Review request for oodt and Chris Mattmann.


Repository: oodt


Description
---

This change allows Spark jobs to be run in the resource manager.  It integrates 
with a generic Spark daemon, so spark can be run locally, as a test spark 
cluster, and in mesos (not affiliated with the resource manager mesos 
integration).

This is a solution to: https://issues.apache.org/jira/browse/OODT-780


Diffs
-

  trunk/core/pom.xml 1644524 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/SparkInstance.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/SparkScheduler.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/SparkSchedulerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/StreamingInstance.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/NoSparkFilePalindromeExample.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/PalindromeUtils.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/SparkFilePalindromeExample.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/spark/examples/StreamingPalindromeExample.java
 PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exPalindrome.xml PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exSparkJob.xml PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exSparkPalindrome.xml 
PRE-CREATION 
  trunk/resource/src/main/resources/examples/jobs/exStreamingPalindrome.xml 
PRE-CREATION 
  trunk/resource/src/main/resources/resource.properties 1644524 
  
trunk/resource/src/main/scala/org/apache/oodt/cas/resource/examples/ScalaHelloWorld.scala
 PRE-CREATION 

Diff: https://reviews.apache.org/r/28917/diff/


Testing
---

Demonstration created, demonstration jobs run and benchmarks taken.


Thanks,

Michael Starch



Re: Review Request 22791: Streaming OODT Changes

2014-12-10 Thread Michael Starch
/apache/oodt/cas/resource/monitor/MesosMonitor.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitorFactory.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/Scheduler.java
 1617800 
  http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/proto/resc.proto 
PRE-CREATION 
  http://svn.apache.org/repos/asf/oodt/trunk/streamer/pom.xml PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/assembly/assembly.xml
 PRE-CREATION 
  http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/bin/streamer 
PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/publisher/KafkaPublisher.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/publisher/Publisher.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/InputStreamReader.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/Reader.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/StreamEmptyException.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/streams/MultiFileSequentialInputStream.java.bak
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/streams/MultiFileSequentialInputStreamArcheaic.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/system/MultiSourceStreamer.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/cmd-line-actions.xml
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/cmd-line-options.xml
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/logging.properties
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/streamer.properties
 PRE-CREATION 

Diff: https://reviews.apache.org/r/22791/diff/


Testing
---

Basic functionality tests done for both the resource manager and workflow 
manager pieces.  The filemanager has been tested to properly ingest a 
GenericStream type with the lucene catalog only.


Thanks,

Michael Starch



Review Request 28916: Resource Manager To Mesos Cluster Manager Integration

2014-12-10 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28916/
---

Review request for oodt and Chris Mattmann.


Repository: oodt


Description
---

This patch integrates the mesos cluster manager and OODT resource manager.  It 
allows resource manager jobs to run on a mesos-controlled cluster. It is a 
solution to: https://issues.apache.org/jira/browse/OODT-699 and includes: 
https://reviews.apache.org/r/27773/


Diffs
-

  trunk/resource/pom.xml 1644494 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/JobSpecSerializer.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/MesosBatchManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/MesosBatchManagerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/MesosMonitor.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/MesosMonitorFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/MesosUtilities.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/ResourceExecutor.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/ResourceMesosFrameworkFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/ResourceMesosScheduler.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/exception/MesosFrameworkException.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mesos/TestMesosUtilities.java
 PRE-CREATION 

Diff: https://reviews.apache.org/r/28916/diff/


Testing
---

Basic unit testing.

Integration/prototype testing has also been done. It starts and runs.


Thanks,

Michael Starch



Confluence Wiki Edit Permissions

2014-11-25 Thread Michael Starch
All,

How do I get edit permissions to the OODT Confluence WIKI hosted at Apache?

-Michael


Re: Extra Compiler Tools

2014-11-07 Thread Michael Starch
Chris,

Did you ever come to a conclusion on this?

-Michael

On Wed, Nov 5, 2014 at 7:14 PM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 OK, let me think about this tonight.
 Maybe we can figure this out tomorrow,
 I won’t hold this up longer than that.

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 8:23 PM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 According to the specs, any subclass that implements Serializable must
 manually implement the serialization of the parents' members.  I tested
 this and it fails exactly as expected.  The parent's members aren't
 serialized.
 
 Also, JobInput is an interface so I would have no way of catching all
 of the possible implementations that could come at me.
 
 Michael
 On Nov 5, 2014 7:06 PM, Mattmann, Chris A (3980) 
 chris.a.mattm...@jpl.nasa.gov wrote:
 
  Got it, Mike.
 
  Hmm, how about simply creating SerializableJobSpec and
  SerializableJob and SerializableJobInput and then making
  them sub-class their parents and implement Serializable.
  Then, use these classes in your Mesos implementation.
  That seems self-contained, doesn’t change core classes,
  and pretty easy, right?
 
  Cheers,
  Chris
 
 
  ++
  Chris Mattmann, Ph.D.
  Chief Architect
  Instrument Software and Science Data Systems Section (398)
  NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
  Office: 168-519, Mailstop: 168-527
  Email: chris.a.mattm...@nasa.gov
  WWW:  http://sunset.usc.edu/~mattmann/
  ++
  Adjunct Associate Professor, Computer Science Department
  University of Southern California, Los Angeles, CA 90089 USA
  ++
 
 
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 8:00 PM
  To: dev@oodt.apache.org dev@oodt.apache.org
  Subject: Re: Extra Compiler Tools
 
  I need to serialize a JobSpec and children (Job and JobInput) to a
 byte[].
  Java can do this automatically by marking all three as Serializable.
  
  The work around is to manually serialize to a private inner struct and
  back
  out again.  The inner class will have members for each member in the
  JobSpec and children.  Java can the auto-serialize that without
 changing
  the other three.
  
  It is ugly, and essentially a reimplementation of those three
  classesbut it is entirely self-contained.
  
  Michael
  On Nov 5, 2014 6:45 PM, Chris Mattmann chris.mattm...@gmail.com
  wrote:
  
   Hey Mike,
  
   Hmm, what’s the work around just so I know
   what we’re trading against?
  
   Cheers,
   Chris
  
   
   Chris Mattmann
   chris.mattm...@gmail.com
  
  
  
  
   -Original Message-
   From: Michael Starch starc...@umich.edu
   Reply-To: dev@oodt.apache.org
   Date: Wednesday, November 5, 2014 at 6:31 PM
   To: dev@oodt.apache.org
   Subject: Re: Extra Compiler Tools
  
   That is basically what I did. Regardless, protobuff proves to be
  overkill.
   
   If I mark those classes as serializable, the correct solution is 2
  lines
   of
   code.  (protobuff was like 20).  Wrote a test case, and it works
   perfectly.
   
   If I cannot make JobSpec Job and JonInput implement Serializable
 then
  the
   work around is simple too.
   
   What do you think?  Should I mark them as Serializable, or use a
   work-around.  Either is a better solution than protobuff.
   
   Michael
   On Nov 5, 2014 4:44 PM, Chris Mattmann chris.mattm...@gmail.com
   wrote:
   
Mike, have you looked at this yet?
   
   
   
  
  
 
 
 http://techtraits.com/build%20management/maven/2011/09/09/compiling-proto
   co
l-buffers-from-maven/
   
   
I’m going to play with it tonight and see if
I can help here. Do you have some files I can test
with? Can you attach them to JIRA or dropbox them to me
so I can scope?
   
Cheers,
Chris
   

Chris Mattmann
chris.mattm...@gmail.com
   
   
   
   
-Original Message-
From: Michael Starch starc...@umich.edu
Reply-To: dev@oodt.apache.org
Date: Wednesday, November 5, 2014 at 5:37 PM
To: dev@oodt.apache.org
Subject: Re: Extra Compiler Tools
   
Oktime for an audible.  Protoc needs to be built from
 source, no
binary
distributions available.  Thus I am going to purge proto-buffers
  from
   the
new code and be done with it.

Any problem making the following classes/interfaces implement
java.io.Serializable:

JobSpec
Job
JobInput

Doing so would allow apache and native java serialization and
 thus
  we
wouldn't need

Re: Extra Compiler Tools

2014-11-07 Thread Michael Starch
Sounds fine; expect to see this on the review board when I get it done and
passing my test.

Michael
On Nov 7, 2014 9:44 AM, Chris Mattmann chris.mattm...@gmail.com wrote:

 Yep I’d like to see the non changing
 core structs version, in a patch to
 evaluate it. You said you know how, so let’s see it
 and then go from there. We can reserve 0.9 to
 consider changing structs if the code becomes
 too unwieldily to maintain.

 Good?

 Cheers,
 Chris

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Friday, November 7, 2014 at 9:21 AM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 Chris,
 
 Did you ever come to a conclusion on this?
 
 -Michael
 
 On Wed, Nov 5, 2014 at 7:14 PM, Chris Mattmann chris.mattm...@gmail.com
 wrote:
 
  OK, let me think about this tonight.
  Maybe we can figure this out tomorrow,
  I won’t hold this up longer than that.
 
  
  Chris Mattmann
  chris.mattm...@gmail.com
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 8:23 PM
  To: dev@oodt.apache.org
  Subject: Re: Extra Compiler Tools
 
  According to the specs, any subclass that implements Serializable must
  manually implement the serialization of the parents' members.  I tested
  this and it fails exactly as expected.  The parent's members aren't
  serialized.
  
  Also, JobInput is an interface so I would have no way of catching
 all
  of the possible implementations that could come at me.
  
  Michael
  On Nov 5, 2014 7:06 PM, Mattmann, Chris A (3980) 
  chris.a.mattm...@jpl.nasa.gov wrote:
  
   Got it, Mike.
  
   Hmm, how about simply creating SerializableJobSpec and
   SerializableJob and SerializableJobInput and then making
   them sub-class their parents and implement Serializable.
   Then, use these classes in your Mesos implementation.
   That seems self-contained, doesn’t change core classes,
   and pretty easy, right?
  
   Cheers,
   Chris
  
  
   ++
   Chris Mattmann, Ph.D.
   Chief Architect
   Instrument Software and Science Data Systems Section (398)
   NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
   Office: 168-519, Mailstop: 168-527
   Email: chris.a.mattm...@nasa.gov
   WWW:  http://sunset.usc.edu/~mattmann/
   ++
   Adjunct Associate Professor, Computer Science Department
   University of Southern California, Los Angeles, CA 90089 USA
   ++
  
  
  
  
  
  
   -Original Message-
   From: Michael Starch starc...@umich.edu
   Reply-To: dev@oodt.apache.org dev@oodt.apache.org
   Date: Wednesday, November 5, 2014 at 8:00 PM
   To: dev@oodt.apache.org dev@oodt.apache.org
   Subject: Re: Extra Compiler Tools
  
   I need to serialize a JobSpec and children (Job and JobInput) to a
  byte[].
   Java can do this automatically by marking all three as Serializable.
   
   The work around is to manually serialize to a private inner struct
 and
   back
   out again.  The inner class will have members for each member in the
   JobSpec and children.  Java can the auto-serialize that without
  changing
   the other three.
   
   It is ugly, and essentially a reimplementation of those three
   classesbut it is entirely self-contained.
   
   Michael
   On Nov 5, 2014 6:45 PM, Chris Mattmann chris.mattm...@gmail.com
   wrote:
   
Hey Mike,
   
Hmm, what’s the work around just so I know
what we’re trading against?
   
Cheers,
Chris
   

Chris Mattmann
chris.mattm...@gmail.com
   
   
   
   
-Original Message-
From: Michael Starch starc...@umich.edu
Reply-To: dev@oodt.apache.org
Date: Wednesday, November 5, 2014 at 6:31 PM
To: dev@oodt.apache.org
Subject: Re: Extra Compiler Tools
   
That is basically what I did. Regardless, protobuff proves to be
   overkill.

If I mark those classes as serializable, the correct solution is
 2
   lines
of
code.  (protobuff was like 20).  Wrote a test case, and it works
perfectly.

If I cannot make JobSpec Job and JonInput implement Serializable
  then
   the
work around is simple too.

What do you think?  Should I mark them as Serializable, or use a
work-around.  Either is a better solution than protobuff.

Michael
On Nov 5, 2014 4:44 PM, Chris Mattmann
 chris.mattm...@gmail.com
wrote:

 Mike, have you looked at this yet?



   
   
  
  
 
 
 http://techtraits.com/build%20management/maven/2011/09/09/compiling-proto
co
 l-buffers-from-maven/


 I’m going to play with it tonight and see if
 I can help here. Do you have some files

Re: Extra Compiler Tools

2014-11-07 Thread Michael Starch
Chris, Et Al.

The solution write-up can be found here.  Turns out it isn't too
ugly...it just has a lot of duplication with the classes it is wrapping.

https://reviews.apache.org/r/27773/
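
For readers who don't want to open the review: the general shape of the
workaround (a hypothetical sketch with invented field names, not the code
actually posted there) is a self-contained Serializable mirror of the core
struct, with the duplication coming from copying each field in and out:

    import java.io.Serializable;

    public class SerializableJobSpecSketch implements Serializable {
        private static final long serialVersionUID = 1L;

        // Fields mirrored from the wrapped spec (names invented for the example);
        // a fromJobSpec(...) factory and a toJobSpec() method would copy each
        // member across, which is where the duplication comes from.
        private final String jobId;
        private final String jobName;
        private final String inputClassName;

        public SerializableJobSpecSketch(String jobId, String jobName, String inputClassName) {
            this.jobId = jobId;
            this.jobName = jobName;
            this.inputClassName = inputClassName;
        }

        public String getJobId() { return jobId; }
        public String getJobName() { return jobName; }
        public String getInputClassName() { return inputClassName; }
    }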

Enjoy,

-Michael

On Fri, Nov 7, 2014 at 9:46 AM, Michael Starch starc...@umich.edu wrote:

 Sound fine, expect to see this on review board when I get it done and
 passing my test.

 Michael
 On Nov 7, 2014 9:44 AM, Chris Mattmann chris.mattm...@gmail.com wrote:

 Yep I’d like to see the non changing
 core structs version, in a patch to
 evaluate it. You said you know how, so let’s see it
 and then go from there. We can reserve 0.9 to
 consider changing structs if the code becomes
 too unwieldily to maintain.

 Good?

 Cheers,
 Chris

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Friday, November 7, 2014 at 9:21 AM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 Chris,
 
 Did you ever come to a conclusion on this?
 
 -Michael
 
 On Wed, Nov 5, 2014 at 7:14 PM, Chris Mattmann chris.mattm...@gmail.com
 
 wrote:
 
  OK, let me think about this tonight.
  Maybe we can figure this out tomorrow,
  I won’t hold this up longer than that.
 
  
  Chris Mattmann
  chris.mattm...@gmail.com
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 8:23 PM
  To: dev@oodt.apache.org
  Subject: Re: Extra Compiler Tools
 
  According to the specs, any subclass that implements Serializable must
  manually implement the serialization of the parents' members.  I
 tested
  this and it fails exactly as expected.  The parent's members aren't
  serialized.
  
  Also, JobInput is an interface so I would have no way of catching
 all
  of the possible implementations that could come at me.
  
  Michael
  On Nov 5, 2014 7:06 PM, Mattmann, Chris A (3980) 
  chris.a.mattm...@jpl.nasa.gov wrote:
  
   Got it, Mike.
  
   Hmm, how about simply creating SerializableJobSpec and
   SerializableJob and SerializableJobInput and then making
   them sub-class their parents and implement Serializable.
   Then, use these classes in your Mesos implementation.
   That seems self-contained, doesn’t change core classes,
   and pretty easy, right?
  
   Cheers,
   Chris
  
  
   ++
   Chris Mattmann, Ph.D.
   Chief Architect
   Instrument Software and Science Data Systems Section (398)
   NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
   Office: 168-519, Mailstop: 168-527
   Email: chris.a.mattm...@nasa.gov
   WWW:  http://sunset.usc.edu/~mattmann/
   ++
   Adjunct Associate Professor, Computer Science Department
   University of Southern California, Los Angeles, CA 90089 USA
   ++
  
  
  
  
  
  
   -Original Message-
   From: Michael Starch starc...@umich.edu
   Reply-To: dev@oodt.apache.org dev@oodt.apache.org
   Date: Wednesday, November 5, 2014 at 8:00 PM
   To: dev@oodt.apache.org dev@oodt.apache.org
   Subject: Re: Extra Compiler Tools
  
   I need to serialize a JobSpec and children (Job and JobInput) to a
  byte[].
   Java can do this automatically by marking all three as
 Serializable.
   
   The work around is to manually serialize to a private inner struct
 and
   back
   out again.  The inner class will have members for each member in
 the
   JobSpec and children.  Java can the auto-serialize that without
  changing
   the other three.
   
   It is ugly, and essentially a reimplementation of those three
   classesbut it is entirely self-contained.
   
   Michael
   On Nov 5, 2014 6:45 PM, Chris Mattmann chris.mattm...@gmail.com
 
   wrote:
   
Hey Mike,
   
Hmm, what’s the work around just so I know
what we’re trading against?
   
Cheers,
Chris
   

Chris Mattmann
chris.mattm...@gmail.com
   
   
   
   
-Original Message-
From: Michael Starch starc...@umich.edu
Reply-To: dev@oodt.apache.org
Date: Wednesday, November 5, 2014 at 6:31 PM
To: dev@oodt.apache.org
Subject: Re: Extra Compiler Tools
   
That is basically what I did. Regardless, protobuff proves to be
   overkill.

If I mark those classes as serializable, the correct solution is
 2
   lines
of
code.  (protobuff was like 20).  Wrote a test case, and it works
perfectly.

If I cannot make JobSpec Job and JonInput implement Serializable
  then
   the
work around is simple too.

What do you think?  Should I mark them as Serializable, or use a
work-around.  Either is a better solution than protobuff.

Michael
On Nov 5, 2014 4:44 PM, Chris Mattmann
 chris.mattm...@gmail.com
wrote

Extra Compiler Tools

2014-11-05 Thread Michael Starch
All,

I am trying to integrate apache-mesos with our resource manager. However,
mesos uses a technology called protobuff from Google for
marshaling/unmarshaling data.

This requires running a tool called protoc to generate a source file in
java.  What is the best way to integrate this step into our build process?

Options I can conceive of:
   -Check in generated java file
   -Require protoc installation to build resource manager
   -Separate extra resource package into new module

None of these ideas are very clean.

Any other ideas?  I tried setting up a profile to only compile these
sources when selected, but that turned out not to work.

-Michael Starch


Re: Extra Compiler Tools

2014-11-05 Thread Michael Starch
I tried this approach. The plugin requires a path to the protoc tool and
thus a working installation.  This is what prompted the discussion.

Running the plugin under a profile works.  However, not running the plugin
causes compile errors in dependent code.  Excluding this code except within
the profile doesn't seem to work, and is considered by some to be bad form
because there is nothing inside the jar file that notes which profiles were
used to compile.

Any ideas on how to continue?

Michael
 On Nov 5, 2014 11:04 AM, Chris Mattmann chris.mattm...@gmail.com wrote:

 Hi Mike,

 Great discussion. It would be nice if there was
 a protoc Maven plugin:

 http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html


 Looks like there is. My suggestion:

 1. use a Profile, something like -Pwith-mesos and
 then when activated;
 2. call the above plugin if -Pwith-mesos is activated
 in the resource manager

 Sound good?

 Cheers,
 Chris

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 11:46 AM
 To: dev@oodt.apache.org
 Subject: Extra Compiler Tools

 All,
 
 I am trying to integrate apache-mesos with our resource manager. However,
 mesos uses a technology called protobuff from Google for
 marshaling/unmarshaling data.
 
 This requires running a tool called protoc to generate a source file in
 java.  What is the best way to integrate this step into our build process?
 
 Options I can conceive of:
-Check in generated java file
-Require protoc installation to build resource manager
-Separate extra resource package into new module
 
 None of these ideas are very clean.
 
 Any other ideas?  I tried setting up a profile to only compile these
 sources when selected, but that turned out not to work.
 
 -Michael Starch






Re: Extra Compiler Tools

2014-11-05 Thread Michael Starch
Looks like you followed the same reasoning chain that I did.  Yes, I came
to the same conclusion that ant-build was best.

I wasn't sure how to download protoc, but you just answered that...so I
think this is a great solution!

Thanks,

Michael


On Wed, Nov 5, 2014 at 10:23 AM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 Hi Mike,

 Thanks for flushing this out.

 My thoughts on the below:


 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 12:12 PM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 I tried this approach. The plugin requires a path to the protoc tool and
 thus a working installation.  This is what prompted the discussion.

 Ah - no worries, what you could do is:

 1. only enable to plugin if -Pwith-mesos is enabled; and

 
 Running the plugin under a profile works.

 Yep.

  However, not running the plugin
 causes compile errors in dependant code.  Excluding this code except
 within
 the profile doesn't seem to work, and is considered by some to be bad form
 because there is nothing inside the jar file that notes which profiles
 were
 used to compile.

 Got it. Suggestion here would be:

 2. create a new module, cas-resource-mesos, and inside of that module,
 take one of the following approaches, assuming the module is activated
 when -Pwith-mesos is enabled:

 2a. Maven Antrun like so (in this old example):
 http://stackoverflow.com/questions/1578456/integrate-protocol-buffers-into-
 maven2-build

 (pro: more flexibility in case protoc isn't there; to fail on error; to
 only compile if protoc is available)

 2b. Maven protobuf plugin
 http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html

 Here's how to enable a module with a profile:

 http://blog.soebes.de/blog/2013/11/09/why-is-it-bad-to-activate-slash-deact
 ive-modules-by-profiles-in-maven/


 It seems like that is a bad idea though, based on that discussion.

 So, here's another option:

 1. Inside of cas-resource (no special new module or anything else)
 2. include some custom Ant magic via a build.xml file and the Maven
 AntRun plugin:
   2a. test if protoc is on the system path, and if not, download it, e.g.,
 into the target directory (gets deleted on clean)
   2b. call protoc and compile after 2a

 I would suggest this solution as I think it's the most robust and ensures
 we always have a cas-resource that includes mesos and compiled correctly.

 Cheers,
 Chris

 
 Any ideas on how to continue?
 
 Michael
  On Nov 5, 2014 11:04 AM, Chris Mattmann chris.mattm...@gmail.com
 wrote:
 
  Hi Mike,
 
  Great discussion. It would be nice if there was
  a protoc Maven plugin:
 
  http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html
 
 
  Looks like there is. My suggestion:
 
  1. use a Profile, something like -Pwith-mesos and
  then when activated;
  2. call the above plugin if -Pwith-mesos is activated
  in the resource manager
 
  Sound good?
 
  Cheers,
  Chris
 
  
  Chris Mattmann
  chris.mattm...@gmail.com
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 11:46 AM
  To: dev@oodt.apache.org
  Subject: Extra Compiler Tools
 
  All,
  
  I am trying to integrate apache-mesos with our resource manager.
 However,
  mesos uses a technology called protobuff from Google for
  marshaling/unmarshaling data.
  
  This requires running a tool called protoc to generate a source file
 in
  java.  What is the best way to integrate this step into our build
 process?
  
  Options I can conceive of:
 -Check in generated java file
 -Require protoc installation to build resource manager
 -Separate extra resource package into new module
  
  None of these ideas are very clean.
  
  Any other ideas?  I tried setting up a profile to only compile these
  sources when selected, but that turned out not to work.
  
  -Michael Starch
 
 
 





Re: OODT Source Code

2014-11-05 Thread Michael Starch
Check out our SVN repository: https://svn.apache.org/repos/asf/oodt/trunk

This points to trunk, our active-development area.

-Michael

On Tue, Nov 4, 2014 at 5:08 AM, Paulo Silveira pauload...@gmail.com wrote:

 Dear all,

 I'm a PhD candidate studying the modularity of different ecosystems. I intend to
 analyze the OODT code through code metrics; however, I need the Java
 projects for each version (release). Where can I download these projects to
 be analyzed by a metric tool?

 Thanks in advance,
 --
 Paulo Silveira
 Ph.D. Candidate at Federal University of Pernambuco (UFPE)



Re: Extra Compiler Tools

2014-11-05 Thread Michael Starch
Ok, time for an audible.  Protoc needs to be built from source; no binary
distributions available.  Thus I am going to purge proto-buffers from the
new code and be done with it.

Any problem making the following classes/interfaces implement
java.io.Serializable:

JobSpec
Job
JobInput

Doing so would allow apache and native java serialization and thus we
wouldn't need something like proto-buffers.

-Michael
Thanks Mike +1


Chris Mattmann
chris.mattm...@gmail.com




-Original Message-
From: Michael Starch starc...@umich.edu
Reply-To: dev@oodt.apache.org
Date: Wednesday, November 5, 2014 at 12:31 PM
To: dev@oodt.apache.org
Subject: Re: Extra Compiler Tools

Looks like you followed the same reasoning chain that I did.  Yes, I came
to the same conclusion that ant-build was best.

I wasn't sure how to download protoc, but you just answered thatso I
think this is a great solution!

Thanks,

Michael


On Wed, Nov 5, 2014 at 10:23 AM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 Hi Mike,

 Thanks for flushing this out.

 My thoughts on the below:


 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 12:12 PM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 I tried this approach. The plugin requires a path to the protoc tool
and
 thus a working installation.  This is what prompted the discussion.

 Ah - no worries, what you could do is:

 1. only enable to plugin if -Pwith-mesos is enabled; and

 
 Running the plugin under a profile works.

 Yep.

  However, not running the plugin
 causes compile errors in dependant code.  Excluding this code except
 within
 the profile doesn't seem to work, and is considered by some to be bad
form
 because there is nothing inside the jar file that notes which profiles
 were
 used to compile.

 Got it. Suggestion here would be:

 2. create a new module, cas-resource-mesos, and inside of that module,
 take one of the following approaches, assuming the module is activated
 when -Pwith-mesos is enabled:

 2a. Maven Antrun like so (in this old example):

http://stackoverflow.com/questions/1578456/integrate-protocol-buffers-int
o-
 maven2-build

 (pro: more flexibility in case protoc isn't there; to fail on error; to
 only compile if protoc is available)

 2b. Maven protobuf plugin
 http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html

 Here's how to enable a module with a profile:


http://blog.soebes.de/blog/2013/11/09/why-is-it-bad-to-activate-slash-dea
ct
 ive-modules-by-profiles-in-maven/


 It seems like that is a bad idea though, based on that discussion.

 So, here's another option:

 1. Inside of cas-resource (no special new module or anything else)
 2. include some custom Ant magic via a build.xml file and the Maven
 AntRun plugin:
   2a. test if protoc is on the system path, and if not, download it,
e.g.,
 into the target directory (gets deleted on clean)
   2b. call protoc and compile after 2a

 I would suggest this solution as I think it's the most robust and
ensures
 we always have a cas-resource that includes mesos and compiled
correctly.

 Cheers,
 Chris

 
 Any ideas on how to continue?
 
 Michael
  On Nov 5, 2014 11:04 AM, Chris Mattmann chris.mattm...@gmail.com
 wrote:
 
  Hi Mike,
 
  Great discussion. It would be nice if there was
  a protoc Maven plugin:
 
  http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html
 
 
  Looks like there is. My suggestion:
 
  1. use a Profile, something like -Pwith-mesos and
  then when activated;
  2. call the above plugin if -Pwith-mesos is activated
  in the resource manager
 
  Sound good?
 
  Cheers,
  Chris
 
  
  Chris Mattmann
  chris.mattm...@gmail.com
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 11:46 AM
  To: dev@oodt.apache.org
  Subject: Extra Compiler Tools
 
  All,
  
  I am trying to integrate apache-mesos with our resource manager.
 However,
  mesos uses a technology called protobuff from Google for
  marshaling/unmarshaling data.
  
  This requires running a tool called protoc to generate a source
file
 in
  java.  What is the best way to integrate this step into our build
 process?
  
  Options I can conceive of:
 -Check in generated java file
 -Require protoc installation to build resource manager
 -Separate extra resource package into new module
  
  None of these ideas are very clean.
  
  Any other ideas?  I tried setting up a profile to only compile these
  sources when selected, but that turned out not to work.
  
  -Michael Starch
 
 
 





Re: Extra Compiler Tools

2014-11-05 Thread Michael Starch
That is basically what I did. Regardless, protobuff proves to be overkill.

If I mark those classes as serializable, the correct solution is 2 lines of
code.  (protobuff was like 20).  Wrote a test case, and it works perfectly.
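
For reference, the round trip those few lines buy is just standard JDK object
serialization; a minimal sketch, assuming the structs are marked Serializable:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class SerializationSketch {
        // Serialize any Serializable object (e.g. a JobSpec once it implements
        // java.io.Serializable) into a byte[] for hand-off.
        public static byte[] toBytes(Serializable obj) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buffer);
            out.writeObject(obj);
            out.close();
            return buffer.toByteArray();
        }

        // Rebuild the object on the receiving side.
        public static Object fromBytes(byte[] data) throws IOException, ClassNotFoundException {
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
            Object obj = in.readObject();
            in.close();
            return obj;
        }
    }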

If I cannot make JobSpec, Job, and JobInput implement Serializable then the
work-around is simple too.

What do you think?  Should I mark them as Serializable, or use a
work-around?  Either is a better solution than protobuff.

Michael
On Nov 5, 2014 4:44 PM, Chris Mattmann chris.mattm...@gmail.com wrote:

 Mike, have you looked at this yet?

 http://techtraits.com/build%20management/maven/2011/09/09/compiling-protoco
 l-buffers-from-maven/


 I’m going to play with it tonight and see if
 I can help here. Do you have some files I can test
 with? Can you attach them to JIRA or dropbox them to me
 so I can scope?

 Cheers,
 Chris

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 5:37 PM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 Oktime for an audible.  Protoc needs to be built from source, no
 binary
 distributions available.  Thus I am going to purge proto-buffers from the
 new code and be done with it.
 
 Any problem making the following classes/interfaces implement
 java.io.Serializable:
 
 JobSpec
 Job
 JobInput
 
 Doing so would allow apache and native java serialization and thus we
 wouldn't need something like proto-buffers.
 
 -Michael
 Thanks Mike +1
 
 
 Chris Mattmann
 chris.mattm...@gmail.com
 
 
 
 
 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 12:31 PM
 To: dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools
 
 Looks like you followed the same reasoning chain that I did.  Yes, I came
 to the same conclusion that ant-build was best.
 
 I wasn't sure how to download protoc, but you just answered thatso I
 think this is a great solution!
 
 Thanks,
 
 Michael
 
 
 On Wed, Nov 5, 2014 at 10:23 AM, Chris Mattmann
 chris.mattm...@gmail.com
 wrote:
 
  Hi Mike,
 
  Thanks for flushing this out.
 
  My thoughts on the below:
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 12:12 PM
  To: dev@oodt.apache.org
  Subject: Re: Extra Compiler Tools
 
  I tried this approach. The plugin requires a path to the protoc tool
 and
  thus a working installation.  This is what prompted the discussion.
 
  Ah - no worries, what you could do is:
 
  1. only enable to plugin if -Pwith-mesos is enabled; and
 
  
  Running the plugin under a profile works.
 
  Yep.
 
   However, not running the plugin
  causes compile errors in dependant code.  Excluding this code except
  within
  the profile doesn't seem to work, and is considered by some to be bad
 form
  because there is nothing inside the jar file that notes which profiles
  were
  used to compile.
 
  Got it. Suggestion here would be:
 
  2. create a new module, cas-resource-mesos, and inside of that module,
  take one of the following approaches, assuming the module is activated
  when -Pwith-mesos is enabled:
 
  2a. Maven Antrun like so (in this old example):
 
 
 http://stackoverflow.com/questions/1578456/integrate-protocol-buffers-in
 t
 o-
  maven2-build
 
  (pro: more flexibility in case protoc isn't there; to fail on error; to
  only compile if protoc is available)
 
  2b. Maven protobuf plugin
  http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html
 
  Here's how to enable a module with a profile:
 
 
 
 http://blog.soebes.de/blog/2013/11/09/why-is-it-bad-to-activate-slash-de
 a
 ct
  ive-modules-by-profiles-in-maven/
 
 
  It seems like that is a bad idea though, based on that discussion.
 
  So, here's another option:
 
  1. Inside of cas-resource (no special new module or anything else)
  2. include some custom Ant magic via a build.xml file and the Maven
  AntRun plugin:
2a. test if protoc is on the system path, and if not, download it,
 e.g.,
  into the target directory (gets deleted on clean)
2b. call protoc and compile after 2a
 
  I would suggest this solution as I think it's the most robust and
 ensures
  we always have a cas-resource that includes mesos and compiled
 correctly.
 
  Cheers,
  Chris
 
  
  Any ideas on how to continue?
  
  Michael
   On Nov 5, 2014 11:04 AM, Chris Mattmann chris.mattm...@gmail.com
  wrote:
  
   Hi Mike,
  
   Great discussion. It would be nice if there was
   a protoc Maven plugin:
  
   http://sergei-ivanov.github.io/maven-protoc-plugin/usage.html
  
  
   Looks like there is. My suggestion:
  
   1. use a Profile, something like -Pwith-mesos and
   then when activated;
   2. call the above plugin if -Pwith-mesos is activated
   in the resource manager
  
   Sound good

Re: Extra Compiler Tools

2014-11-05 Thread Michael Starch
According to the spec, a Serializable subclass of a non-Serializable parent
must manually implement the serialization of the parent's members.  I tested
this and it fails exactly as expected: the parent's members aren't
serialized.
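
To make that concrete, a minimal sketch with made-up Parent/Child classes (not
the OODT structs): because the parent is not Serializable, default
serialization skips its fields and the subclass has to write and read them
itself:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // Parent does not implement Serializable: its state is skipped by default
    // serialization, and it must have a no-arg constructor for deserialization.
    class Parent {
        protected String name = "unset";
        public Parent() {}
    }

    class Child extends Parent implements Serializable {
        private static final long serialVersionUID = 1L;
        private int count;

        Child(String name, int count) { this.name = name; this.count = count; }

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();   // writes Child's own fields only (count)
            out.writeObject(name);      // the parent's member must be handled by hand
        }

        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            name = (String) in.readObject();
        }
    }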

Also, JobInput is an interface so I would have no way of catching all
of the possible implementations that could come at me.

Michael
On Nov 5, 2014 7:06 PM, Mattmann, Chris A (3980) 
chris.a.mattm...@jpl.nasa.gov wrote:

 Got it, Mike.

 Hmm, how about simply creating SerializableJobSpec and
 SerializableJob and SerializableJobInput and then making
 them sub-class their parents and implement Serializable.
 Then, use these classes in your Mesos implementation.
 That seems self-contained, doesn’t change core classes,
 and pretty easy, right?

 Cheers,
 Chris


 ++
 Chris Mattmann, Ph.D.
 Chief Architect
 Instrument Software and Science Data Systems Section (398)
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 168-519, Mailstop: 168-527
 Email: chris.a.mattm...@nasa.gov
 WWW:  http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Associate Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++






 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org dev@oodt.apache.org
 Date: Wednesday, November 5, 2014 at 8:00 PM
 To: dev@oodt.apache.org dev@oodt.apache.org
 Subject: Re: Extra Compiler Tools

 I need to serialize a JobSpec and children (Job and JobInput) to a byte[].
 Java can do this automatically by marking all three as Serializable.
 
 The workaround is to manually serialize to a private inner struct and back
 out again.  The inner class will have members for each member in the
 JobSpec and children.  Java can then auto-serialize that without changing
 the other three.
 
 It is ugly, and essentially a reimplementation of those three
 classes...but it is entirely self-contained.
 
 Michael
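
 (A rough sketch of the inner-holder workaround described above, assuming
 hypothetical jobId/jobName members; the real JobSpec, Job, and JobInput
 members differ.)

     import java.io.ByteArrayInputStream;
     import java.io.ByteArrayOutputStream;
     import java.io.IOException;
     import java.io.ObjectInputStream;
     import java.io.ObjectOutputStream;
     import java.io.Serializable;

     // Hypothetical holder: copy the members of the non-Serializable spec
     // into a plain Serializable class, serialize that, and copy back out.
     class SpecHolder implements Serializable {
         private static final long serialVersionUID = 1L;
         private final String jobId;    // assumed member, for illustration
         private final String jobName;  // assumed member, for illustration

         SpecHolder(String jobId, String jobName) {
             this.jobId = jobId;
             this.jobName = jobName;
         }

         byte[] toBytes() throws IOException {
             ByteArrayOutputStream bytes = new ByteArrayOutputStream();
             try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                 out.writeObject(this);
             }
             return bytes.toByteArray();
         }

         static SpecHolder fromBytes(byte[] data)
                 throws IOException, ClassNotFoundException {
             try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(data))) {
                 return (SpecHolder) in.readObject();
             }
         }
     }
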
 On Nov 5, 2014 6:45 PM, Chris Mattmann chris.mattm...@gmail.com
 wrote:
 
  Hey Mike,
 
  Hmm, what’s the work around just so I know
  what we’re trading against?
 
  Cheers,
  Chris
 
  
  Chris Mattmann
  chris.mattm...@gmail.com
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, November 5, 2014 at 6:31 PM
  To: dev@oodt.apache.org
  Subject: Re: Extra Compiler Tools
 
 That is basically what I did. Regardless, protobuf proves to be
 overkill.
 
 If I mark those classes as serializable, the correct solution is 2 lines
 of code.  (protobuf was like 20.)  Wrote a test case, and it works
 perfectly.
 
 If I cannot make JobSpec, Job, and JobInput implement Serializable, then
 the workaround is simple too.
 
 What do you think?  Should I mark them as Serializable, or use a
 workaround?  Either is a better solution than protobuf.
  
  Michael
  On Nov 5, 2014 4:44 PM, Chris Mattmann chris.mattm...@gmail.com
  wrote:
  
   Mike, have you looked at this yet?
  
  
  
 
 
 http://techtraits.com/build%20management/maven/2011/09/09/compiling-proto
  co
   l-buffers-from-maven/
  
  
   I’m going to play with it tonight and see if
   I can help here. Do you have some files I can test
   with? Can you attach them to JIRA or dropbox them to me
   so I can scope?
  
   Cheers,
   Chris
  
   
   Chris Mattmann
   chris.mattm...@gmail.com
  
  
  
  
   -Original Message-
   From: Michael Starch starc...@umich.edu
   Reply-To: dev@oodt.apache.org
   Date: Wednesday, November 5, 2014 at 5:37 PM
   To: dev@oodt.apache.org
   Subject: Re: Extra Compiler Tools
  
    OK...time for an audible.  Protoc needs to be built from source, no
    binary distributions available.  Thus I am going to purge proto-buffers
    from the new code and be done with it.
   
   Any problem making the following classes/interfaces implement
   java.io.Serializable:
   
   JobSpec
   Job
   JobInput
   
   Doing so would allow apache and native java serialization and thus
 we
   wouldn't need something like proto-buffers.
   
   -Michael
   Thanks Mike +1
   
   
   Chris Mattmann
   chris.mattm...@gmail.com
   
   
   
   
   -Original Message-
   From: Michael Starch starc...@umich.edu
   Reply-To: dev@oodt.apache.org
   Date: Wednesday, November 5, 2014 at 12:31 PM
   To: dev@oodt.apache.org
   Subject: Re: Extra Compiler Tools
   
    Looks like you followed the same reasoning chain that I did.  Yes, I
    came to the same conclusion that ant-build was best.
    
    I wasn't sure how to download protoc, but you just answered that...so I
    think this is a great solution!
   
   Thanks,
   
   Michael
   
   
   On Wed, Nov 5, 2014 at 10:23 AM

Re: Review Request 22791: Streaming OODT Changes

2014-11-01 Thread Michael Starch


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/core/pom.xml, line 274
  https://reviews.apache.org/r/22791/diff/2/?file=659641#file659641line274
 
  Custom Maven repos are difficult since the Central repository is 
  phasing them out:
  
  
  http://blog.sonatype.com/2010/03/why-external-repos-are-being-phased-out-of-central/
  
  If we absolutely need to ref this repo, can you make it a Maven 
  profile, not enabled by default, so that users won't have the issue for 
  this when downloading and building OODT?

These jars exist in the standard repo, so I am now pulling them from there.


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/ResourceMesosFrameworkFactory.java,
   line 1
  https://reviews.apache.org/r/22791/diff/2/?file=659658#file659658line1
 
  All of these files need ALv2 license headers.

Added ALv2 headers.


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22791/#review52324
---


On Oct. 24, 2014, 10:35 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/22791/
 ---
 
 (Updated Oct. 24, 2014, 10:35 p.m.)
 
 
 Review request for oodt, Lewis McGibbney and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This patch contains all the changes needed to add in streaming oodt into 
 the oodt svn repository.
 
 There are four main portions:
-Mesos Framework for Resource Manager (Prototype working)
-Spark Runner for Workflow Manager (Prototype working)
-Filemanager streaming type (In development)
-Deployment and cluster management scripts (In development)
 
 Where can this stuff be put so that it is available to use, even while it is 
 in development?
 
 
 Note: Filemanager work (and corrections here-in) have been moved to sub-patch 
 review: https://reviews.apache.org/r/27172
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/shutdown.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-master.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-slave.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/resource.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/utilites.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/env-vars.sh.tmpl
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/hosts 
 PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/install.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/required-software.txt
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/core/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/cli/action/IngestProductCliAction.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/datatransfer/LocalDataTransferer.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/examples/MimeTypeExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Reference.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/system/XmlRpcFileManager.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/BasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/DateTimeVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/SingleFileBasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/VersioningUtils.java
  1617800 
   http://svn.apache.org/repos/asf/oodt

Re: Could not mvn install OODT from the trunk

2014-10-29 Thread Michael Starch
MengYing,

I cannot duplicate it on Mac OS X 10.9.5 with Java 7, nor on CentOS with Java 7.

Is the error repeatable on your end?

Michael
On Oct 29, 2014 4:01 PM, MengYing Wang mengyingwa...@gmail.com wrote:

 Hi Michael,

 $ java -version
 java version 1.7.0_51
 Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
 Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode).

 Thanks.

 Best,
 Mengying Wang

 On Wed, Oct 29, 2014 at 6:52 AM, Michael Starch starc...@umich.edu
 wrote:

  Mengying,
 
  I just tested this on Mac OS X 10.9.5 using Java 7 and had no issues.  Can
  you tell me your Java version?  In the meantime I will test with Java 6, as
  I suspect that this might be an issue.
 
  --Michael
  Hi Everyone,
 
  This is my command sequence and some log:
 
  $svn co http://svn.apache.org/repos/asf/oodt/trunk/ oodt_trunk
 
  $cd oodt_trunk/
 
  $mvn clean install
 
  ..
 
  ---
 
   T E S T S
 
  ---
 
  Running org.apache.oodt.cas.resource.mux.TestQueueMuxMonitor
 
  Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.277 sec
   FAILURE!
 
  Running org.apache.oodt.cas.resource.queuerepo.TestXmlQueueRepository
 
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec
 
  Running org.apache.oodt.cas.resource.monitor.TestGangliaResourceMonitor
 
  Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.41 sec
 
  Running org.apache.oodt.cas.resource.cli.TestResourceCli
 
  Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.848
 sec
 
  Running org.apache.oodt.cas.resource.jobqueue.TestJobStack
 
  Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.063 sec
 
  Running org.apache.oodt.cas.resource.system.TestXmlRpcResourceManager
 
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.346 sec
 
  Running org.apache.oodt.cas.resource.monitor.TestGangliaXMLParser
 
  Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec
 
  Running org.apache.oodt.cas.resource.mux.TestQueueMuxBatchmgr
 
  Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.057 sec
 
  Running org.apache.oodt.cas.resource.monitor.TestAssignmentMonitor
 
  Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.248 sec
 
  Running org.apache.oodt.cas.resource.util.TestUlimit
 
  Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec
 
 
  Results :
 
 
  Failed tests:
 
 
  Tests run: 48, Failures: 1, Errors: 0, Skipped: 0
 
 
  [INFO]
  
 
  [ERROR] BUILD FAILURE
 
  [INFO]
  
 
  [INFO] There are test failures.
 
  Please refer to
  /Users/AngelaWang/Downloads/oodt_trunk/resource/target/surefire-reports
 for
  the individual test results.
  ..
 
  Attached is the detailed test result for the failed TestQueueMuxMonitor
  test. I am using Mac OS X 10.9.2. Thank you for your help!
 
  --
  Best,
  Mengying (Angela) Wang
 



 --
 Best,
 Mengying (Angela) Wang



Re: Review Request 22791: Streaming OODT Changes

2014-10-25 Thread Michael Starch
/MesosFrameworkException.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/proto/ResourceProto.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitor.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/monitor/MesosMonitorFactory.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/Scheduler.java
 1617800 
  http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/proto/resc.proto 
PRE-CREATION 
  http://svn.apache.org/repos/asf/oodt/trunk/streamer/pom.xml PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/assembly/assembly.xml
 PRE-CREATION 
  http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/bin/streamer 
PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/publisher/KafkaPublisher.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/publisher/Publisher.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/InputStreamReader.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/Reader.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/StreamEmptyException.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/streams/MultiFileSequentialInputStream.java.bak
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/streams/MultiFileSequentialInputStreamArcheaic.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/system/MultiSourceStreamer.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/cmd-line-actions.xml
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/cmd-line-options.xml
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/logging.properties
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/streamer.properties
 PRE-CREATION 

Diff: https://reviews.apache.org/r/22791/diff/


Testing
---

Basic functionality tests done for both the resource-manager and workflow 
manager pieces.  The filemanager has been tested to properly ingest a 
GenericStream type with the Lucene catalog only.


Thanks,

Michael Starch



Review Request 27172: Filemanager Changes for Streaming OODT Changes

2014-10-25 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27172/
---

Review request for oodt and Chris Mattmann.


Repository: oodt


Description
---

A sub-patch handling the filemanager components of 
https://reviews.apache.org/r/22791/

These changes will not affect other users with the small exception that there 
is now a guard clause in setProductStructure.  Thus if a user specifies an 
invalid product structure, they will get an error.  Anything that is broken by 
this is effectively a bug.  Three tests have been updated to remove bugs 
discovered by this guard clause.

All other changes add functionality for the new STREAM structure.  All 
recommendations from: https://reviews.apache.org/r/22791/ have been fixed.

These changes are a preparation step for SOODT.


Diffs
-

  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/cli/action/IngestProductCliAction.java
 1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/datatransfer/LocalDataTransferer.java
 1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java
 1634141 
  trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java 
1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Reference.java 
1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/system/XmlRpcFileManager.java
 1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/BasicVersioner.java
 1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/DateTimeVersioner.java
 1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/SingleFileBasicVersioner.java
 1634141 
  
trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/VersioningUtils.java
 1634141 
  
trunk/webapp/fmprod/src/test/java/org/apache/oodt/cas/product/jaxrs/resources/ProductResourceTest.java
 1634141 
  
trunk/webapp/fmprod/src/test/java/org/apache/oodt/cas/product/jaxrs/resources/TransferResourceTest.java
 1634141 
  
trunk/webapp/fmprod/src/test/java/org/apache/oodt/cas/product/jaxrs/resources/TransfersResourceTest.java
 1634141 

Diff: https://reviews.apache.org/r/27172/diff/


Testing
---

Built and ran unit tests; these pass.  Testing was done to check the new STREAM 
type; it works.


Thanks,

Michael Starch



Re: Review Request 22791: Streaming OODT Changes

2014-10-25 Thread Michael Starch


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java,
   line 79
  https://reviews.apache.org/r/22791/diff/2/?file=659644#file659644line79
 
  how about, instead of NA for the filename, we call it a stream-UUID, 
  where we generate a unique stream UUID as the FILENAME field?

Shifted this work to sub-patch: https://reviews.apache.org/r/27172
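
(Roughly the kind of thing being suggested: stamp stream products with a
generated UUID in the FILENAME met field.  The "Filename" key literal and the
Metadata.replaceMetadata(String, String) call are assumptions for this sketch.)

    import java.util.UUID;

    import org.apache.oodt.cas.metadata.Metadata;

    // Hypothetical: use a generated stream UUID in place of a real on-disk
    // filename for stream-type products.
    public class StreamFilenameExample {
        public static void addStreamFilename(Metadata metadata) {
            metadata.replaceMetadata("Filename", "stream-" + UUID.randomUUID());
        }
    }
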


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22791/#review52324
---


On Sept. 4, 2014, 5:23 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/22791/
 ---
 
 (Updated Sept. 4, 2014, 5:23 p.m.)
 
 
 Review request for oodt, Lewis McGibbney and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This patch contains all the changes needed to add in streaming oodt into 
 the oodt svn repository.
 
 There are four main portions:
-Mesos Framework for Resource Manager (Prototype working)
-Spark Runner for Workflow Manager (Prototype working)
-Filemanager streaming type (In development)
-Deployment and cluster management scripts (In development)
 
 Where can this stuff be put so that it is available to use, even while it is 
 in development?
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/shutdown.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-master.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-slave.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/resource.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/utilites.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/env-vars.sh.tmpl
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/hosts 
 PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/install.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/required-software.txt
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/core/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/cli/action/IngestProductCliAction.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/datatransfer/LocalDataTransferer.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/examples/MimeTypeExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Reference.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/system/XmlRpcFileManager.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/BasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/DateTimeVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/SingleFileBasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/VersioningUtils.java
  1617800 
   http://svn.apache.org/repos/asf/oodt/trunk/resource/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManager.java
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManagerFactory.java
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/MesosUtilities.java
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/mesos/OODTExecutor.java
  PRE-CREATION 
   
 http://svn.apache.org/repos

Review Request 27179: Cluster Tools from Streaming OODT Changes

2014-10-25 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27179/
---

Review request for oodt and Chris Mattmann.


Repository: oodt


Description
---

Cluster tools from: https://reviews.apache.org/r/22791/

Apache headers added and all ready to go.  These scripts are added to aid in 
running an apache cluster.


Diffs
-

  trunk/cluster-tools/scripts/shutdown.sh PRE-CREATION 
  trunk/cluster-tools/scripts/start-up.sh PRE-CREATION 
  trunk/cluster-tools/scripts/start-up/mesos-master.bash PRE-CREATION 
  trunk/cluster-tools/scripts/start-up/mesos-slave.bash PRE-CREATION 
  trunk/cluster-tools/scripts/start-up/resource.bash PRE-CREATION 
  trunk/cluster-tools/scripts/utilites.sh PRE-CREATION 
  trunk/cluster-tools/setup/env-vars.sh.tmpl PRE-CREATION 
  trunk/cluster-tools/setup/hosts PRE-CREATION 
  trunk/cluster-tools/setup/install.sh PRE-CREATION 

Diff: https://reviews.apache.org/r/27179/diff/


Testing
---

Scripts function, and do not interfere with anything else.


Thanks,

Michael Starch



Re: Review Request 22791: Streaming OODT Changes

2014-10-25 Thread Michael Starch


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java,
   line 82
  https://reviews.apache.org/r/22791/diff/2/?file=659644#file659644line82
 
  how about a similarly derived STREAM loc here, perhaps the same as the 
  FILENAME suggestion using UUID?

See: https://reviews.apache.org/r/27172


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java,
   line 30
  https://reviews.apache.org/r/22791/diff/2/?file=659646#file659646line30
 
  not sure what this change is?

Added guard clause for ProductStructure, so users cannot end up with an unknown 
structure.  All tests that are in violation of this have been fixed. See: 
https://reviews.apache.org/r/27172
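
(A minimal sketch of what such a guard clause could look like; the structure
values, in particular "Stream", are assumptions and the actual patch may
differ.)

    // Hypothetical sketch of the guard-clause idea: reject anything other
    // than a known product structure instead of storing an unknown value.
    public class GuardedProduct {
        private String productStructure;

        public void setProductStructure(String productStructure) {
            if (!"Flat".equals(productStructure)
                    && !"Hierarchical".equals(productStructure)
                    && !"Stream".equals(productStructure)) {
                throw new IllegalArgumentException(
                    "Unknown product structure: " + productStructure);
            }
            this.productStructure = productStructure;
        }
    }
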


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/examples/MimeTypeExtractor.java,
   line 76
  https://reviews.apache.org/r/22791/diff/2/?file=659645#file659645line76
 
  rather than silently do something, maybe take this out until there is 
  something to do?

Done see: https://reviews.apache.org/r/27172


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22791/#review52324
---


On Oct. 24, 2014, 10:35 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/22791/
 ---
 
 (Updated Oct. 24, 2014, 10:35 p.m.)
 
 
 Review request for oodt, Lewis McGibbney and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This patch contains all the changes needed to add in streaming oodt into 
 the oodt svn repository.
 
 There are four main portions:
-Mesos Framework for Resource Manager (Prototype working)
-Spark Runner for Workflow Manager (Prototype working)
-Filemanager streaming type (In development)
-Deployment and cluster management scripts (In development)
 
 Where can this stuff be put so that it is available to use, even while it is 
 in development?
 
 
 Note: Filemanager work (and corrections here-in) have been moved to sub-patch 
 review: https://reviews.apache.org/r/27172
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/shutdown.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-master.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-slave.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/resource.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/utilites.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/env-vars.sh.tmpl
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/hosts 
 PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/install.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/required-software.txt
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/core/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/cli/action/IngestProductCliAction.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/datatransfer/LocalDataTransferer.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/examples/MimeTypeExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Reference.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/system/XmlRpcFileManager.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/BasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning

Review Request 27162: Multiplexing Resource Manager Backend

2014-10-24 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27162/
---

Review request for oodt and Chris Mattmann.


Repository: oodt


Description
---

Changes to add a Multiplexing Resource Manager Backend. Please see: 
https://issues.apache.org/jira/browse/OODT-764


Diffs
-

  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/BackendManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/BackendRepository.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/BackendRepositoryFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxBatchManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxBatchManagerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxMonitor.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxScheduler.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxSchedulerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/StandardBackendManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/XmlBackendRepository.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/XmlBackendRepositoryFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/exceptions/RepositoryException.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/util/GenericResourceManagerObjectFactory.java
 1634124 
  trunk/resource/src/main/resources/examples/queue-to-backend-mapping.xml 
PRE-CREATION 
  trunk/resource/src/main/resources/resource.properties 1634124 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/TestQueueMuxBatchmgr.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/TestQueueMuxMonitor.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/mocks/MockBatchManager.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/mocks/MockMonitor.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27162/diff/


Testing
---

Unit tests written.  Basic functionality written.


Thanks,

Michael Starch



Re: Review Request 27162: Multiplexing Resource Manager Backend

2014-10-24 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27162/
---

(Updated Oct. 24, 2014, 8:59 p.m.)


Review request for oodt and Chris Mattmann.


Summary (updated)
-

Multiplexing Resource Manager Backend


Repository: oodt


Description
---

Changes to add a Multiplexing Resource Manager Backend. Please see: 
https://issues.apache.org/jira/browse/OODT-764


Diffs
-

  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/BackendManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/BackendRepository.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/BackendRepositoryFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxBatchManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxBatchManagerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxMonitor.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxScheduler.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/QueueMuxSchedulerFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/StandardBackendManager.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/XmlBackendRepository.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/mux/XmlBackendRepositoryFactory.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/structs/exceptions/RepositoryException.java
 PRE-CREATION 
  
trunk/resource/src/main/java/org/apache/oodt/cas/resource/util/GenericResourceManagerObjectFactory.java
 1634124 
  trunk/resource/src/main/resources/examples/queue-to-backend-mapping.xml 
PRE-CREATION 
  trunk/resource/src/main/resources/resource.properties 1634124 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/TestQueueMuxBatchmgr.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/TestQueueMuxMonitor.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/mocks/MockBatchManager.java
 PRE-CREATION 
  
trunk/resource/src/test/org/apache/oodt/cas/resource/mux/mocks/MockMonitor.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/27162/diff/


Testing
---

Unit tests written.  Basic functionality written.


Thanks,

Michael Starch



Re: Review Request 22791: Streaming OODT Changes

2014-09-10 Thread Michael Starch


 On Sept. 4, 2014, 5:37 p.m., Chris Mattmann wrote:
  http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/shutdown.sh,
   line 1
  https://reviews.apache.org/r/22791/diff/2/?file=659631#file659631line1
 
  Mike, instead of directly including cluster-tools in oodt, can you 
  simply make:
  
  # A Maven AntRun script (build.xml) or something that downloads the 
  cluster-tools as part of the Resource Manager build? We shouldn't have to 
  maintain these scripts in OODT.
 
 Michael Starch wrote:
 Download from where? Do we have a place to upload unmaintained OPS 
 scripts?
 
 Chris Mattmann wrote:
 THanks Mike. Ask @paulramirez about how to use Maven AntRun or 
 assembly.xml - you could theoretically reference Mesos if it has a 
 distribution for its scripts in the Central repo - you could ref the dist 
 dependency as a dependency and then unpack them dynamically into a directory. 
 If it doesn't have a Maven dist assembly for the Mesos scripts, then my 
 recommendation would simply be to:
 
 1. Create a build.xml that downloads (e.g., from Mesos trunk or a tag in 
 Apache git) those scripts into whatever OODT build directory you want (inside 
 of Resource Manager probably makes the most sense)
 2. Use the Maven Ant-Run plugin to call that build.xml in resource/pom.xml
 3. Consider using a Maven profile (e.g., mvn -Pstreaming) to insulate 
 your streaming profile cluster-tools download

To be clear...these are scripts I created to set up and run our cluster for use 
under OODT.  They currently just handle the supporting Mesos components, but 
eventually will start up OODT in a streaming-enabled mode as well. The Mesos 
project knows nothing about them.  If we do not want them maintained by OODT, 
then I can remove them from the check-in; however, in my opinion the inclusion 
of these scripts goes toward making a more ops-friendly OODT package.  This is 
one of the current goals OODT is working toward.

Am I missing something here?


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22791/#review52324
---


On Sept. 4, 2014, 5:23 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/22791/
 ---
 
 (Updated Sept. 4, 2014, 5:23 p.m.)
 
 
 Review request for oodt, Lewis McGibbney and Chris Mattmann.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This patch contains all the changes needed to add in streaming oodt into 
 the oodt svn repository.
 
 There are four main portions:
-Mesos Framework for Resource Manager (Prototype working)
-Spark Runner for Workflow Manager (Prototype working)
-Filemanager streaming type (In development)
-Deployment and cluster management scripts (In development)
 
 Where can this stuff be put so that it is available to use, even while it is 
 in development?
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/shutdown.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-master.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-slave.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/resource.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/utilites.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/env-vars.sh.tmpl
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/hosts 
 PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/install.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/required-software.txt
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/core/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/cli/action/IngestProductCliAction.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/datatransfer/LocalDataTransferer.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/examples/MimeTypeExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java
  1617800 
   
 http

Re: Review Request 22791: Streaming OODT Changes

2014-08-14 Thread Michael Starch


 On Aug. 14, 2014, 8:43 p.m., Sean Kelly wrote:
  Do the supporting scripts have to be BASH-specific? We'd show better 
  compatibility with a wider range of Unix systems if we could stick with 
  pure /bin/sh.

Currently the scripts use Bash-specific features.  These scripts are not 
required to run the new components; they only simplify the process of starting 
up things and getting a basic environment running.  They fall into the category 
of operations scripts that help the user get an out-of-the-box version of the 
code running.  There are new desires to create operations/basic-usage scripts 
for OODT to help OODT run out-of-the-box in a preconfigured way.  Do we have a 
standard for these types of scripts?

If compatibility with most systems is desired, then there are a number of other 
questionable things in these scripts...like the usage of screen.


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22791/#review50634
---


On Aug. 13, 2014, 10:56 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/22791/
 ---
 
 (Updated Aug. 13, 2014, 10:56 p.m.)
 
 
 Review request for oodt.
 
 
 Repository: oodt
 
 
 Description
 ---
 
 This patch contains all the changes needed to add in streaming oodt into 
 the oodt svn repository.
 
 There are four main portions:
-Mesos Framework for Resource Manager (Prototype working)
-Spark Runner for Workflow Manager (Prototype working)
-Filemanager streaming type (In development)
-Deployment and cluster management scripts (In development)
 
 Where can this stuff be put so that it is available to use, even while it is 
 in development?
 
 
 Diffs
 -
 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/shutdown.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-master.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/mesos-slave.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/start-up/resource.bash
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/scripts/utilites.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/env-vars.sh.tmpl
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/hosts 
 PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/install.sh 
 PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/cluster-tools/setup/required-software.txt
  PRE-CREATION 
   http://svn.apache.org/repos/asf/oodt/trunk/core/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/cli/action/IngestProductCliAction.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/datatransfer/LocalDataTransferer.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/CoreMetExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/metadata/extractors/examples/MimeTypeExtractor.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Product.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/structs/Reference.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/system/XmlRpcFileManager.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/BasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/DateTimeVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/SingleFileBasicVersioner.java
  1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/filemgr/src/main/java/org/apache/oodt/cas/filemgr/versioning/VersioningUtils.java
  1617800 
   http://svn.apache.org/repos/asf/oodt/trunk/resource/pom.xml 1617800 
   
 http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr/MesosBatchManager.java
  PRE-CREATION 
   
 http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/batchmgr

Re: Review Request 22791: Streaming OODT Changes

2014-08-13 Thread Michael Starch
://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/java/org/apache/oodt/cas/resource/scheduler/Scheduler.java
 1617800 
  http://svn.apache.org/repos/asf/oodt/trunk/resource/src/main/proto/resc.proto 
PRE-CREATION 
  http://svn.apache.org/repos/asf/oodt/trunk/streamer/pom.xml PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/assembly/assembly.xml
 PRE-CREATION 
  http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/bin/streamer 
PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/publisher/KafkaPublisher.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/publisher/Publisher.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/InputStreamReader.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/Reader.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/reader/StreamEmptyException.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/streams/MultiFileSequentialInputStream.java.bak
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/streams/MultiFileSequentialInputStreamArcheaic.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/java/org/apache/oodt/cas/streamer/system/MultiSourceStreamer.java
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/cmd-line-actions.xml
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/cmd-line-options.xml
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/logging.properties
 PRE-CREATION 
  
http://svn.apache.org/repos/asf/oodt/trunk/streamer/src/main/resources/streamer.properties
 PRE-CREATION 

Diff: https://reviews.apache.org/r/22791/diff/


Testing
---

Basic functionality tests done for both the resource-manager and workflow 
manager pieces.  The filemanager has been tested to properly ingest a 
GenericStream type with the Lucene catalog only.


Thanks,

Michael Starch



Re: Multiple Processing Paradigms at Once

2014-08-06 Thread Michael Starch
Chris,

I believe I am working on the trunk of OODT.  The WEngine branch already
does what I need it to; however, the absence of this functionality in trunk
led me to wonder if this approach is undesired...hence this question.

-Michael


On Wed, Aug 6, 2014 at 8:50 AM, Chris Mattmann chris.mattm...@gmail.com
wrote:

 Hi Mike,

 I think you are using the wengine branch of Apache OODT.
 That is unmaintained. I would sincerely urge you to get
 this working in trunk, that's where the developers are working
 right now.

 Cheers,
 Chris

 P.S. Let me think more about the below I have some ideas there.

 
 Chris Mattmann
 chris.mattm...@gmail.com




 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org
 Date: Wednesday, August 6, 2014 8:29 AM
 To: dev@oodt.apache.org
 Subject: Multiple Processing Paradigms at Once

 All,
 
 I am working on upgrading OODT to allow it to process streaming data,
 alongside traditional non-streaming jobs.  This means that some jobs need
 to be run by the resource manager, and other jobs need to be submitted to
 the stream-processing.  Therefore, processing needs to be forked or
 multiplexed at some point in the life-cycle.
 
 There are two places where this can be done: workflow manager runners, and
 the resource manager.  Currently, I am  working on building workflow
 runners, and doing the job-multiplexing there because this cuts out one
 superfluous step for the streaming jobs (namely going to the resource
 manager before being routed).
 
 Are there any comments on this approach or does this approach make sense?
 
 -Michael Starch





Re: Multiple Processing Paradigms at Once

2014-08-06 Thread Michael Starch
Chris,

I did find the Runners in the trunk, and used them for the prototype piece
to submit to Spark.  Specifically, 
http://svn.apache.org/repos/asf/oodt/branches/wengine-branch/wengine/src/main/java/org/apache/oodt/cas/workflow/engine/runner/MappedMultiRunner.java
has yet to be ported over to the trunk, and I am wondering if there is a
better solution than to using a Runner to submit to multiple Runners.


-Michael


On Wed, Aug 6, 2014 at 9:08 AM, Mattmann, Chris A (3980) 
chris.a.mattm...@jpl.nasa.gov wrote:

 See OODT-215 and OODT-491 the functionality is not absent it is present
 and those issues are the  current status

 Sent from my iPhone

  On Aug 6, 2014, at 8:53 AM, Michael Starch starc...@umich.edu wrote:
 
  Chris,
 
  I believe I am working on the trunk of OODT.  The WEngine branch already
  does what I need it to; however, the absence of this functionality in
 trunk
  led me to wonder if this approach is undesired...hence this question.
 
  -Michael
 
 
  On Wed, Aug 6, 2014 at 8:50 AM, Chris Mattmann chris.mattm...@gmail.com
 
  wrote:
 
  Hi Mike,
 
  I think you are using the wengine branch of Apache OODT.
  That is unmaintained. I would sincerely urge you to get
  this working in trunk, that's where the developers are working
  right now.
 
  Cheers,
  Chris
 
  P.S. Let me think more about the below I have some ideas there.
 
  
  Chris Mattmann
  chris.mattm...@gmail.com
 
 
 
 
  -Original Message-
  From: Michael Starch starc...@umich.edu
  Reply-To: dev@oodt.apache.org
  Date: Wednesday, August 6, 2014 8:29 AM
  To: dev@oodt.apache.org
  Subject: Multiple Processing Paradigms at Once
 
  All,
 
  I am working on upgrading OODT to allow it to process streaming data,
  alongside traditional non-streaming jobs.  This means that some jobs
 need
  to be run by the resource manager, and other jobs need to be submitted
 to
  the stream-processing.  Therefore, processing needs to be forked or
  multiplexed at some point in the life-cycle.
 
  There are two places where this can be done: workflow manager runners,
 and
  the resource manager.  Currently, I am  working on building workflow
  runners, and doing the job-multiplexing there because this cuts out one
  superfluous step for the streaming jobs (namely going to the resource
  manager before being routed).
 
  Are there any comments on this approach or does this approach make
 sense?
 
  -Michael Starch
 
 
 



Re: Multiple Processing Paradigms at Once

2014-08-06 Thread Michael Starch
Chris,

This makes sense to me.  Do you see any problems with a BatchMgr
implementation that submits to child BatchMgr implementations?  This
would allow users to mux to any BatchMgrs that they choose.

-Michael
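
(A rough sketch of the mux idea under discussion: one parent batch manager
routing each job to a child chosen by queue.  The interface here is a
simplified, hypothetical stand-in; the real Batchmgr API differs.)

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical, simplified batch-manager interface for illustration only.
    interface SimpleBatchmgr {
        void executeRemotely(String jobId, String node);
    }

    // Parent batch manager that muxes jobs to child batch managers by queue.
    class QueueMuxBatchmgrSketch implements SimpleBatchmgr {
        private final Map<String, SimpleBatchmgr> childrenByQueue = new HashMap<>();
        private final Map<String, String> queueForJob;

        QueueMuxBatchmgrSketch(Map<String, String> queueForJob) {
            this.queueForJob = queueForJob;
        }

        void addChild(String queue, SimpleBatchmgr child) {
            childrenByQueue.put(queue, child);
        }

        @Override
        public void executeRemotely(String jobId, String node) {
            SimpleBatchmgr child = childrenByQueue.get(queueForJob.get(jobId));
            if (child == null) {
                throw new IllegalStateException("No backend for job " + jobId);
            }
            child.executeRemotely(jobId, node);  // delegate to the chosen child
        }
    }
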





On Wed, Aug 6, 2014 at 9:35 AM, Mattmann, Chris A (3980) 
chris.a.mattm...@jpl.nasa.gov wrote:

 Gotcha, thanks Mike, great job and glad you are using the trunk.
 I think it may make sense to do this as a BatchMgr implementation
 rather than a Runner.

 The big reason is that the whole intent of obfuscating the where/how
 something is run is architecturally supposed to be the job of the
 Resource Manager. In Wengine this was blurred a bit, and the Runner
 framework is great that was developed there, but I'd like to get to
 pushing this functionality into the Resource Manager. IOW, to me
 it makes sense to update the Resource Manager to talk to Mesos, not
 the Workflow Manager (which shouldn't care). Does that make sense?

 Cheers,
 Chris


 ++
 Chris Mattmann, Ph.D.
 Chief Architect
 Instrument Software and Science Data Systems Section (398)
 NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
 Office: 168-519, Mailstop: 168-527
 Email: chris.a.mattm...@nasa.gov
 WWW:  http://sunset.usc.edu/~mattmann/
 ++
 Adjunct Associate Professor, Computer Science Department
 University of Southern California, Los Angeles, CA 90089 USA
 ++






 -Original Message-
 From: Michael Starch starc...@umich.edu
 Reply-To: dev@oodt.apache.org dev@oodt.apache.org
 Date: Wednesday, August 6, 2014 9:22 AM
 To: dev@oodt.apache.org dev@oodt.apache.org
 Subject: Re: Multiple Processing Paradigms at Once

 Chris,
 
 I did find the Runners in the trunk, and used them for the prototype piece
 to submit to Spark.  Specifically, 
 
 http://svn.apache.org/repos/asf/oodt/branches/wengine-branch/wengine/src/m
 ain/java/org/apache/oodt/cas/workflow/engine/runner/MappedMultiRunner.java
 has yet to be ported over to the trunk, and I am wondering if there is a
 better solution than to using a Runner to submit to multiple Runners.
 
 
 -Michael
 
 
 On Wed, Aug 6, 2014 at 9:08 AM, Mattmann, Chris A (3980) 
 chris.a.mattm...@jpl.nasa.gov wrote:
 
  See OODT-215 and OODT-491 the functionality is not absent it is present
  and those issues are the  current status
 
  Sent from my iPhone
 
   On Aug 6, 2014, at 8:53 AM, Michael Starch starc...@umich.edu
 wrote:
  
   Chris,
  
    I believe I am working on the trunk of OODT.  The WEngine branch
 already
   does what I need it to, however; the absence of this functionality in
  trunk
   led me to wonder if this approach is undesired...hence this question.
  
   -Michael
  
  
   On Wed, Aug 6, 2014 at 8:50 AM, Chris Mattmann
 chris.mattm...@gmail.com
  
   wrote:
  
   Hi Mike,
  
   I think you are using the wengine branch of Apache OODT.
   That is unmaintained. I would sincerely urge you to get
   this working in trunk, that's where the developers are working
   right now.
  
   Cheers,
   Chris
  
   P.S. Let me think more about the below I have some ideas there.
  
   
   Chris Mattmann
   chris.mattm...@gmail.com
  
  
  
  
   -Original Message-
   From: Michael Starch starc...@umich.edu
   Reply-To: dev@oodt.apache.org
   Date: Wednesday, August 6, 2014 8:29 AM
   To: dev@oodt.apache.org
   Subject: Multiple Processing Paradigms at Once
  
   All,
  
   I am working on upgrading OODT to allow it to process streaming
 data,
   alongside traditional non-streaming jobs.  This means that some jobs
  need
   to be run by the resource manager, and other jobs need to be
 submitted
  to
   the stream-processing.  Therefore, processing needs to be forked or
   multiplexed at some point in the life-cycle.
  
   There are two places where this can be done: workflow manager
 runners,
  and
   the resource manager.  Currently, I am  working on building workflow
   runners, and doing the job-multiplexing there because this cuts out
 one
   superfluous step for the streaming jobs (namely going to the
 resource
   manager before being routed).
  
   Are there any comments on this approach or does this approach make
  sense?
  
   -Michael Starch
  
  
  
 




Re: [ANNOUNCE] Tyler Palsulich is now on the PMC too!

2014-07-24 Thread Michael Starch
Great!  Welcome, Tyler.

As another new PMC member, I agree that we should discuss a roadmap.

-Michael
On Jul 22, 2014 11:58 AM, Michael Joyce jo...@apache.org wrote:

 Yay! Welcome Tyler!!!


 -- Joyce


 On Tue, Jul 22, 2014 at 11:36 AM, Tom Barber tom.bar...@meteorite.bi
 wrote:

  Aye I'll +1 that, and move this over to dev@

 We have an ever expanding PMC, so lets do some Project Managing! ;)

 I think we need to come up with some roadmap goals, both regarding
 features, and also the existing code base, build systems and delivery.

 Personally I think it would be beneficial to get a bunch of us on the
 phone and discuss it in real time (I'm GMT-based, which does make it a
 little trickier) rather than hash it out over a 6-month chain of emails.
 Like I said the other day, I don't really know how other projects manage
 this type of thing so those with more experience may want to chime in here.

 Regards

 Tom



 On 22/07/14 18:33, Lewis John Mcgibbney wrote:

   Hi Tyler,
  Please see Tom's thread from before the weekend.
  I say we have a full on discussion on establishing a roadmap.
  For us to establish and agree on this will be extremely beneficial for
 the project.
  Lewis


 On Tue, Jul 22, 2014 at 10:29 AM, Tyler Palsulich tpalsul...@gmail.com
 wrote:

 Hi All,

  Thanks for inviting me to join as a committer and PMC member! I'm
 happy to be part of the project. Any tips on where to dive in?

  Tyler






Re: Review Request: Suggested Fix for JIRA Issue OODT-553

2013-02-01 Thread Michael Starch


 On Jan. 30, 2013, 2:09 a.m., Chris Mattmann wrote:
  /trunk/commons/src/main/java/org/apache/oodt/commons/exec/EnvUtilities.java,
   line 77
  https://reviews.apache.org/r/9142/diff/1/?file=253029#file253029line77
 
  Hey Mike, here we are no longer calling preProcessInputStream -- 
  doesn't that do envVarReplace?

Chris,  the only thing that this function does is line = 
line.replaceAll(, );.  There is no envVarReplace here.  I did 
some quick tests to see if this was an issue, but I will test this again to see 
if I can get a better idea of any implications here.

Are you experiencing any specific problems? 


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9142/#review15817
---


On Jan. 29, 2013, 9:25 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/9142/
 ---
 
 (Updated Jan. 29, 2013, 9:25 p.m.)
 
 
 Review request for oodt and Chris Mattmann.
 
 
 Description
 ---
 
 This is a review for a patch for JIRA issue OODT-553 
 (https://issues.apache.org/jira/browse/OODT-553).  It fixes EnvUtilities 
 to use System.getenv() and not exec env.
 
 
 Diffs
 -
 
   /trunk/commons/src/main/java/org/apache/oodt/commons/exec/EnvUtilities.java 
 1439711 
   /trunk/commons/src/test/org/apache/oodt/commons/exec/TestEnvUtilities.java 
 1439711 
 
 Diff: https://reviews.apache.org/r/9142/diff/
 
 
 Testing
 ---
 
 Wrote several new unit tests that test this on unix systems  only as it 
 requires USER and HOME env vars to be set.
 
 
 Thanks,
 
 Michael Starch
 




Re: Review Request: Suggested Fix for JIRA Issue OODT-553

2013-02-01 Thread Michael Starch


 On Jan. 30, 2013, 2:09 a.m., Chris Mattmann wrote:
 

The line line = line.replaceAll(, ); is used to allow the 
call to Properties.load(InputStream).  The reason for this is that the load 
method expects '\' to escape something, and will drop invalid escapes.  
Therefore '\' must become \\ to be properly loaded without causing unwanted 
escapes or being ignored.

This is all handled by Java on the back end when calling getEnvironment, i.e., 
any env variable, complete with any '\'s, is already properly read in.

I cannot get the new method to fail on any environment variable containing a 
series of 1 to 7 '\'s against the old implementation in a regression test.  
Therefore, I expect the above implementation to function properly.
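
(For reference, a minimal sketch of the approach: build the environment
Properties from System.getenv() rather than exec'ing env and parsing its
output.  The class and method names here are illustrative, not the actual
EnvUtilities API.)

    import java.util.Map;
    import java.util.Properties;

    // Sketch: environment Properties built from System.getenv(); values come
    // back already unescaped, so no '\' doubling is needed as it was when
    // piping `env` output through Properties.load().
    public class EnvSketch {
        public static Properties environmentAsProperties() {
            Properties env = new Properties();
            for (Map.Entry<String, String> e : System.getenv().entrySet()) {
                env.setProperty(e.getKey(), e.getValue());
            }
            return env;
        }
    }
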


- Michael


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9142/#review15817
---


On Jan. 29, 2013, 9:25 p.m., Michael Starch wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/9142/
 ---
 
 (Updated Jan. 29, 2013, 9:25 p.m.)
 
 
 Review request for oodt and Chris Mattmann.
 
 
 Description
 ---
 
 This is a review for a patch for JIRA issue OODT-553 
  (https://issues.apache.org/jira/browse/OODT-553).  It fixes EnvUtilities 
  to use System.getenv() and not exec env.
 
 
 Diffs
 -
 
   /trunk/commons/src/main/java/org/apache/oodt/commons/exec/EnvUtilities.java 
 1439711 
   /trunk/commons/src/test/org/apache/oodt/commons/exec/TestEnvUtilities.java 
 1439711 
 
 Diff: https://reviews.apache.org/r/9142/diff/
 
 
 Testing
 ---
 
 Wrote several new unit tests that test this on unix systems  only as it 
 requires USER and HOME env vars to be set.
 
 
 Thanks,
 
 Michael Starch
 




Review Request: Suggested Fix for JIRA Issue OODT-553

2013-01-29 Thread Michael Starch

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/9142/
---

Review request for oodt and Chris Mattmann.


Description
---

This is a review for a patch for JIRA issue OODT-553 
(https://issues.apache.org/jira/browse/OODT-553).  It fixes EnvUtilities 
to use System.getenv() and not exec env.


Diffs
-

  /trunk/commons/src/main/java/org/apache/oodt/commons/exec/EnvUtilities.java 
1439711 
  /trunk/commons/src/test/org/apache/oodt/commons/exec/TestEnvUtilities.java 
1439711 

Diff: https://reviews.apache.org/r/9142/diff/


Testing
---

Wrote several new unit tests that test this on unix systems  only as it 
requires USER and HOME env vars to be set.


Thanks,

Michael Starch