Looks like consensus on option 1:

OPTION 1 (https://github.com/ckadner/bahir_from_spark_8301fad) has the
examples folder and src folder at module root level, i.e.:

- streaming-akka
  - examples/src/main/...
  - src/main/...
- streaming-mqtt
  - examples/src/main/...
  - src/main/...
Good point. It's also an alternative.
Regards
JB
On 06/07/2016 04:24 PM, Luciano Resende wrote:
An alternative would be to have a git repository for each runtime (e.g. a
new git repo for Flink, etc.).
On Tuesday, June 7, 2016, Jean-Baptiste Onofré wrote:
Thanks for the update Christian.
I would go for option 1 but with spark modules.
An alternative would be to have a git repository for each runtime (e.g. a
new git repo for Flink, etc.).
On Tuesday, June 7, 2016, Jean-Baptiste Onofré wrote:
> Thanks for the update Christian.
>
> I would go for option 1 but with spark modules.
>
> As Bahir can contain extensions for different projects, we should add a
> project top level module:
Thanks for the update Christian.
I would go for option 1 but with spark modules.
As Bahir can contain extensions for different projects, we should add a
project top level module:
- spark
-- streaming-akka
-- streaming-mqtt
My $0.01
Regards
JB
On 06/07/2016 08:41 AM, Christian Kadner wrote:
Following Steve's second suggestion, I created an initial Bahir repository
from a Spark snapshot (8301fad) and purged the unwanted code using git
filter-branch magic to preserve commit history for files, like the examples,
that needed to be moved. Adding build configuration, license file, etc. would
need to be done on top of that.
The shim approach seems pretty good. We use that in Spark itself too in
order to subclass or use semi-private APIs from other projects.
On Wed, Jun 1, 2016 at 4:30 PM, Mridul Muralidharan
wrote:
> Another alternative, which is what we do internally here, is to have a shim
> layer in o.a.s namespace, with everything else in our own namespace.
Another alternative, which is what we do internally here, is to have a shim
layer in o.a.s namespace, with everything else in our own namespace.
The shim acts not just as a means of reaching private APIs, but also as an
abstraction (in theory replaceable with a different version, implementation,
etc.). Keeping the rest of ...
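To make the shim idea concrete, here is a minimal sketch of the pattern as
described above; every class, method, and package name in it is hypothetical,
invented for illustration, not a real Spark or Bahir API:

    // All names below are placeholders invented for illustration only.
    // A package-private API inside Spark's namespace:
    package org.apache.spark.streaming {
      private[spark] class InternalReceiverApi {
        def start(): Unit = ()
      }
    }

    // The shim lives under org.apache.spark, so the Scala compiler lets it
    // touch private[spark] members, but it exposes only a small public
    // surface of its own.
    package org.apache.spark.bahirshim {
      class ReceiverShim {
        private val internal = new org.apache.spark.streaming.InternalReceiverApi
        def start(): Unit = internal.start()
      }
    }

    // Everything else stays in the project's own namespace and depends only
    // on the shim, never on Spark internals directly.
    package org.apache.bahir.streaming {
      class MqttReceiver {
        private val shim = new org.apache.spark.bahirshim.ReceiverShim
        def run(): Unit = shim.start()
      }
    }

If a private API shifts between Spark versions, only the shim needs reworking;
the code in the project's own namespace keeps compiling against the shim's
stable public surface.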
On Wed, Jun 1, 2016 at 10:44 AM, Mridul Muralidharan wrote:
> That would also depend on whether it is possible to move out of o.a.s
> namespace - given the private[spark], etc. restrictions that Scala
> imposes.
> I think Steve did allude to this in his mail.
I think that should be a (reasonably) s
OK, let me create a subtask on importing the code to verify whether there are
any restrictions based on how the current code interacts with Spark APIs.
On Wed, Jun 1, 2016 at 10:44 AM, Mridul Muralidharan
wrote:
> That would also depend on whether it is possible to move out of o.a.s
> namespace - given the private[spark], etc. restrictions that Scala imposes.
How about org.apache.bahir?
++
Chris Mattmann, Ph.D.
Chief Architect
Instrument Software and Science Data Systems Section (398)
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 168-519, Mailstop: 168-527
Email: chris.a.mattm...@jpl.nasa.gov
That would also depend on whether it is possible to move out of o.a.s
namespace - given the private[spark], etc. restrictions that Scala
imposes.
I think Steve did allude to this in his mail.
Assuming that is not an issue (or can be resolved by making
appropriate SPI visibility changes in Apache Spark) ...
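For reference, a small hypothetical example of the Scala restriction being
discussed; the names are made up, not actual Spark code:

    // Illustration only: made-up names, not real Spark classes.
    package org.apache.spark.util {
      // Visible to code in org.apache.spark and its sub-packages only.
      private[spark] object InternalClock {
        def now(): Long = System.nanoTime()
      }
    }

    package org.apache.bahir.streaming {
      object Example {
        // Would NOT compile if uncommented: org.apache.bahir sits outside
        // the org.apache.spark package tree, so private[spark] blocks it.
        // val t = org.apache.spark.util.InternalClock.now()
      }
    }

So code moved out of the o.a.s namespace keeps working only if it sticks to
public APIs, if Spark relaxes the visibility, or if a shim stays behind in
the o.a.s namespace.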
Sorry for the misuse of the word "artifacts"; I meant source, sample
source, and markdown for docs, all under one extension umbrella.
Now, a point Steve brought up on the other thread relates to extensions:
are we going to continue to use the org.apache.spark package, or should we
use something else?
(first post, hope the apache.org email gets in)
Two options spring to mind:
- pure source import, making a note of the origin repo & commit
- snapshot the Spark repo, purge most of it, and then move things into the
folders you want
Now, what kind of code layout do we want, so as to encourage a contrib/
tree?
On 1 June 2016 at 07:27,
I agree with importing just the source.
Regards
Mridul
On Tuesday, May 31, 2016, Mattmann, Chris A (3980)
<chris.a.mattm...@jpl.nasa.gov> wrote:
> I think source is fine by me for the first import - is there
> a specific reason to have all the artifacts?
Hi Luciano,
I would say that just a source import is enough (with git you can preserve
the history). For me, the sources include the examples.
Regards
JB
On 06/01/2016 02:42 AM, Luciano Resende wrote:
I was thinking about how to import the initial extensions from Spark and
had some initial thoughts:
I think source is fine by me for the first import - is there
a specific reason to have all the artifacts?
++
Chris Mattmann, Ph.D.
Chief Architect
Instrument Software and Science Data Systems Section (398)
NASA Jet Propulsion Laboratory Pasadena, CA 91109 USA
Office: 168-519, Mailstop: 168-527
Email: chris.a.mattm...@jpl.nasa.gov
I was thinking about how to import the initial extensions from Spark and
had some initial thoughts:
- Import as much history as possible
- Create a folder structure where all artifacts for a given extension live
in that folder, including source, examples, docs, etc.
Thoughts? Any suggestions?