The "feature/GEODE-9" branch has just been created. In addition to the core
Geode code, it also has a "gemfire-spark-connector" sub-directory from the
recent Geode code drop.

On Thu, Jul 2, 2015 at 5:56 PM, John Blum <jb...@pivotal.io> wrote:

> Personally, I would like to see Apache Geode become more modular, even down
> to the key low-level functional components, or features of Geode (such as
> Querying/Indexing, Persistence, Compression, Security,
> Management/Monitoring, Function Execution, even Membership, etc, etc). Of
> course, such fine-grained modularity at this point will be very difficult
> to achieve in the short-term given the unclear delineation of concerns in
> the code, but certainly high-level features such as the Spark Integration,
> along with other good examples, such as the eventual HTTP Session
> Management, Hibernate support, Memcached integration along with the
> eventual rollout of the Redis integration, or even our tooling (jVSD, Gfsh,
> etc, etc) are prime candidates to keep separate, with individual
> deliverables.
>
> These "other modules" should consume Geode artifacts and not be directly
> tied to the Geode "core" (codebase), thus making Geode more modular,
> extensible, configurable with different provider implementations
> (conforming to well-defined "SPIs") etc.
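
[Editor's note: a minimal sketch of the kind of well-defined SPI described above, using Java's standard `ServiceLoader` mechanism. The `CompressionProvider` interface and its implementation are hypothetical illustrations, not actual Geode APIs.]

```java
import java.nio.charset.StandardCharsets;
import java.util.ServiceLoader;

public class SpiExample {
    // Hypothetical SPI the core would define; real Geode interfaces differ.
    interface CompressionProvider {
        String name();
        byte[] compress(byte[] data);
    }

    // A trivial implementation that a separate add-on module could ship.
    static class NoOpCompressionProvider implements CompressionProvider {
        public String name() { return "no-op"; }
        public byte[] compress(byte[] data) { return data; }
    }

    public static void main(String[] args) {
        // In a packaged add-on, the core discovers implementations via a
        // META-INF/services registration on the classpath, so plugins never
        // need a compile-time dependency on core internals. Nothing is
        // registered in this stand-alone sketch, so fall back to a direct
        // instance.
        ServiceLoader<CompressionProvider> loader =
                ServiceLoader.load(CompressionProvider.class);
        CompressionProvider provider = loader.findFirst()
                .orElseGet(NoOpCompressionProvider::new);

        byte[] out = provider.compress("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(provider.name() + ": " + out.length + " bytes");
    }
}
```

The key point is the direction of the dependency: the core defines the interface, and satellite modules supply implementations at runtime.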
>
> Spring Data GemFire is one such example that "consumes" GemFire/Geode
> artifacts and evolves concurrently, but separately.  More add-ons/plugins
> should evolve the same way, and Geode should be the "core", umbrella
> project for all the satellite efforts, IMO.
>
> -John
>
>
> On Thu, Jul 2, 2015 at 5:39 PM, Anthony Baker <aba...@pivotal.io> wrote:
>
> > >
> > > We are wondering whether to have this as part of the Geode repo or in a
> > > separate public GitHub repo?
> > >
> >
> > I think the spark connector belongs in the geode community, which implies
> > the geode ASF repo.  I think we can address the other concerns
> > technically.
> >
> > > General Question:
> > > Can a module under the Geode repo be released independently of a Geode
> > > release? E.g., can we release the connector without tying it to a Geode
> > > release?
> >
> > This is an interesting question I don’t know the answer to.  However, I
> > think we can handle this by creating a geode release frequently enough to
> > satisfy our community.  For example, if there is a new spark version
> > available we can determine if there is value to the community in creating
> > a release (geode + spark connector) containing that support.  Another
> > option to explore is to create a looser coupling such that the spark
> > connector can work across multiple spark versions (I know this is possible
> > with Hadoop, not sure about Spark).
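
[Editor's note: one common way to get that looser coupling in Java is to probe the classpath for a version-specific class via reflection and branch accordingly, rather than compiling against a single Spark release. A sketch, assuming the connector keys off `SparkSession` (present since Spark 2.0) versus `SparkContext` (present in all versions):]

```java
public class SparkVersionProbe {
    // Returns true if the named class can be found on the classpath,
    // without initializing it.
    static boolean classPresent(String className) {
        try {
            Class.forName(className, false,
                    SparkVersionProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Choose a code path based on what is actually on the classpath.
        if (classPresent("org.apache.spark.sql.SparkSession")) {
            System.out.println("Using Spark 2.x+ API path");
        } else if (classPresent("org.apache.spark.SparkContext")) {
            System.out.println("Using Spark 1.x API path");
        } else {
            System.out.println("Spark not on classpath");
        }
    }
}
```

The version-specific code would then live behind a common interface, with only the probed branch loaded at runtime.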
> >
> > >
> > > Any input/suggestions?
> > >
> > > Thanks,
> > > -Anil.
> >
> > Anthony
> >
> >
>
>
> --
> -John
> 503-504-8657
> john.blum10101 (skype)
>
