More important than being easy to develop is being easy to pick up and use.

Improving the new-user experience needs attention from Geode.  How we
develop and provide Spark integration needs to take this into account.

Once we are able to provide official releases, how can a user verify that
they are getting the correct plug-in version, one with reasonably up-to-date
support for the latest Geode and Spark releases?
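
For example, other connectors in the Spark ecosystem handle this by
encoding the supported Spark version into the artifact's version string,
so the build tool resolves the match.  A minimal sketch of what that could
look like for us (the coordinates and version scheme below are
hypothetical, not published artifacts):

    // build.sbt -- hypothetical coordinates and version scheme; nothing
    // here is a published Geode artifact.  It only illustrates encoding
    // the supported Spark version into the connector's version string.
    libraryDependencies +=
      "org.apache.geode" %% "geode-spark-connector" % "0.5.0-spark1.4"

A user on Spark 1.4 would pick the -spark1.4 build and know at a glance
which line of the connector tracks their Spark release.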

That, to me, is the first requirement we should be designing for in our
development process.

On Tue, Jul 7, 2015 at 10:47 AM, Roman Shaposhnik <ro...@shaposhnik.org>
wrote:

> On Tue, Jul 7, 2015 at 10:34 AM, Anilkumar Gingade <aging...@pivotal.io>
> wrote:
> > Agree... and that's the point.  The connector code needs to catch up
> > with the Spark release train; if it's part of Geode, then Geode
> > releases need to happen as often as Spark releases (along with other
> > planned Geode releases)...
>
> I don't think it is a realistic goal to have that many actively supported
> branches of the Geode Spark connector.
>
> Look, I've been around the Hadoop ecosystem for years.  Nowhere is the
> problem of integration with upstream as acute as in the Hadoop ecosystem
> (everything depends on everything else, and everything evolves like
> crazy).  I haven't seen a single project in that ecosystem that could
> support a blanket statement like the one above.  Maybe Geode has
> resources that projects depending on something like HBase simply don't
> have.
>
> Thanks,
> Roman.
>



-- 
Greg Chase

Director of Big Data Communities
http://www.pivotal.io/big-data

Pivotal Software
http://www.pivotal.io/

650-215-0477
@GregChase
Blog: http://geekmarketing.biz/
