I think this is generally a good idea for managing multiple target runtimes.

One question I have, though: is it really necessary to support so many
release branches and so many compile targets? And what about the versions
of Hadoop underneath each of those versions of HBase? Are we committed to
running against every version of HDFS that each of those HBase releases
supports? The test load will be massive, and the suite itself must be
rock-solid in order to run across so many permutations. I fear that’s an
unreasonable burden for an open source community.

Thanks,
Nick

On Sat, May 18, 2019 at 11:36 AM Thomas D'Silva
<tdsi...@salesforce.com.invalid> wrote:

> +1, I think this is a good idea. This would make it easier to contribute
> and commit since you would only have to create a single patch.
> The tests would take longer to run (1.3, 1.4, 1.5 and 2.x). We should make
> sure our precommit build will run the tests for all the modules.
>
>
>
> On Fri, May 17, 2019 at 11:23 AM la...@apache.org <la...@apache.org>
> wrote:
>
> > Hi all,
> > historically we have a branch for each version of HBase we want to
> > support. As a result we have many branches, committing is a hassle, and
> > it is easy to miss a change across branches.
> > Instead we could have a Maven module per version of HBase we want to
> > support and move the version-dependent code there. Take a look at what
> > Tephra is doing: https://github.com/apache/incubator-tephra
> > They have a compat module for each supported version of HBase, and
> > version-dependent code is "simply" copied into those modules. There's
> > still duplicate code, but at least there's only one branch to maintain.
> > It's somewhat of a bigger project now.
> > Thoughts?
> > -- Lars
> >
>
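For context, here is a minimal, self-contained sketch (in Java) of one way
the per-version compat-module idea described above could be structured. All
names in it (CompatShim, the phoenix-compat-* module names, the
version-detection strategy) are hypothetical illustrations, not actual
Tephra or Phoenix code:

    // Hypothetical sketch of the "compat module" pattern described above.
    // Module names, class names, and the version-detection strategy are
    // assumptions for illustration only.

    // Shared module (e.g. a hypothetical phoenix-compat-api):
    interface CompatShim {
        // One method per behavior that differs between HBase versions.
        String describeFilterAdapter();
    }

    // Per-version module (e.g. a hypothetical phoenix-compat-hbase-1.4):
    class Hbase14Shim implements CompatShim {
        @Override
        public String describeFilterAdapter() {
            return "adapter compiled against the HBase 1.4.x client API";
        }
    }

    // Per-version module (e.g. a hypothetical phoenix-compat-hbase-2.0):
    class Hbase20Shim implements CompatShim {
        @Override
        public String describeFilterAdapter() {
            return "adapter compiled against the HBase 2.x client API";
        }
    }

    // Version-independent code asks a small factory for the right shim.
    public class CompatShimFactory {
        static CompatShim forHBaseVersion(String hbaseVersion) {
            // In a real build the version might come from the classpath or
            // from HBase's version metadata; a plain string keeps this
            // sketch self-contained and runnable.
            if (hbaseVersion.startsWith("2.")) {
                return new Hbase20Shim();
            }
            return new Hbase14Shim();
        }

        public static void main(String[] args) {
            System.out.println(forHBaseVersion("2.0.5").describeFilterAdapter());
            System.out.println(forHBaseVersion("1.4.10").describeFilterAdapter());
        }
    }

The point of the pattern is that only the small shim classes live in the
per-version Maven modules; everything else compiles once against the shared
interface, which is what leaves a single branch to maintain.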
