On Fri, Jul 8, 2016 at 5:05 PM Sean Busbey <bus...@cloudera.com> wrote:

> On Fri, Jul 8, 2016 at 3:40 PM, Christopher <ctubb...@apache.org> wrote:
> > On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey <bus...@cloudera.com> wrote:
> >> Would we be bumping the Hadoop version while incrementing our minor
> >> version number or our major version number?
> >>
> >>
> >>
> > Minor only, because it's not necessarily a breaking change, and it's
> > unrelated to the API. It'd still be reasonable for somebody to easily
> > patch the 1.x version to use the earlier Hadoop/HTrace versions.
> >
> > Specifically, I was thinking for 1.8.0. Since H2.8 isn't out yet, that'd
> > mean either no change in 1.8.0, or a change to make it sync up with H2.7.
>
> My only concern would be that updating our listed Hadoop dependency
> version would make it easy for someone to accidentally rely on a
> Hadoop API call that wasn't in earlier versions, which would then make
> it harder for an interested person to patch their 1.y version to use
> the earlier Hadoop version.
>
> HBase checks compilation against different Hadoop versions in their
> precommit checks. We could add something like that to our nightly
> builds maybe?
>
> Now that we're discussing it, I can't actually remember if we ever
> documented what version(s) of Hadoop we expect to work with. So maybe
> updating to the latest minor release of 2.y on each Accumulo 1.y minor
> release can just be our new thing.
>
> --
> busbey
>

I don't know that we'd have to update every time... but we can certainly
make it a point to consider it prior to each release.

Personally, I'm okay with newer versions of Accumulo requiring newer
versions of Hadoop, and using newer APIs which don't work on older Hadoop
versions. What we release is a baseline anyway... if users have specific
needs for specific deployments, they may have to do some
backporting/patching/dependency convergence/integration, and I think that's
okay. We can even try to help them along on the mailing lists when this
occurs. I don't think it's reasonable for us to try to make long-term
guarantees about being able to run on such a wide range of Hadoop
versions. It's just not tenable to do that sort of thing upstream. We can
be cognizant and helpful, but sometimes it's easier to keep development
going by moving to newer deps.
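For what it's worth, the HBase-style cross-version compile check mentioned
above could be approximated with a small wrapper in a nightly job. This is
only a sketch under the assumption that the build exposes a `hadoop.version`
Maven property; the version list below is illustrative, not a
recommendation:

```shell
# Sketch of a nightly cross-version compile check (assumed setup: the
# project's pom.xml lets -Dhadoop.version override the Hadoop dependency).
# The version list is hypothetical; a real job would pin the versions we
# actually claim to support.
hadoop_versions="2.6.4 2.7.2"

# Print the command matrix; a nightly job would execute each line and
# fail fast on the first non-zero exit status.
for v in $hadoop_versions; do
  printf 'mvn clean compile -DskipTests -Dhadoop.version=%s\n' "$v"
done
```

Running each generated command (instead of printing it) and reporting which
Hadoop versions fail would give us roughly what HBase's precommit does,
without committing to a hard compatibility guarantee.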
