> There's no reason our HDFS usage should be exposed in the HBase client
> code, and I think the application classpath feature for YARN in that
> version can isolate us on the MR side.
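
If I'm reading that right, the feature in question is the "deploy the MR
framework via the distributed cache" mechanism, i.e.
mapreduce.application.framework.path plus mapreduce.application.classpath.
Roughly, the job submitter would set something like the following; the
tarball path, version, and classpath entries below are placeholders I
haven't verified:

    <!-- mapred-site.xml on the submitting host; values are placeholders -->
    <property>
      <name>mapreduce.application.framework.path</name>
      <!-- MR framework tarball staged on HDFS, localized under the alias "mr-framework" -->
      <value>hdfs:///mapred/framework/hadoop-mapreduce-2.6.0.tar.gz#mr-framework</value>
    </property>
    <property>
      <name>mapreduce.application.classpath</name>
      <!-- tasks resolve MR classes from the localized tarball rather than
           whatever happens to be installed on the node -->
      <value>$PWD/mr-framework/hadoop-mapreduce-2.6.0/share/hadoop/mapreduce/*,$PWD/mr-framework/hadoop-mapreduce-2.6.0/share/hadoop/mapreduce/lib/*</value>
    </property>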

I was thinking more of the case where we have to bump our version of Guava
because our version and Hadoop's version are mutually incompatible, causing
compilation failures, runtime failures, or both. This was a thing once. Would
it be possible to have different dependencies specified for the client and
server Maven projects? I suppose we could hack this, though it would be ugly.
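
One way to hack it, just as a sketch: a separate shaded client module (the
module name and relocated package below are made up) could bundle its own
Guava and relocate it with the shade plugin, so the client artifact stops
caring which Guava Hadoop drags in:

    <!-- hypothetical pom.xml fragment for an "hbase-shaded-client" style module -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <relocations>
              <relocation>
                <!-- hide our Guava under a private package so the copy on the
                     Hadoop classpath can't collide with it -->
                <pattern>com.google.common</pattern>
                <shadedPattern>org.apache.hbase.shaded.com.google.common</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>

It's ugly in the way you'd expect: downstreams would have to switch to the
shaded artifact, and it does nothing for coprocessors, which still run on the
server classpath.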


On Fri, Mar 13, 2015 at 9:43 AM, Sean Busbey <bus...@cloudera.com> wrote:

> On Fri, Mar 13, 2015 at 11:18 AM, Andrew Purtell <apurt...@apache.org>
> wrote:
>
> > > I'm -1 (non-binding) on weakening our compatibility promises. The more
> > > we can isolate our users from the impact of changes upstream the better.
> >
> > We can't though in general. Making compatibility promises we can't keep
> > because our upstreams don't (see the dependencies section of Hadoop's
> > compatibility guidelines) is ultimately an untenable position. *If* we had
> > some complete dependency isolation for MapReduce and coprocessors committed
> > then this could be a different conversation. Am I misstating this?
> >
>
>
> > In this specific instance we do have another option, so we could defer
> > this to a later time when a really unavoidable dependency change happens...
> > like a Guava update affecting HDFS. (We had one of those before.) We can
> > document the Jackson classpath issue with Hadoop >= 2.6 and provide
> > remediation advice in the troubleshooting section of the manual.
> >
> >
> I think we can solve this generally for Hadoop 2.6.0+. There's no reason
> our HDFS usage should be exposed in the HBase client code, and I think the
> application classpath feature for YARN in that version can isolate us on
> the MR side. I am willing to do this work in time for 1.1. Realistically I
> don't know the timeline for that version yet. If it turns out the work is
> more involved or my time is more constrained than I think, I'm willing to
> accept promise weakening as a practical matter.
>
> I'd be much more comfortable weakening our dependency promises for
> coprocessors than doing it in general. Folks running coprocessors should
> already be more risk tolerant and familiar with our internals.
>
> For upstreams that don't have Hadoop's leverage over us, we solve this
> problem simply by not updating dependencies that we can't trust not to
> break our downstreams.
>
>
>
> > I would be disappointed to see a VOTE thread. That means we failed to
> > reach consensus and needed to fall back to process to resolve differences.
> >
> >
>
> That's fair. What about the wider audience issue on user@? There's no
> reason our DISCUSS threads couldn't go there as well.
>
>
>
> > Why don't we do the doc update and call it a day?
> >
>
> I've been burned many times in the past by dependency changes in projects I
> rely on, usually over changes in code sections that folks didn't think were
> likely to be used. So I'm very willing to do work now to save downstream
> users of HBase that same headache.
>
> --
> Sean
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
