I also just snuck in that Hadoop 1/2 compatibility fix with JobContext
(ACCUMULO-1421). Not sure if that's the only change needed, but it should
be a step forward.

Adam



On Thu, May 16, 2013 at 11:23 AM, Eric Newton <eric.new...@gmail.com> wrote:

> I've snuck some necessary changes in... doing integration testing on it
> right now.
>
> -Eric
>
>
>
> On Wed, May 15, 2013 at 8:03 PM, John Vines <vi...@apache.org> wrote:
>
> > I will gladly do it next week, but I'd rather not have it delay the
> > release. The question from there is: is this type of packaging change
> > too large to put in 1.5.1?
> >
> >
> > On Wed, May 15, 2013 at 2:44 PM, Christopher <ctubb...@apache.org> wrote:
> >
> > > So, I think that'd be great, if it works, but who is willing to do
> > > this work and get it in before I make another RC?
> > > I'd like to cut RC3 tomorrow if I have time, so feel free to patch
> > > these in before then... or by the next RC, if RC3 fails to pass a vote.
> > >
> > > --
> > > Christopher L Tubbs II
> > > http://gravatar.com/ctubbsii
> > >
> > >
> > > On Wed, May 15, 2013 at 5:31 PM, Adam Fuchs <afu...@apache.org> wrote:
> > > > It seems like the ideal option would be to have one binary build that
> > > > determines the Hadoop version and switches appropriately at runtime.
> > > > Has anyone attempted to do this yet, and do we have an enumeration of
> > > > the places in Accumulo code where the incompatibilities show up?
> > > >
> > > > One of the incompatibilities is that
> > > > org.apache.hadoop.mapreduce.JobContext switched from an abstract class
> > > > (Hadoop 1) to an interface (Hadoop 2). This can be fixed with something
> > > > to the effect of:
> > > >
> > > >   public static Configuration getConfiguration(JobContext context) {
> > > >     Configuration configuration = null;
> > > >     try {
> > > >       // Load JobContext by name and invoke getConfiguration reflectively,
> > > >       // so the compiled bytecode contains no direct call against it.
> > > >       Class<?> c = TestCompatibility.class.getClassLoader()
> > > >           .loadClass("org.apache.hadoop.mapreduce.JobContext");
> > > >       Method m = c.getMethod("getConfiguration");
> > > >       Object o = m.invoke(context);
> > > >       configuration = (Configuration) o;
> > > >     } catch (Exception e) {
> > > >       throw new RuntimeException(e);
> > > >     }
> > > >     return configuration;
> > > >   }
> > > >
> > > > Based on a test I just ran, using that getConfiguration method instead
> > > > of calling getConfiguration directly on the context avoids this one
> > > > incompatibility. Maybe with a couple more changes like that we can get
> > > > down to one bytecode release for all known Hadoop versions?
> > > >
> > > > Adam
> > >
> >
>
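The trick Adam describes works because a reflective Method.invoke carries no
compile-time binding (invokevirtual vs. invokeinterface) against the target
type, so the same bytecode runs whether JobContext is a class or an interface.
Here is a minimal self-contained sketch of the same pattern against a plain JDK
interface; the class and method names are illustrative stand-ins, not from
Accumulo or Hadoop:

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

public class ReflectiveCall {
  // Hypothetical helper: invoke a no-argument method reflectively so the
  // compiled bytecode has no direct call site against the named type. This
  // mirrors the getConfiguration workaround in the thread above.
  static Object invokeNoArg(Object target, String className, String methodName) {
    try {
      Class<?> c = ReflectiveCall.class.getClassLoader().loadClass(className);
      Method m = c.getMethod(methodName);
      return m.invoke(target);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    // java.util.List stands in for JobContext; size() for getConfiguration().
    List<String> list = Arrays.asList("a", "b", "c");
    Object size = invokeNoArg(list, "java.util.List", "size");
    System.out.println(size); // prints 3
  }
}
```

The cost is a reflection call per invocation and the loss of compile-time type
checking, which is why it only makes sense at the handful of known
incompatibility points rather than everywhere.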
