Has anyone put any thought into how we're going to release 1.5, considering
the special cases needed for the various hadoop releases? I'm not only
talking about distributions, but also the jars released to central.
--
Cheers
~John
On Thu, Apr 25, 2013 at 1:48 PM, John Vines wrote:
> Has anyone put any thought into how we're going to release 1.5, considering
> the special cases needed for the various hadoop releases? I'm not only
> talking about distributions, but also the jars released to central.
>
>
Does compiling against
Yes
On Thu, Apr 25, 2013 at 1:56 PM, Keith Turner wrote:
> On Thu, Apr 25, 2013 at 1:48 PM, John Vines wrote:
>
> > Has anyone put any thought into how we're going to release 1.5,
> considering
> > the special cases needed for the various hadoop releases? I'm not only
> > talking about distributions, but also the jars released to central.
On Thu, Apr 25, 2013 at 2:03 PM, John Vines wrote:
> Yes
>
Ok, I vaguely remember discussion of this on a ticket or in mailing list.
Do you know the details? Is this caused by something hadoop is doing, or
is it how we are using Hadoop? Can we change something in Accumulo to
avoid this?
>
>
So, I have a process in place for releasing the tarballs, rpms, debs,
jars, PDFs, etc. using the maven-release-plugin, which signs and seals
everything and deploys to the staging repository for voting. I'm still
polishing it before I commit it.
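For anyone curious what that looks like in the POM, a release-plugin setup
along these lines would do it. This is a sketch, not the committed
configuration; it assumes the ASF parent POM's apache-release profile, which
GPG-signs artifacts and builds the source tarball during release:perform:

```xml
<!-- Sketch only; the actual committed configuration may differ -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-release-plugin</artifactId>
  <configuration>
    <!-- keep all modules on one version when tagging -->
    <autoVersionSubmodules>true</autoVersionSubmodules>
    <!-- apache-release (from the ASF parent POM) signs with GPG and
         assembles the source-release archive -->
    <releaseProfiles>apache-release</releaseProfiles>
  </configuration>
</plugin>
```

With that in place, release:prepare tags the release and release:perform
deploys the signed artifacts to the ASF staging repository for the vote.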
However, I've not figured out the best way to generate
What about CDH3U5+ and CDH4? They require some specialized packaging as
well.
On Thu, Apr 25, 2013 at 2:32 PM, Christopher wrote:
> So, I have a process in place for releasing the tarballs, rpms, debs,
> jars, PDFs, etc. using the maven-release-plugin, that signs and seals
> everything and deploys to the staging repository for voting.
Like what?
Are our hadoop1 and hadoop2 artifacts not binary compatible with those?
In any case, I think that's why it's important to offer a
source-release... we shouldn't be trying to build separate artifacts
for every possible 3rd party variant of Hadoop. So long as there's a
path forward for t
On Thu, Apr 25, 2013 at 2:54 PM, John Vines wrote:
> What about CDH3U5+ and CDH4? They require some specialized packaging as
> well.
>
Maybe only Apache Hadoop should be supported by Apache Accumulo? Cloudera
could package a downstream version of Accumulo that works w/ their
downstream version of Hadoop.
I agree that we should be prioritizing compatibility with Apache Hadoop
in our official releases.
I believe documenting some procedures to build against every other 3rd
party version is acceptable/sufficient, since we have the sources out
there too. I'm also using the word "documenting" very loosely.
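For reference, the usual Maven way to make such a documented build procedure
workable is a property-driven Hadoop dependency. The property name and
version values below are illustrative assumptions, not the actual POM:

```xml
<!-- Hypothetical sketch: property and version values are illustrative -->
<properties>
  <hadoop.version>1.0.4</hadoop.version>
</properties>

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>${hadoop.version}</version>
  <scope>provided</scope>
</dependency>
```

A vendor build would then just override the property on the command line,
e.g. something like `mvn package -Dhadoop.version=2.0.0-cdh4.2.0`.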
Except we need to consider accessibility and the amount of pain we may be
inflicting upon ourselves.
CDH is used by a lot of people, so keeping barriers in place that slow
down trials by users is going to hurt us. And we're also going to be hurt
by those users, and the ones running hadoop 2, because
On Thu, Apr 25, 2013 at 3:46 PM, John Vines wrote:
> Except we need to consider accessibility and the amount of pain we may be
> inflicting upon ourselves.
>
> CDH is used by a lot of people, so keeping barriers in place that slow
> down trials by users is going to hurt us. And we're also going
I don't think there are any issues with having binary-compatible
releases as it's the same source underneath.
In other words, our source doesn't change whether we compile against
CDH, HDP, Apache, etc. That makes me think that we should be fine in
creating binary-only releases for the Hadoop o
On Thu, Apr 25, 2013 at 4:06 PM, Josh Elser wrote:
> I don't think there are any issues with having binary-compatible releases
> as it's the same source underneath.
>
> In other words, our source doesn't change whether we compile against CDH,
> HDP, Apache, etc. That makes me think that we should
On Thu, Apr 25, 2013 at 4:30 PM, Keith Turner wrote:
> On Thu, Apr 25, 2013 at 4:06 PM, Josh Elser wrote:
>
> > I don't think there are any issues with having binary-compatible releases
> > as it's the same source underneath.
> >
> > In other words, our source doesn't change whether we compile a
On Thu, Apr 25, 2013 at 4:41 PM, Benson Margulies wrote:
> On Thu, Apr 25, 2013 at 4:30 PM, Keith Turner wrote:
>
> > On Thu, Apr 25, 2013 at 4:06 PM, Josh Elser
> wrote:
> >
> > > I don't think there are any issues with having binary-compatible
> releases
> > > as it's the same source underneath
Does it make sense to put vendor-specific stuff under a contribs/vendors
directory? Doing so would certainly indicate that we are vendor-agnostic,
and it would give vendors an obvious place to contribute.
I'm not sure we are talking about actual vendor-specific code. We are
deciding whether or not to create additional release tarballs that have
been compiled against various vendors' Hadoop-compatible file systems.
Assuming that we determine there is nothing prohibiting us from doing this,
I think i
I had issues running a hadoop2-compiled version of Accumulo against CDH4; I
can't remember the specifics, though.
When I said specialized packaging, I was thinking of a naming convention to
distinguish hadoop1 vs. hadoop2 ( vs. vendor-specific hadoop) compiled jars.
On Fri, Apr 26, 2013 at 1:35 PM, John Vines wrote:
> I had issues running a hadoop2-compiled version of Accumulo against CDH4;
> I can't remember the specifics, though.
>
I would hope that would be due to Hadoop 2's alpha state. I guess we'll
have to wait and see.
>
> When I said specialized packaging, I was thinking of a naming convention
> to distinguish hadoop1 vs. hadoop2 compiled jars.
John, the preferred naming convention is to use classifiers (Maven
terminology), which results in file names such as
<artifactId>-<version>-<classifier>.jar; but this is best done as a
conscious decision to produce multiple variants of the same artifact.
It doesn't work that well in Maven when you have to recompile the same
artifact
Funny enough, I got hit by these shenanigans last night when I was trying
to run trunk against CDH3 locally. After working through jars that were
marked as provided and weren't, and then running into
https://issues.apache.org/jira/browse/ACCUMULO-837, I threw in the towel
and called it a night.
I've always been an advocate of sticking to vanilla compatibility, while
maintaining the ability to be compatible with other versions. Hadoop 2ish
things are the first case where we are beginning to see broken run-time
compatibility due to some API changes. While the fragmented state of hadoop
creates a
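The broken run-time compatibility mentioned above is the NoSuchMethodError
class of problem: a jar compiled against one Hadoop version links against a
signature the other version doesn't have. One common workaround is a
reflection guard that probes for the API at runtime. This is an illustrative
sketch, not Accumulo code, and it probes a JDK class so it runs anywhere:

```java
// Sketch: the kind of reflection guard that lets a single binary cope with
// an API that differs between Hadoop 1 and Hadoop 2. Class and method names
// here are stand-ins, not real Accumulo or Hadoop identifiers.
public class CompatCheck {
    // Returns true if the named class declares a public no-arg method
    // with the given name; false if the class or method is absent.
    static boolean hasMethod(String className, String methodName) {
        try {
            Class.forName(className).getMethod(methodName);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in probes against java.lang.String so the sketch is runnable.
        System.out.println(hasMethod("java.lang.String", "isEmpty"));   // true
        System.out.println(hasMethod("java.lang.String", "noSuchApi")); // false
    }
}
```

Code would branch on such a check to call whichever variant of the changed
API is actually present, instead of failing at link time.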
I would also like to point out that HBase is putting out separate releases
for hadoop1 and hadoop2 (
http://www.apache.org/dyn/closer.cgi/hbase/hbase-0.95.0). They also have
support for both via Maven; however, they implemented a compatibility module
(https://issues.apache.org/jira/browse/HBASE-6405).
I would love to deploy additional artifacts using classifiers for
hadoop2. We may be able to support that for the jar artifacts in
Maven, with some minor profile tweaks to the POM. (Apache
infrastructure actually allows you to deploy many artifacts to a
staging repo, before closing that staging repo.)
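Concretely, the "minor profile tweaks" could look something like the
fragment below. The profile id, activation property, and classifier name are
assumptions for illustration, not the actual POM; the effect is that a
hadoop2 build attaches its jar under a classifier, so the deployed file is
named like accumulo-core-1.5.0-hadoop2.jar alongside the default artifact:

```xml
<!-- Sketch: profile and property names are assumptions -->
<profile>
  <id>hadoop-2.0</id>
  <activation>
    <property>
      <name>hadoop.profile</name>
      <value>2.0</value>
    </property>
  </activation>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <configuration>
          <!-- yields artifactId-version-hadoop2.jar -->
          <classifier>hadoop2</classifier>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```

Both jars can then sit in the same staging repo and be voted on together.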
I am more than content with that assessment.
On Tue, May 7, 2013 at 11:23 AM, Christopher wrote:
> I would love to deploy additional artifacts using classifiers for
> hadoop2. We may be able to support that for the jar artifacts in
> Maven, with some minor profile tweaks to the POM. (Apache
> in
How many people are working full-time on HBase development?
On Tue, May 7, 2013 at 11:28 AM, John Vines wrote:
> I am more than content with that assessment
>
>
> On Tue, May 7, 2013 at 11:23 AM, Christopher wrote:
>
> > I would love to deploy additional artifacts using classifiers for
> > hadoop2.