I agree that we should prioritize compatibility with Apache Hadoop
in our official releases.
I believe documenting procedures to build against other 3rd-party
versions is acceptable/sufficient, since the sources are out there
too. I'm also using the word "documenting" very loosely -- a page
on our site, a README with Maven commands, or even just an email to
this list (indexed by search engines).
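For example, the documented procedure could be as small as a single
command along these lines (just a sketch; the property name and value
are placeholders for whatever the build ends up exposing, not
something we have today):

    # compile and package against a Hadoop 2 release instead of the
    # default Hadoop 1 line (the hadoop.profile property is hypothetical)
    mvn clean package -Dhadoop.profile=2.0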
On 4/25/13 3:32 PM, Keith Turner wrote:
On Thu, Apr 25, 2013 at 2:54 PM, John Vines <[email protected]> wrote:
What about CDH3U5+ and CDH4? They also require some specialized
packaging.
Maybe only Apache Hadoop should be supported by Apache Accumulo? Cloudera
could package a downstream version of Accumulo that works w/ their
downstream version of Hadoop if they wanted.
On Thu, Apr 25, 2013 at 2:32 PM, Christopher <[email protected]> wrote:
So, I have a process in place for releasing the tarballs, rpms, debs,
jars, PDFs, etc. using the maven-release-plugin; it signs and seals
everything and deploys it to the staging repository for voting. I'm
still polishing it before I commit it.
However, I haven't figured out the best way to generate and release
the hadoop2 variants. If they are released, they should carry a
classifier indicating they are built for hadoop2, but our build isn't
set up to produce two artifacts per module, and our scripts can't
handle artifacts that have classifiers.
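For reference, this is roughly what a classified artifact would look
like to a downstream consumer (the version and classifier here are
only illustrative):

    <dependency>
      <groupId>org.apache.accumulo</groupId>
      <artifactId>accumulo-core</artifactId>
      <version>1.5.0</version>
      <classifier>hadoop2</classifier>
    </dependency>

The catch is that every module would have to attach that second,
classified artifact during the same build, which is exactly what our
build and scripts can't do today.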
My opinion is that we should release for Hadoop 1.0, but support
building from source against 2.0. Since 2.0 is still in beta, this
seems acceptable to me, and we can work on better packaging support
for 2.0 in Accumulo 1.6.0, with tickets such as ACCUMULO-210.
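As a rough sketch of what "support building from source against 2.0"
could look like in the POM (the profile ids, the hadoop.profile
property, and the versions below are placeholders, not a finished
design):

    <profiles>
      <!-- default: compile against the Hadoop 1 line -->
      <profile>
        <id>hadoop-1.0</id>
        <activation>
          <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
          <hadoop.version>1.0.4</hadoop.version>
        </properties>
      </profile>
      <!-- opt-in: compile against the Hadoop 2 beta line,
           e.g. mvn clean package -Dhadoop.profile=2.0 -->
      <profile>
        <id>hadoop-2.0</id>
        <activation>
          <property>
            <name>hadoop.profile</name>
            <value>2.0</value>
          </property>
        </activation>
        <properties>
          <hadoop.version>2.0.4-alpha</hadoop.version>
        </properties>
      </profile>
    </profiles>

The hadoop.version property would then feed whatever Hadoop dependency
declarations the modules use; only the convenience binaries we vote on
would be built against 1.0.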
--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Thu, Apr 25, 2013 at 2:09 PM, Keith Turner <[email protected]> wrote:
On Thu, Apr 25, 2013 at 2:03 PM, John Vines <[email protected]> wrote:
Yes
Ok, I vaguely remember discussion of this on a ticket or on the
mailing list.
Do you know the details? Is this caused by something Hadoop is doing,
or is it how we are using Hadoop? Can we change something in Accumulo
to avoid this?
On Thu, Apr 25, 2013 at 1:56 PM, Keith Turner <[email protected]> wrote:
On Thu, Apr 25, 2013 at 1:48 PM, John Vines <[email protected]> wrote:
Has anyone put any thought into how we're going to release 1.5,
considering the special cases needed for the various Hadoop releases?
I'm not only talking about distributions, but also the jars released
to Maven Central.
Does compiling against Hadoop 1 result in Accumulo class files that
will not work w/ Hadoop 2?
--
Cheers
~John