Looking at [1], specifically the overview section, I think they are the
same metrics just accessible via JMX instead of configuring a sink.
[1]
https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/metrics2/package-summary.html
On Wed, Oct 12, 2022 at 1:39 PM Christopher wrote:
I don't think we're doing anything special to publish to JMX. I think this
is something that is a feature of Hadoop Metrics2 that we're simply
enabling. So, this might be a question for the Hadoop general mailing list
if nobody knows the answer here.
On Wed, Oct 12, 2022 at 1:06 PM Logan Jones
Hello:
I'm trying to figure out more about the metrics coming out of Accumulo
1.9.3 and 1.10.2. I'm currently configuring the hadoop metrics 2 system and
sending that to influxDB. In theory, I could also look at the JMX metrics.
Are the JMX metrics a superset of what comes out of Hadoop Metrics2?
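For anyone following along, Metrics2 sinks are wired through a properties file; a minimal sketch, assuming the Hadoop-provided FileSink (the `accumulo` prefix and file names here are illustrative, and an InfluxDB sink would be a third-party class plugged into the same slot):

```properties
# Sketch of hadoop-metrics2 sink wiring (prefix and filenames illustrative).
# Collect every 60 seconds.
*.period=60
# FileSink ships with Hadoop; an InfluxDB sink would be a custom or
# third-party sink class configured the same way.
accumulo.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
accumulo.sink.file.filename=accumulo-metrics.out
```

Per the Metrics2 overview linked above, the same metrics sources are exposed over JMX regardless of which sinks are configured, which is consistent with the "same metrics, different transport" reading.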
Since Accumulo doesn't bundle Hadoop into the release, the only
difference this makes is whether or not it breaks our builds during
testing, which could indicate a bug in Hadoop, or an incompatibility
with that version of Hadoop. The version of Accumulo built with 3.3.0
should work perfectly fine
Hi,
I was wondering if the dev team would reconsider using the Hadoop 3.3.1
version for the next release version of Accumulo. I noticed that the hadoop
dependency version was updated to 3.3.1 briefly by
commit 3c3a91f7a4b6ea290a383a77844cabae34eaeb1f, but it was dropped back to
3.3.0 in commit
Good news: after fixing up the classpath to:
$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/common/lib/(?!slf4j)[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/hdfs/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/mapreduce/[^.].*.jar,
$HADOOP_PREFIX/share
swap out Guava 27.0-jre with 27.0-android
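The Guava swap mentioned above is a one-line version pin; a sketch using the standard Maven Central coordinates (placing it in dependencyManagement is an assumption, not taken from the build in question):

```xml
<!-- Sketch: pin the android flavor of the same Guava release. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>27.0-android</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```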
On Wed, Nov 20, 2019 at 9:51 PM Arvind Shyamsundar
wrote:
>
> Hello!
> Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x
> with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9
> b
Thanks, Keith, for all your inputs. FYI this cluster was deployed via Muchos
and that accumulo-site template has:
$HADOOP_PREFIX/share/hadoop/common/[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/common/lib/(?!slf4j)[^.].*.jar,
$HADOOP_PREFIX/share/hadoop/hdfs/[^.].*.jar
Can you check that your accumulo-site.xml only adds
$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar for hadoop deps for the
setting general.classpaths? Not completely sure, but I think this
will use the hadoop shaded jars.
Do not want the non-shaded hadoop jars like
$HADOOP_PREFIX/share/hadoop
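Spelled out, the suggestion amounts to an accumulo-site.xml entry along these lines (a sketch: the regex value is copied from the message above, and all non-Hadoop classpath entries are elided):

```xml
<property>
  <name>general.classpaths</name>
  <!-- Only the shaded hadoop client jars; other entries elided. -->
  <value>$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar</value>
</property>
```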
Another possible path to solve this is with a different classpath and
dependency for hadoop 3. In Accumulo 2.0 we depend on the hadoop
client shaded jar, which has its own shaded and relocated version of
Guava internally. Using the Hadoop shaded jar would solve this
problem. Not sure what
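For Maven builds, the Hadoop 3 shaded client described above is published as its own artifacts; a sketch with real Maven Central coordinates (the 3.3.0 version is taken from the thread, not a recommendation):

```xml
<!-- hadoop-client-api/runtime bundle a relocated Guava, avoiding the clash. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.3.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.3.0</version>
  <scope>runtime</scope>
</dependency>
```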
I looked at the history[1] of the hadoop project pom and found that
HADOOP-16213[2] seems to be the cause of this change. So it seems like
we need to bump the guava version if we want to work with newer
versions of Hadoop 3.
One of the goals of 1.9 (and I think 1.10) is to be a bridge version
Hello!
Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x
with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9
branch and built with -Dhadoop.profile=3. When I deployed this "custom"
Accumulo build with Hadoop 3.1.3, accumulo init failed
ed early on when we'll drop hadoop 2
> support.
> >
> > As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3
> supported in 1.y releases as of 1.9.0. That gives an upgrade path so that
> folks won't have to upgrade both Hadoop and Accumulo at the same time.
Yeah, if Hadoop has changed their stance, propagating a "use as your own
risk" would be sufficient from our end.
On 3/1/18 6:06 PM, Christopher wrote:
If there's a risk, I'd suggest calling things out as "experimental" in the
release notes, and encourage users to try it
ng our master branch over to
> apache hadoop 3 only (see related discussion [1]), I noticed some wording
> on the last RC[2] for Hadoop 3.0.1:
>
> > Please note:
> > * HDFS-12990. Change default NameNode RPC port back to 8020. It makes
> > incompatible changes to Hadoop
hi folks!
While reviewing things in prep for getting our master branch over to apache
hadoop 3 only (see related discussion [1]), I noticed some wording on the last
RC[2] for Hadoop 3.0.1:
> Please note:
> * HDFS-12990. Change default NameNode RPC port back to 8020. It makes
> inc
+1
On Tue, Feb 27, 2018 at 10:42 AM, Sean Busbey <bus...@apache.org> wrote:
> Let's get the discussion started early on when we'll drop hadoop 2 support.
>
> As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported in
> 1.y releases as of 1.9.0. That gives
ome change.
On 2/27/18 10:42 AM, Sean Busbey wrote:
Let's get the discussion started early on when we'll drop hadoop 2 support.
As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported in
1.y releases as of 1.9.0. That gives an upgrade path so that folks won't have
to up
+1 I'm in favor of this.
On Tue, Feb 27, 2018 at 10:42 AM Sean Busbey <bus...@apache.org> wrote:
> Let's get the discussion started early on when we'll drop hadoop 2 support.
>
> As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported
> in 1.y releases as of
Let's get the discussion started early on when we'll drop hadoop 2 support.
As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported in
1.y releases as of 1.9.0. That gives an upgrade path so that folks won't have
to upgrade both Hadoop and Accumulo at the same time.
How about
There is a 3.0.0-alpha4 release currently available as a non-snapshot
version.
I'm not sure it comes with API stability guarantees at all, IIRC the Hadoop
community is planning on providing that for their betas.
Mike
On Thu, Aug 3, 2017 at 12:17 PM, Christopher <ctubb...@apache.org>
+1 from me, too, but I'd like to review what actually changes in master for
the migration to happen. I don't know much about Hadoop 3. I'm curious what
the releases will look like (AFAIK, it's only snapshot builds right now; is
that correct?), how our dependencies will change, and what API
+1 sounds like a good idea to me.
On 8/3/17 10:08 AM, Sean Busbey wrote:
Hi Folks!
I think we need to start being more formal in planning for Hadoop 3.
They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].
What do folks think about starting to push on an Accumulo 2.0
I am in favor of going to Hadoop 3 for Accumulo 2. If we do this then
Accumulo 2 can not release until after Hadoop 3 does. Any idea when
Hadoop 3 will release?
On Thu, Aug 3, 2017 at 10:08 AM, Sean Busbey <bus...@cloudera.com> wrote:
> Hi Folks!
>
> I think we need to start be
All,
For what it's worth, I'd attempted to run both 1.7.x and 1.8.x on Hadoop
3 and ran into a fairly straightforward dependency issue [1] that, when
addressed, should allow current Accumulo versions to run on Hadoop 3.
Hopefully this means that it's not a large lift to get to the point you're
Hi Folks!
I think we need to start being more formal in planning for Hadoop 3.
They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].
What do folks think about starting to push on an Accumulo 2.0 release that
only supports Hadoop 3? It would let us move faster, which we'll need
should develop a Java application using NetBeans which in turn uses
> Accumulator, Hadoop and Zookeeper, I wanted to ask you what is the correct
> configuration for doing this with NetBeans, such as what libraries to use
> and more.
> Thanks in advance for your time and e
Hello everyone,
I should say up front that I have never worked with Accumulo;
I should develop a Java application using NetBeans which in turn uses
Accumulator, Hadoop and Zookeeper, I wanted to ask you what is the correct
configuration for doing this with NetBeans, such as what libraries to use
and more.
Thanks
ook/content/format_and_start_hdfs.html>
> to
> format namenode of hadoop but that doesn't seems to work. My question is
> how can I remove all tables and have fresh start with accumulo datastore?
> Any suggestion is welcomed. Tons of thanks in advanced.
>
> Thank You
> Suresh Prajapati
>
Github user asfgit closed the pull request at:
https://github.com/apache/accumulo-testing/pull/3
---
-// http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
-config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
-config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.
GitHub user mikewalch opened a pull request:
https://github.com/apache/accumulo-testing/pull/3
ACCUMULO-4579 Fixed hadoop config bug in accumulo-testing
* TestEnv now returns Hadoop configuration that is loaded from config files
but expects HADOOP_PREFIX to be set
My recent blog post about running Accumulo on Fedora 25 describes how to do
this using the RawLocalFileSystem implementation of Hadoop for Accumulo
volumes matching file://
https://accumulo.apache.org/blog/2016/12/19/running-on-fedora-25.html
This works with other packaging also, not just
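The volume setup the post describes boils down to two properties; a sketch (the local path is hypothetical, and the blog post above is the authoritative walkthrough):

```xml
<!-- Accumulo volume on a plain local filesystem (path is illustrative). -->
<property>
  <name>instance.volumes</name>
  <value>file:///var/lib/accumulo</value>
</property>
<!-- Use RawLocalFileSystem so file:// URIs skip the checksum wrapper. -->
<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.RawLocalFileSystem</value>
</property>
```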
>>> > /java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
>>> >
>>> >
>>
>> Hi Josh, are you saying that the ChecksumFileSystem is required or
>> forbidden for WAL recovery? Looking at the Hadoop code it seems that
>> LocalFileSyst
ly around WAL recovery.
>
> https://github.com/apache/accumulo/blob/master/test/src/main
> /java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
>
>
Hi Josh, are you saying that the ChecksumFileSystem is required or
forbidden for WAL recovery? Looking at the Hadoop code it se
he/accumulo/blob/master/test/src/main
> /java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
>
>
Hi Josh, are you saying that the ChecksumFileSystem is required or
forbidden for WAL recovery? Looking at the Hadoop code it seems that
LocalFileSystem wraps around a RawLocalFileSystem to
Accumulo is using something in Hadoop outside of the public client API, this
may not work.
[1] https://github.com/quantcast/qfs
[2] https://github.com/quantcast/qfs/wiki/Migration-Guide
-Original Message-
From: Dylan Hutchison [mailto:dhutc...@cs.washington.edu]
Sent: Monday, January 16, 20
]). If
Accumulo is using something in Hadoop outside of the public client API, this
may not work.
[1] https://github.com/quantcast/qfs
[2] https://github.com/quantcast/qfs/wiki/Migration-Guide
> -Original Message-
> From: Dylan Hutchison [mailto:dhutc...@cs.washington.edu]
Hi folks,
A friend of mine asked about running Accumulo on a normal file system in
place of Hadoop, similar to the way MiniAccumulo runs. How possible is
this, or how much work would it take to do so?
I think my friend is just interested in running on a single node, but I am
curious about both
On Fri, Jul 8, 2016 at 5:05 PM Sean Busbey <bus...@cloudera.com> wrote:
> On Fri, Jul 8, 2016 at 3:40 PM, Christopher <ctubb...@apache.org> wrote:
> > On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey <bus...@cloudera.com> wrote:
> >> Would we be bumping the Hadoop
Sean Busbey wrote:
On Fri, Jul 8, 2016 at 3:40 PM, Christopher<ctubb...@apache.org> wrote:
On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey<bus...@cloudera.com> wrote:
Would we be bumping the Hadoop version while incrementing our minor
version number or our major version number?
On Fri, Jul 8, 2016 at 3:40 PM, Christopher <ctubb...@apache.org> wrote:
> On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey <bus...@cloudera.com> wrote:
>> Would we be bumping the Hadoop version while incrementing our minor
>> version number or our major version number?
>
I'm sure I know some people trying to use Accumulo+HDFS tracing, and it's
going to cause a problem no matter what, because Hadoop and Accumulo aren't
always upgraded at the same time. I just want to make sure it gets better
at some point, if both are sufficiently up-to-date.
Backporting patches
to preserve use of the older HTrace.
On Thu, Jul 7, 2016 at 5:30 PM Billie Rinaldi <billie.rina...@gmail.com>
wrote:
> I'm in favor of bumping our Hadoop version to 2.7. We are already on the
> same htrace version as Hadoop 2.7. (The discussion in ACCUMULO-4171 is
> relevant to Hadoo
I'm in favor of bumping our Hadoop version to 2.7. We are already on the
same htrace version as Hadoop 2.7. (The discussion in ACCUMULO-4171 is
relevant to Hadoop 2.8 and later.)
Billie
On Thu, Jul 7, 2016 at 2:20 PM, Christopher <ctubb...@apache.org> wrote:
> Thinking ab
Thinking about https://issues.apache.org/jira/browse/ACCUMULO-4171, I'm of
the opinion that we should probably bump our Hadoop version to 2.7 and
HTrace version to what Hadoop is using, to keep them in sync.
Does anybody disagree?
1:50 PM Mike Drob <md...@mdrob.com> wrote:
> >
> > > I would not feel comfortable bumping the min req Hadoop in 1.7.2
> > >
> > > On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org>
> wrote:
> > >
> > > > Perhaps. But the
IIRC) using a dependency
> older than 2.6.1, how much can we really say 1.7.2 works on those older
> versions?
>
> On Thu, Jun 2, 2016 at 11:50 PM Mike Drob <md...@mdrob.com> wrote:
>
> > I would not feel comfortable bumping the min req Hadoop in 1.7.2
> >
> >
:
> I would not feel comfortable bumping the min req Hadoop in 1.7.2
>
> On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org> wrote:
>
> > Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that much
> > different in terms of support, so I figur
I would not feel comfortable bumping the min req Hadoop in 1.7.2
On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org> wrote:
> Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that much
> different in terms of support, so I figured go with the minimum
:04 PM Corey Nolet <cjno...@gmail.com> wrote:
> This may not be directly related but I've noticed Hadoop packages have been
> not uninstalling/updating well the past year or so. The last couple times
> I've run fedup, I've had to go back in manually and remove/update a bunch
> of t
This may not be directly related but I've noticed Hadoop packages have been
not uninstalling/updating well the past year or so. The last couple times
I've run fedup, I've had to go back in manually and remove/update a bunch
of the Hadoop packages like Zookeeper and Parquet.
On Thu, Jun 2, 2016
That first post was intended for the Fedora developer list. Apologies for
sending to the wrong list.
If anybody is curious, it seems the Fedora community support around Hadoop
and Big Data is really dying... the packager for Flume and HTrace has
abandoned their efforts to package for Fedora
Sorry, wrong list.
On Thu, Jun 2, 2016 at 4:20 PM Christopher <ctubb...@fedoraproject.org>
wrote:
> So, it would seem at some point, without me noticing (certainly my fault,
> for not paying attention enough), the Hadoop packages got orphaned and/or
> retired? in Fedora.
So, it would seem at some point, without me noticing (certainly my fault,
for not paying attention enough), the Hadoop packages got orphaned and/or
retired? in Fedora.
This is a big problem for me, because the main package I work on is
dependent upon Hadoop.
What's the state of Hadoop in Fedora
.el...@gmail.com> wrote:
> For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
> Hadoop didn't do anything screwy that they shouldn't have in a
> maintenance release...)
>
> I have not looked at deltas between 2.6.1 and 2.6.4
>
> Christopher wrote:
> > I w
For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
Hadoop didn't do anything screwy that they shouldn't have in a
maintenance release...)
I have not looked at deltas between 2.6.1 and 2.6.4
Christopher wrote:
I was looking at the recently bumped tickets and noticed
https
I was looking at the recently bumped tickets and noticed
https://issues.apache.org/jira/browse/ACCUMULO-4150
It seems to me that we may want to make our minimum supported Hadoop
version 2.6.1, at least for the 1.8.0 release.
That's not to say it won't work with other versions... just that it's
1.7 appear busted after Christopher's asf
pom
version update.
On May 26, 2016 11:11 PM, "Josh Elser"<josh.el...@gmail.com> wrote:
Looks like hadoop-1 is still having problems on 1.6.
-- Forwarded message --
From:<els...@apache.org>
Date: May 26, 2016 8:4
ter Christopher's asf
> pom
> >>> version update.
> >>> On May 26, 2016 11:11 PM, "Josh Elser"<josh.el...@gmail.com> wrote:
> >>>
> >>>> Looks like hadoop-1 is still having problems on 1.6.
> >>>> -- Forwarded message
"Josh Elser"<josh.el...@gmail.com> wrote:
Looks like hadoop-1 is still having problems on 1.6.
-- Forwarded message --
From:<els...@apache.org>
Date: May 26, 2016 8:40 PM
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
To:<josh.el...@gmail.com>
11:11 PM, "Josh Elser" <josh.el...@gmail.com> wrote:
>>
>> > Looks like hadoop-1 is still having problems on 1.6.
>> > -- Forwarded message --
>> > From: <els...@apache.org>
>> > Date: May 26, 2016 8:40
, 2016 11:11 PM, "Josh Elser" <josh.el...@gmail.com> wrote:
>
> > Looks like hadoop-1 is still having problems on 1.6.
> > -- Forwarded message --
> > From: <els...@apache.org>
> > Date: May 26, 2016 8:40 PM
> > Subject: Accumulo-Tes
Correction, all of 1.6 and 1.7 appear busted after Christopher's asf pom
version update.
On May 26, 2016 11:11 PM, "Josh Elser" <josh.el...@gmail.com> wrote:
> Looks like hadoop-1 is still having problems on 1.6.
> -- Forwarded message --
> From: <els..
Looks like hadoop-1 is still having problems on 1.6.
-- Forwarded message --
From: <els...@apache.org>
Date: May 26, 2016 8:40 PM
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
To: <josh.el...@gmail.com>
Cc:
Accumulo-Test-1.6-Hadoop-1 - Build # 10
Looks like the 2.1 upgrade failed on 1.6 with Hadoop-1. Did you happen
to notice this, Dave?
Original Message
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1029 - Unstable!
Date: Fri, 20 May 2016 17:29:15 +0000 (UTC)
From: els...@apache.org
To: josh.el...@gmail.com
Accumulo
sn't obvious by me asking)
>
>
> Claudia Rose wrote:
>
>> I'll be there although I don't know the other "folks" yet.
>>
>> -Original Message-
>> From: Josh Elser [mailto:josh.el...@gmail.com]
>> Sent: Thursday, May 19, 2016 11:01 AM
>>
to:josh.el...@gmail.com]
Sent: Thursday, May 19, 2016 11:01 AM
To: dev
Cc: u...@accumulo.apache.org
Subject: Accumulo folks at Hadoop Summit San Jose
Out of curiosity, are there going to be any Accumulo-folks at Hadoop Summit in
San Jose, CA at the end of June?
- Josh
I'll be there although I don't know the other "folks" yet.
-Original Message-
From: Josh Elser [mailto:josh.el...@gmail.com]
Sent: Thursday, May 19, 2016 11:01 AM
To: dev
Cc: u...@accumulo.apache.org
Subject: Accumulo folks at Hadoop Summit San Jose
Out of curiosity, are t
I'll be there! I'm looking into getting space for an Accumulo meetup
(details TBD).
On Thu, May 19, 2016 at 11:01 AM, Josh Elser <josh.el...@gmail.com> wrote:
> Out of curiosity, are there going to be any Accumulo-folks at Hadoop
> Summit in San Jose, CA at the end of June?
>
> - Josh
>
I'll be there.
Adam
On Thu, May 19, 2016 at 11:01 AM, Josh Elser <josh.el...@gmail.com> wrote:
> Out of curiosity, are there going to be any Accumulo-folks at Hadoop
> Summit in San Jose, CA at the end of June?
>
> - Josh
>
Out of curiosity, are there going to be any Accumulo-folks at Hadoop
Summit in San Jose, CA at the end of June?
- Josh
+3
v/r
Bob Thorman
Principal Big Data Engineer
AT&T Big Data CoE
2900 W. Plano Parkway
Plano, TX 75075
972-658-1714
On 2/12/15, 11:13 PM, Josh Elser josh.el...@gmail.com wrote:
FYI -- Billie and I have submitted a talk to Hadoop Summit 2015 in San
Jose, CA in June.
http
FYI -- Billie and I have submitted a talk to Hadoop Summit 2015 in San
Jose, CA in June.
http://hadoopsummit.uservoice.com/forums/283260-committer-track/suggestions/7073993-a-year-in-the-life-of-apache-accumulo
I'd be overjoyed if anyone would vote for the talk if they'd like to see
it happen
Bringing this out of the normal JIRA notification spam.
Your opinions are appreciated.
Original Message
Subject: [jira] [Commented] (ACCUMULO-1817) Use Hadoop Metrics2
Date: Wed, 3 Dec 2014 17:12:13 +0000 (UTC)
From: Josh Elser (JIRA) j...@apache.org
Reply-To: j...@apache.org
Did we make a decision on this?
--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Sun, Nov 16, 2014 at 11:16 PM, Christopher ctubb...@apache.org wrote:
On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com wrote:
Have we already decided to drop Hadoop 1 for 1.7.0? I can't
:16 PM, Christopher ctubb...@apache.org wrote:
On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com
wrote:
Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
anymore.
If we haven't decided to do so already, I'd like to suggest doing so.
- Josh
I
, Josh Elser josh.el...@gmail.com
mailto:josh.el...@gmail.com wrote:
Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
anymore.
If we haven't decided to do so already, I'd like to suggest
doing so.
- Josh
I thought
On Sun, Nov 16, 2014 at 11:16 PM, Christopher ctubb...@apache.org
mailto:ctubb...@apache.org wrote:
On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com
mailto:josh.el...@gmail.com wrote:
Have we already decided to drop Hadoop 1 for 1.7.0? I can't
I'd be +1 to drop Hadoop 1 from 1.7.
IIRC, we had loose consensus to drop it for 2.0. I don't know how much of
that support was because 2.0 is a very major version change.
It's worth keeping in mind that jdk6 support is also dropped as of the 1.7
release, which means for some people making
+cc user@
I don't believe a vote is necessary unless anyone actually has an
objection. Let's have a discussion and if there are any concerns by devs
at the end, we can call a vote.
Anyone should feel free to state their opinion on the subject.
Sean Busbey wrote:
I'd be +1 to drop Hadoop 1
On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com wrote:
Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
anymore.
If we haven't decided to do so already, I'd like to suggest doing so.
- Josh
I thought we had discussed this for 2.0.0, but that was when
https://issues.apache.org/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
rely on
* Changed to use codec 1.4 'shaHex
---
updated test info for ITs passing.
Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non
.
Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
rely
---
tests all good
Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
rely on
* Changed to use codec 1.4 'shaHex' method
Diffs
-
core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
---
updates per feedback.
Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior
for accumulo.
Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
rely on
* Changed to use codec 1.4 'shaHex' method
Diffs
-
core/src/main
/jira/browse/ACCUMULO-2791
Repository: accumulo
Description
---
ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
* Provide a core.util Base64 class to enforce the non-chunked behavior we
rely on
* Changed to use codec 1.4 'shaHex' method
Diffs
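The chunking behavior at issue can be illustrated without commons-codec at all; a sketch using the JDK's own java.util.Base64 (this is an analogy for the chunked-vs-non-chunked distinction, not the commons-codec API the patch touches):

```java
import java.util.Base64;

// Chunked vs. non-chunked Base64: a chunked (MIME-style) encoder inserts
// CRLF line breaks every 76 characters, while a non-chunked one emits a
// single line. Code that relies on one behavior breaks when the classpath
// supplies a codec with the other default -- the mismatch the dedicated
// core.util Base64 class above guards against.
public class Base64ChunkDemo {
    static String chunked(byte[] data) {
        return Base64.getMimeEncoder().encodeToString(data);  // 76-char lines
    }

    static String nonChunked(byte[] data) {
        return Base64.getEncoder().encodeToString(data);      // one long line
    }

    public static void main(String[] args) {
        byte[] data = new byte[100];  // long enough to force a line break
        System.out.println(chunked(data).contains("\r\n"));    // true
        System.out.println(nonChunked(data).contains("\r\n")); // false
    }
}
```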
Just announced, an Accumulo Birds of a Feather session at the Hadoop Summit:
http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179840512/
It looks like we have an hour and a half, exact schedule TBD. Feel free to
contact me if there is any particular content you'd like to see
for accumulo, Sean Busbey, Eric Newton, and Josh Elser.
Bugs: ACCUMULO-2566
https://issues.apache.org/jira/browse/ACCUMULO-2566
Repository: accumulo
Description
---
ACCUMULO-2566 Hadoop reflection for TeraSortIngest
ACCUMULO-2566 Add more Counter-based reflection
, Sean Busbey, Eric Newton, and Josh
Elser.
Changes
---
Found another place where we needed to reflect.
Summary (updated)
-
ACCUMULO-2564 Backport changes to unify Hadoop 1/2
Bugs: ACCUMULO-2564
https://issues.apache.org/jira/browse/ACCUMULO-2564
Repository: accumulo
-2564 Replace more hadoop 1/2 incompat
Found more instances of context.getConfiguration that need to be reflected
ACCUMULO-2564 Swap out AIF to use reflection
Use reflections to get the configuration from the context for hadoop 1/2
ACCUMULO-2564 Backport changes from ACCUMULO-1809
Author: Eric
, Sean Busbey, Eric Newton, and Josh
Elser.
Bugs: ACCUMULO-2564
https://issues.apache.org/jira/browse/ACCUMULO-2564
Repository: accumulo
Description
---
ACCUMULO-2564 Replace more hadoop 1/2 incompat
Found more instances of context.getConfiguration that need to be reflected
ACCUMULO
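The reflection trick described in these review summaries can be sketched as follows (the helper and the fake context class are hypothetical stand-ins, not the actual ACCUMULO-2564 code): TaskAttemptContext is a class in Hadoop 1 and an interface in Hadoop 2, so a direct context.getConfiguration() call binds incompatibly at compile time, while a runtime lookup works against both.

```java
import java.lang.reflect.Method;

// Sketch of the Hadoop 1/2 reflection workaround (hypothetical helper).
public class ContextReflect {

    // Look up getConfiguration() by name at runtime instead of binding at
    // compile time, so the same bytecode runs against both context types.
    static Object getConfiguration(Object context) {
        try {
            Method m = context.getClass().getMethod("getConfiguration");
            m.setAccessible(true);
            return m.invoke(context);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("context lacks getConfiguration()", e);
        }
    }

    // Stand-in for a Hadoop TaskAttemptContext, so the sketch runs
    // without Hadoop on the classpath.
    static class FakeContext {
        public String getConfiguration() { return "job-conf"; }
    }

    public static void main(String[] args) {
        System.out.println(getConfiguration(new FakeContext())); // job-conf
    }
}
```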
for accumulo, Sean Busbey, Eric Newton, and Josh Elser.
Bugs: ACCUMULO-2566
https://issues.apache.org/jira/browse/ACCUMULO-2566
Repository: accumulo
Description
---
ACCUMULO-2566 Hadoop reflection for TeraSortIngest
ACCUMULO-2566 Add more Counter-based reflection