Re: Hadoop Metrics2 and JMX

2022-10-13 Thread Logan Jones
Thank you all very much for the responses!

On Wed, Oct 12, 2022 at 2:06 PM Dave Marion  wrote:

> Looking at [1], specifically the overview section, I think they are the
> same metrics just accessible via JMX instead of configuring a sink.
>
> [1]
>
> https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/metrics2/package-summary.html
>
> On Wed, Oct 12, 2022 at 1:39 PM Christopher  wrote:
>
> > I don't think we're doing anything special to publish to JMX. I think this
> > is something that is a feature of Hadoop Metrics2 that we're simply
> > enabling. So, this might be a question for the Hadoop general mailing list
> > if nobody knows the answer here.
> >
> > On Wed, Oct 12, 2022 at 1:06 PM Logan Jones  wrote:
> >
> > > Hello:
> > >
> > > I'm trying to figure out more about the metrics coming out of Accumulo
> > > 1.9.3 and 1.10.2. I'm currently configuring the Hadoop Metrics2 system
> > > and sending that to InfluxDB. In theory, I could also look at the JMX
> > > metrics.
> > >
> > > Are the JMX metrics a superset of what comes out of Hadoop Metrics2?
> > >
> > > Thanks in advance,
> > >
> > > - Logan
> > >
> >
>


Re: Hadoop Metrics2 and JMX

2022-10-12 Thread Dave Marion
Looking at [1], specifically the overview section, I think they are the
same metrics just accessible via JMX instead of configuring a sink.

[1]
https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/metrics2/package-summary.html
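
For illustration, a Metrics2 sink is configured via a properties file; a
minimal sketch, assuming the hadoop-metrics2-accumulo.properties file and the
"accumulo" record prefix that Accumulo 1.x uses:

  # emit all Accumulo metrics records to a local file every 10 seconds
  *.period=10
  accumulo.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
  accumulo.sink.file.filename=accumulo-metrics.out

Metrics2 registers the same records as MBeans under the "Hadoop" JMX domain,
so no sink configuration is needed just to browse them in jconsole.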

On Wed, Oct 12, 2022 at 1:39 PM Christopher  wrote:

> I don't think we're doing anything special to publish to JMX. I think this
> is something that is a feature of Hadoop Metrics2 that we're simply
> enabling. So, this might be a question for the Hadoop general mailing list
> if nobody knows the answer here.
>
> On Wed, Oct 12, 2022 at 1:06 PM Logan Jones  wrote:
>
> > Hello:
> >
> > I'm trying to figure out more about the metrics coming out of Accumulo
> > 1.9.3 and 1.10.2. I'm currently configuring the Hadoop Metrics2 system and
> > sending that to InfluxDB. In theory, I could also look at the JMX metrics.
> >
> > Are the JMX metrics a superset of what comes out of Hadoop Metrics2?
> >
> > Thanks in advance,
> >
> > - Logan
> >
>


Re: Hadoop Metrics2 and JMX

2022-10-12 Thread Christopher
I don't think we're doing anything special to publish to JMX. I think this
is something that is a feature of Hadoop Metrics2 that we're simply
enabling. So, this might be a question for the Hadoop general mailing list
if nobody knows the answer here.

On Wed, Oct 12, 2022 at 1:06 PM Logan Jones  wrote:

> Hello:
>
> I'm trying to figure out more about the metrics coming out of Accumulo
> 1.9.3 and 1.10.2. I'm currently configuring the Hadoop Metrics2 system and
> sending that to InfluxDB. In theory, I could also look at the JMX metrics.
>
> Are the JMX metrics a superset of what comes out of Hadoop Metrics2?
>
> Thanks in advance,
>
> - Logan
>


Hadoop Metrics2 and JMX

2022-10-12 Thread Logan Jones
Hello:

I'm trying to figure out more about the metrics coming out of Accumulo
1.9.3 and 1.10.2. I'm currently configuring the Hadoop Metrics2 system and
sending that to InfluxDB. In theory, I could also look at the JMX metrics.

Are the JMX metrics a superset of what comes out of Hadoop Metrics2?

Thanks in advance,

- Logan


Re: Reconsider Hadoop 3.3.1 for 2.1.0 release

2022-05-19 Thread Christopher
Since Accumulo doesn't bundle Hadoop into the release, the only
difference this makes is whether or not it breaks our builds during
testing, which could indicate a bug in Hadoop, or an incompatibility
with that version of Hadoop. The version of Accumulo built with 3.3.0
should work perfectly fine with 3.3.1. If you want to actually build
with 3.3.1 on the classpath during compilation time, though, it's
trivial to add -Dhadoop.version=3.3.1 to the Maven command line.
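
For example, a sketch of that command line (hadoop.version being the property
the parent POM defines for this):

  mvn clean verify -Dhadoop.version=3.3.1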

I'm looking at the commit you referenced, and I don't see how that has
anything to do with addressing the test that was flaky.

If you are correct in your assessment that the test was overwhelming
MiniDFS, then that would still be a bug... perhaps a test bug... but
either way, the failure is not expected and should be addressed. If we
bump the version, we'd have to make sure it doesn't break the build...
and that means fixing whatever test issue was causing it. If you're
willing to contribute a pull request to do that work, we can consider
including it. However, as I said initially, it doesn't really
matter... the version specified in the POM is only the version being
built with/tested with by default... it's not bundled into Accumulo in
any way. It's just a fixed version whose API we expect to be
compatible with.

If we do bump it, it should be to 3.3.3, which is the latest 3.3 version.

On Thu, May 19, 2022 at 5:26 PM Chris Bevard  wrote:
>
> Hi,
>
> I was wondering if the dev team would reconsider using the Hadoop 3.3.1
> version for the next release version of Accumulo. I noticed that the hadoop
> dependency version was updated to 3.3.1 briefly by
> commit 3c3a91f7a4b6ea290a383a77844cabae34eaeb1f, but it was dropped back to
> 3.3.0 in commit 48679fef73e246de52fbeecad03f974f2116b97a shortly after.
> The explanation for undoing the change was that hadoop 3.3.1 was causing
> intermittent IT failures, most frequently in the CountNameNodeOpsBulkIT.
>
> I checked out that commit myself and also noticed that the
> CountNameNodeOpsBulkIT was failing often with an IOException "Unable to
> close file because the last block...does not have enough number of
> replicas", which I don't believe is indicative of a bug in hadoop or the
> accumulo code. I think what's more likely is that the multithreaded test
> was overwhelming the minidfs cluster with requests. I'm not sure which
> default value/behavior was updated in hadoop 3.3.1 that would cause the
> minicluster to blow up where it wasn't previously in 3.3.0, but I noticed
> in later commits the issue was resolved. It looks like the ClientContext
> changes in the very next code change (commit
> 4b66b96b8f6c65c390fc26c11acf8c51cb78d858) resolve the IT failures that were
> the reason for moving the hadoop version back to 3.3.0. If you check out
> that or any later commit, and update the hadoop dependency version in the
> parent pom to 3.3.1, then the IT failures are resolved.
>
> The reason I'd like the hadoop 3.3.1 version to make it into the next
> release version of Accumulo is because I've been experimenting with
> Accumulo using S3 as the underlying file system. This change (
> https://issues.apache.org/jira/browse/HADOOP-17597) that was added in
> hadoop 3.3.1 makes it possible to use the S3AFileSystem defined in
> hadoop-aws with Accumulo to replace HDFS with S3. The only change needed is
> to update the manager.walog.closer.implementation property and supply an
> S3LogCloser implementation on the classpath.
>
> Thanks,
> Chris


Reconsider Hadoop 3.3.1 for 2.1.0 release

2022-05-19 Thread Chris Bevard
Hi,

I was wondering if the dev team would reconsider using the Hadoop 3.3.1
version for the next release version of Accumulo. I noticed that the hadoop
dependency version was updated to 3.3.1 briefly by
commit 3c3a91f7a4b6ea290a383a77844cabae34eaeb1f, but it was dropped back to
3.3.0 in commit 48679fef73e246de52fbeecad03f974f2116b97a shortly after.
The explanation for undoing the change was that hadoop 3.3.1 was causing
intermittent IT failures, most frequently in the CountNameNodeOpsBulkIT.

I checked out that commit myself and also noticed that the
CountNameNodeOpsBulkIT was failing often with an IOException "Unable to
close file because the last block...does not have enough number of
replicas", which I don't believe is indicative of a bug in hadoop or the
accumulo code. I think what's more likely is that the multithreaded test
was overwhelming the minidfs cluster with requests. I'm not sure which
default value/behavior was updated in hadoop 3.3.1 that would cause the
minicluster to blow up where it wasn't previously in 3.3.0, but I noticed
in later commits the issue was resolved. It looks like the ClientContext
changes in the very next code change (commit
4b66b96b8f6c65c390fc26c11acf8c51cb78d858) resolve the IT failures that were
the reason for moving the hadoop version back to 3.3.0. If you check out
that or any later commit, and update the hadoop dependency version in the
parent pom to 3.3.1, then the IT failures are resolved.

The reason I'd like the hadoop 3.3.1 version to make it into the next
release version of Accumulo is because I've been experimenting with
Accumulo using S3 as the underlying file system. This change (
https://issues.apache.org/jira/browse/HADOOP-17597) that was added in
hadoop 3.3.1 makes it possible to use the S3AFileSystem defined in
hadoop-aws with Accumulo to replace HDFS with S3. The only change needed is
to update the manager.walog.closer.implementation property and supply an
S3LogCloser implementation on the classpath.
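
For illustration, that setup might look like the following sketch (the bucket
name and the S3LogCloser class are placeholders; only
manager.walog.closer.implementation is named above):

  instance.volumes=s3a://my-bucket/accumulo
  manager.walog.closer.implementation=com.example.S3LogCloser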

Thanks,
Chris


RE: Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-21 Thread Arvind Shyamsundar
Good news: after fixing up the classpath to:

  $HADOOP_PREFIX/share/hadoop/client/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/common/lib/(?!slf4j)[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/hdfs/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/mapreduce/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/yarn/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/yarn/lib/jersey.*.jar

init works, and tservers also start up correctly. I am going to test a bit more 
and if this proves out 100% then I will push a PR to Muchos to incorporate this.

Once again, thank you!

-Original Message-
From: Arvind Shyamsundar 
Sent: Thursday, November 21, 2019 10:29 AM
To: dev@accumulo.apache.org
Subject: RE: Issues building 1.9-snapshot and Hadoop 3.1.3

Thanks, Keith, for all your inputs. FYI this cluster was deployed via Muchos
and that accumulo-site template has:

  $HADOOP_PREFIX/share/hadoop/common/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/common/lib/(?!slf4j)[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/hdfs/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/mapreduce/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/yarn/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/yarn/lib/jersey.*.jar

I will try modifying this and get back.

Thanks again!

-Original Message-
From: Keith Turner  
Sent: Thursday, November 21, 2019 9:59 AM
To: Accumulo Dev List 
Subject: Re: Issues building 1.9-snapshot and Hadoop 3.1.3

Can you check that your accumulo-site.xml only adds 
$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar for hadoop deps for the setting 
general.classpaths?  Not completely sure, but I think this will use the hadoop 
shaded jars.

Do not want the non-shaded hadoop jars like 
$HADOOP_PREFIX/share/hadoop/common/[^.].*.jar on the path.

On Wed, Nov 20, 2019 at 10:51 PM Arvind Shyamsundar 
 wrote:
>
> Hello!
> Per this issue (https://github.com/apache/accumulo/issues/569) building 1.9.x
> with Hadoop 3 support needs hadoop.profile=3. So I checked
> out current 1.9 branch and built with -Dhadoop.profile=3. When I deployed 
> this "custom" Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at 
> org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvi
> der.(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 
> is 27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I 
> set about to build 1.9 with Guava 27.0-jre. I had to set the compiler version 
> to 1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
> problems with modernizer. Without disabling modernizer, the refactor involved 
> looks non-trivial. I also had issues with outdated interfaces in 
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
> RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
> FWIW, I pushed my changes here: 
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.
>
> So my question is: are these known issues with the current 1.9 branch and 
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.


Re: Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-21 Thread Sean Busbey
swap out Guava 27.0-jre with 27.0-android
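
That is, something like this dependency pin for the 1.9 experiment (a sketch;
the -android flavor of Guava 27.0 is the Java 7-compatible line, which would
avoid having to raise the compiler level and fight modernizer):

  <dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>27.0-android</version>
  </dependency>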

On Wed, Nov 20, 2019 at 9:51 PM Arvind Shyamsundar
 wrote:
>
> Hello!
> Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x 
> with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9 
> branch and built with -Dhadoop.profile=3. When I deployed this "custom" 
> Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at 
> org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 
> is 27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I 
> set about to build 1.9 with Guava 27.0-jre. I had to set the compiler version 
> to 1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
> problems with modernizer. Without disabling modernizer, the refactor involved 
> looks non-trivial. I also had issues with outdated interfaces in 
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
> RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
> FWIW, I pushed my changes here: 
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.
>
> So my question is: are these known issues with the current 1.9 branch and 
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.



-- 
busbey


RE: Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-21 Thread Arvind Shyamsundar
Thanks, Keith, for all your inputs. FYI this cluster was deployed via Muchos
and that accumulo-site template has:

  $HADOOP_PREFIX/share/hadoop/common/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/common/lib/(?!slf4j)[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/hdfs/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/mapreduce/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/yarn/[^.].*.jar,
  $HADOOP_PREFIX/share/hadoop/yarn/lib/jersey.*.jar

I will try modifying this and get back.

Thanks again!

-Original Message-
From: Keith Turner  
Sent: Thursday, November 21, 2019 9:59 AM
To: Accumulo Dev List 
Subject: Re: Issues building 1.9-snapshot and Hadoop 3.1.3

Can you check that your accumulo-site.xml only adds 
$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar for hadoop deps for the setting 
general.classpaths?  Not completely sure, but I think this will use the hadoop 
shaded jars.

Do not want the non-shaded hadoop jars like 
$HADOOP_PREFIX/share/hadoop/common/[^.].*.jar on the path.

On Wed, Nov 20, 2019 at 10:51 PM Arvind Shyamsundar 
 wrote:
>
> Hello!
> Per this issue (https://github.com/apache/accumulo/issues/569) building 1.9.x
> with Hadoop 3 support needs hadoop.profile=3. So I checked
> out current 1.9 branch and built with -Dhadoop.profile=3. When I deployed 
> this "custom" Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at 
> org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvi
> der.(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 
> is 27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I 
> set about to build 1.9 with Guava 27.0-jre. I had to set the compiler version 
> to 1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
> problems with modernizer. Without disabling modernizer, the refactor involved 
> looks non-trivial. I also had issues with outdated interfaces in 
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
> RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
> FWIW, I pushed my changes here: 
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.
>
> So my question is: are these known issues with the current 1.9 branch and 
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.


Re: Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-21 Thread Keith Turner
Can you check that your accumulo-site.xml only adds
$HADOOP_PREFIX/share/hadoop/client/[^.].*.jar for hadoop deps for the
setting general.classpaths?  Not completely sure, but I think this
will use the hadoop shaded jars.

Do not want the non-shaded hadoop jars like
$HADOOP_PREFIX/share/hadoop/common/[^.].*.jar on the path.

On Wed, Nov 20, 2019 at 10:51 PM Arvind Shyamsundar
 wrote:
>
> Hello!
> Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x 
> with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9 
> branch and built with -Dhadoop.profile=3. When I deployed this "custom" 
> Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at 
> org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 
> is 27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I 
> set about to build 1.9 with Guava 27.0-jre. I had to set the compiler version 
> to 1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
> problems with modernizer. Without disabling modernizer, the refactor involved 
> looks non-trivial. I also had issues with outdated interfaces in 
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
> RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
> FWIW, I pushed my changes here: 
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.
>
> So my question is: are these known issues with the current 1.9 branch and 
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.


Re: Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-21 Thread Keith Turner
Another possible path to solve this is with a different classpath and
dependency for hadoop 3.  In Accumulo 2.0 we depend on the hadoop
client shaded jar, which has its own shaded and relocated version of
Guava internally.  Using the Hadoop shaded jar would solve this
problem.  Not sure what that change would look like though.  Also, it
leaves Accumulo using an older version of Guava which Hadoop upgraded
away from for security reasons.
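
A sketch of what depending on Hadoop 3's shaded client artifacts could look
like (hadoop-client-api and hadoop-client-runtime are the shaded client jars;
the version shown is illustrative):

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client-api</artifactId>
    <version>3.1.3</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client-runtime</artifactId>
    <version>3.1.3</version>
    <scope>runtime</scope>
  </dependency>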

On Wed, Nov 20, 2019 at 10:51 PM Arvind Shyamsundar
 wrote:
>
> Hello!
> Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x 
> with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9 
> branch and built with -Dhadoop.profile=3. When I deployed this "custom" 
> Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at 
> org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 
> is 27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I 
> set about to build 1.9 with Guava 27.0-jre. I had to set the compiler version 
> to 1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
> problems with modernizer. Without disabling modernizer, the refactor involved 
> looks non-trivial. I also had issues with outdated interfaces in 
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
> RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
> FWIW, I pushed my changes here: 
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.
>
> So my question is: are these known issues with the current 1.9 branch and 
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.


Re: Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-21 Thread Keith Turner
I looked at the history[1] of the hadoop project pom and found that
HADOOP-16213[2] seems to be the cause of this change. So it seems like
we need to bump the guava version if we want to work with newer
versions of Hadoop 3.

One of the goals of 1.9 (and I think 1.10) is to be a bridge version
between hadoop 2 and 3.  Need to determine if there is a good way to
achieve this goal and if this goal is still desired.  If hadoop 2 is
still using an older version of Guava, then maybe we could make
Accumulo's 1.x source build against the new and old versions of Guava
and make the hadoop 3 profile use the newer version of Guava.  Not
sure if this is possible.

The modernizer and Java 8 compiler issue is its own, separate problem.

[1]: 
https://github.com/apache/hadoop/commits/release-3.1.3-RC0/hadoop-project/pom.xml
[2]: https://issues.apache.org/jira/browse/HADOOP-16213
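
If per-profile Guava versions prove workable, a sketch of the idea (the
profile id and the guava.version property are illustrative, not from the
actual POM):

  <profile>
    <id>hadoop3</id>
    <properties>
      <guava.version>27.0-jre</guava.version>
    </properties>
  </profile>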

On Wed, Nov 20, 2019 at 10:51 PM Arvind Shyamsundar
 wrote:
>
> Hello!
> Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x 
> with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9 
> branch and built with -Dhadoop.profile=3. When I deployed this "custom" 
> Accumulo build with Hadoop 3.1.3, accumulo init failed:
>
> Caused by: java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at 
> org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.(ConfiguredFailoverProxyProvider.java:44)
>
> This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 
> is 27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I 
> set about to build 1.9 with Guava 27.0-jre. I had to set the compiler version 
> to 1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
> problems with modernizer. Without disabling modernizer, the refactor involved 
> looks non-trivial. I also had issues with outdated interfaces in 
> DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
> RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
> FWIW, I pushed my changes here: 
> https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.
>
> So my question is: are these known issues with the current 1.9 branch and 
> Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?
>
> Thank you.
>
> - Arvind.


Issues building 1.9-snapshot and Hadoop 3.1.3

2019-11-20 Thread Arvind Shyamsundar
Hello!
Per this issue(https://github.com/apache/accumulo/issues/569) building 1.9.x 
with Hadoop 3 support needs hadoop.profile=3. So I checked out current 1.9 
branch and built with -Dhadoop.profile=3. When I deployed this "custom" 
Accumulo build with Hadoop 3.1.3, accumulo init failed:

Caused by: java.lang.NoSuchMethodError: 
com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.conf.Configuration.setInt(Configuration.java:1515)
at 
org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider.(AbstractNNFailoverProxyProvider.java:70)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.(ConfiguredFailoverProxyProvider.java:44)

This is related to Guava. The version of Guava that is used by Hadoop 3.1.3 is 
27.0-jre while Accumulo 1.9 still depends (and includes) Guava 14.0. So I set 
about to build 1.9 with Guava 27.0-jre. I had to set the compiler version to 
1.8. As Christopher had mentioned in the 1.10 thread, I also ran into
problems with modernizer. Without disabling modernizer, the refactor involved 
looks non-trivial. I also had issues with outdated interfaces in 
DataoutputHasher.java, CloseWriteAheadLogReferences.java, 
RemoveCompleteReplicationRecords.java but those were relatively easy fixes. 
FWIW, I pushed my changes here: 
https://github.com/apache/accumulo/compare/master...arvindshmicrosoft:temp-1.9-guava27.

So my question is: are these known issues with the current 1.9 branch and 
Hadoop? Do we want to support Hadoop 3.1 / 3.2 with Accumulo 1.10?

Thank you.

- Arvind.


Re: [DISCUSS] dropping hadoop 2 support

2018-04-12 Thread Christopher
It seems there's consensus on this, so I'll start moving forward on this
soon.

On Tue, Feb 27, 2018 at 12:40 PM Keith Turner <ke...@deenlo.com> wrote:

> +1
>
> On Tue, Feb 27, 2018 at 10:42 AM, Sean Busbey <bus...@apache.org> wrote:
> > Let's get the discussion started early on when we'll drop hadoop 2 support.
> >
> > As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported
> > in 1.y releases as of 1.9.0. That gives an upgrade path so that folks
> > won't have to upgrade both Hadoop and Accumulo at the same time.
> >
> > How about Accumulo 2.0.0 requires Hadoop 3?
> >
> > If there's a compelling reason for our users to stay on Hadoop 2.y
> > releases, we can keep making Accumulo 1.y releases. Due to the shift away
> > from maintenance releases in Hadoop we'll need to get more aggressive in
> > adopting minor releases.
>


Re: [DISCUSS] status of Hadoop 3 for 1.9.0 release

2018-03-01 Thread Josh Elser
Yeah, if Hadoop has changed their stance, propagating a "use at your own
risk" would be sufficient from our end.


On 3/1/18 6:06 PM, Christopher wrote:

If there's a risk, I'd suggest calling things out as "experimental" in the
release notes, and encourage users to try it and give us feedback.

On Thu, Mar 1, 2018 at 5:10 PM Sean Busbey <bus...@apache.org> wrote:


hi folks!

While reviewing things in prep for getting our master branch over to
apache hadoop 3 only (see related discussion [1]), I noticed some wording
on the last RC[2] for Hadoop 3.0.1:


Please note:
* HDFS-12990. Change default NameNode RPC port back to 8020. It makes
incompatible changes to Hadoop 3.0.0. After 3.0.1 releases, Apache
Hadoop 3.0.0 will be deprecated due to this change.


Hadoop 3.0.0 was a production-ready release; the community did an extended
set of alpha/beta releases to shake out the kinds of things that would have
required labeling the X.Y.0 release as non-production in previous Hadoop 2
release lines. Deprecating it is a pretty strong signal, but from the
extended discussion[3] it seems to me that this isn't meant to indicate that
the entire 3.0 release line will stop.

What do folks think?

- No problem from our perspective?
- Worth waiting to ship a Hadoop 3 ready release until Hadoop 3.0.1 comes
out?
- Worth waiting to ship a Hadoop 3 ready release until Hadoop 3.1.0 comes
out?
- Leave things as-is and give a word of warning for would-be early
adopters in our release notes?
- Expressly call things out in our release notes as "experimental" and we
might make changes once later Hadoop 3s come out?

[1]: https://s.apache.org/pOKv
[2]: https://s.apache.org/brE4
[3]: https://s.apache.org/BWd6





Re: [DISCUSS] status of Hadoop 3 for 1.9.0 release

2018-03-01 Thread Christopher
If there's a risk, I'd suggest calling things out as "experimental" in the
release notes, and encourage users to try it and give us feedback.

On Thu, Mar 1, 2018 at 5:10 PM Sean Busbey <bus...@apache.org> wrote:

> hi folks!
>
> While reviewing things in prep for getting our master branch over to
> apache hadoop 3 only (see related discussion [1]), I noticed some wording
> on the last RC[2] for Hadoop 3.0.1:
>
> > Please note:
> > * HDFS-12990. Change default NameNode RPC port back to 8020. It makes
> > incompatible changes to Hadoop 3.0.0. After 3.0.1 releases, Apache
> > Hadoop 3.0.0 will be deprecated due to this change.
>
> Hadoop 3.0.0 was a production-ready release; the community did an extended
> set of alpha/beta releases to shake out the kinds of things that would have
> required labeling the X.Y.0 release as non-production in previous Hadoop 2
> release lines. Deprecating it is a pretty strong signal, but from the
> extended discussion[3] it seems to me that this isn't meant to indicate that
> the entire 3.0 release line will stop.
>
> What do folks think?
>
> - No problem from our perspective?
> - Worth waiting to ship a Hadoop 3 ready release until Hadoop 3.0.1 comes
> out?
> - Worth waiting to ship a Hadoop 3 ready release until Hadoop 3.1.0 comes
> out?
> - Leave things as-is and give a word of warning for would-be early
> adopters in our release notes?
> - Expressly call things out in our release notes as "experimental" and we
> might make changes once later Hadoop 3s come out?
>
> [1]: https://s.apache.org/pOKv
> [2]: https://s.apache.org/brE4
> [3]: https://s.apache.org/BWd6
>


[DISCUSS] status of Hadoop 3 for 1.9.0 release

2018-03-01 Thread Sean Busbey
hi folks!

While reviewing things in prep for getting our master branch over to apache 
hadoop 3 only (see related discussion [1]), I noticed some wording on the last 
RC[2] for Hadoop 3.0.1:

> Please note:
> * HDFS-12990. Change default NameNode RPC port back to 8020. It makes
> incompatible changes to Hadoop 3.0.0. After 3.0.1 releases, Apache
> Hadoop 3.0.0 will be deprecated due to this change.

Hadoop 3.0.0 was a production-ready release; the community did an extended set 
of alpha/beta releases to shake out the kinds of things that would have 
required labeling the X.Y.0 release as non-production in previous Hadoop 2 
release lines. Deprecating it is a pretty strong signal, but from the extended 
discussion[3] it seems to me that this isn't meant to indicate that the entire 3.0
release line will stop.

What do folks think? 

- No problem from our perspective?
- Worth waiting to ship a Hadoop 3 ready release until Hadoop 3.0.1 comes out?
- Worth waiting to ship a Hadoop 3 ready release until Hadoop 3.1.0 comes out?
- Leave things as-is and give a word of warning for would-be early adopters in 
our release notes?
- Expressly call things out in our release notes as "experimental" and we might 
make changes once later Hadoop 3s come out?

[1]: https://s.apache.org/pOKv
[2]: https://s.apache.org/brE4
[3]: https://s.apache.org/BWd6


Re: [DISCUSS] dropping hadoop 2 support

2018-02-27 Thread Keith Turner
+1

On Tue, Feb 27, 2018 at 10:42 AM, Sean Busbey <bus...@apache.org> wrote:
> Let's get the discussion started early on when we'll drop hadoop 2 support.
>
> As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported in 
> 1.y releases as of 1.9.0. That gives an upgrade path so that folks won't have 
> to upgrade both Hadoop and Accumulo at the same time.
>
> How about Accumulo 2.0.0 requires Hadoop 3?
>
> If there's a compelling reason for our users to stay on Hadoop 2.y releases, 
> we can keep making Accumulo 1.y releases. Due to the shift away from 
> maintenance releases in Hadoop we'll need to get more aggressive in adopting 
> minor releases.


Re: [DISCUSS] dropping hadoop 2 support

2018-02-27 Thread Josh Elser

+1

AFAIK, this wouldn't have to be anything more than build changes. 
"Dropping hadoop2 support" wouldn't need to include any other changes 
(as adding H3 support didn't require any Java changes). Getting in front 
of the ball to help push people towards newer versions would be a 
welcome change.


On 2/27/18 10:42 AM, Sean Busbey wrote:

Let's get the discussion started early on when we'll drop hadoop 2 support.

As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported in 
1.y releases as of 1.9.0. That gives an upgrade path so that folks won't have 
to upgrade both Hadoop and Accumulo at the same time.

How about Accumulo 2.0.0 requires Hadoop 3?

If there's a compelling reason for our users to stay on Hadoop 2.y releases, we 
can keep making Accumulo 1.y releases. Due to the shift away from maintenance 
releases in Hadoop we'll need to get more aggressive in adopting minor releases.



Re: [DISCUSS] dropping hadoop 2 support

2018-02-27 Thread Christopher
+1 I'm in favor of this.

On Tue, Feb 27, 2018 at 10:42 AM Sean Busbey <bus...@apache.org> wrote:

> Let's get the discussion started early on when we'll drop hadoop 2 support.
>
> As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported
> in 1.y releases as of 1.9.0. That gives an upgrade path so that folks won't
> have to upgrade both Hadoop and Accumulo at the same time.
>
> How about Accumulo 2.0.0 requires Hadoop 3?
>
> If there's a compelling reason for our users to stay on Hadoop 2.y
> releases, we can keep making Accumulo 1.y releases. Due to the shift away
> from maintenance releases in Hadoop we'll need to get more aggressive in
> adopting minor releases.
>


[DISCUSS] dropping hadoop 2 support

2018-02-27 Thread Sean Busbey
Let's get the discussion started early on when we'll drop hadoop 2 support.

As of ACCUMULO-4826 we are poised to have Hadoop 2 and Hadoop 3 supported in 
1.y releases as of 1.9.0. That gives an upgrade path so that folks won't have 
to upgrade both Hadoop and Accumulo at the same time.

How about Accumulo 2.0.0 requires Hadoop 3?

If there's a compelling reason for our users to stay on Hadoop 2.y releases, we 
can keep making Accumulo 1.y releases. Due to the shift away from maintenance 
releases in Hadoop we'll need to get more aggressive in adopting minor releases.


Re: [DISCUSS] Hadoop 3 and our dependencies generally

2017-08-03 Thread Mike Drob
There is a 3.0.0-alpha4 release currently available as a non-snapshot
version.

I'm not sure it comes with API stability guarantees at all; IIRC, the Hadoop
community is planning on providing those for their betas.

Mike

On Thu, Aug 3, 2017 at 12:17 PM, Christopher <ctubb...@apache.org> wrote:

> +1 from me, too, but I'd like to review what actually changes in master for
> the migration to happen. I don't know much about Hadoop 3. I'm curious what
> the releases will look like (AFAIK, it's only snapshot builds right now; is
> that correct?), how our dependencies will change, and what API stability
> guarantees it offers.
>
> On Thu, Aug 3, 2017 at 12:59 PM Josh Elser <els...@apache.org> wrote:
>
> > +1 sounds like a good idea to me.
> >
> > On 8/3/17 10:08 AM, Sean Busbey wrote:
> > > Hi Folks!
> > >
> > > I think we need to start being more formal in planning for Hadoop 3.
> > > They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].
> > >
> > > What do folks think about starting to push on an Accumulo 2.0 release
> > > that only supports Hadoop 3? It would let us move faster, which we'll
> > > need to do if we want to get any API changes in to the Hadoop 3 line.
> > >
> > > If we get started soon we can probably make it parity on beta/GA status
> > > with the Hadoop 3 line. That would give us a beta for Accumulo Summit
> > > and GA by end of the year.
> > >
> > > Going to just Hadoop 3+ would also be a sufficient dependency break
> > > that we could do a pass updating any lagging dependencies to latest
> > > major version.
> > >
> > > [1]: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
> >
>


Re: [DISCUSS] Hadoop 3 and our dependencies generally

2017-08-03 Thread Christopher
+1 from me, too, but I'd like to review what actually changes in master for
the migration to happen. I don't know much about Hadoop 3. I'm curious what
the releases will look like (AFAIK, it's only snapshot builds right now; is
that correct?), how our dependencies will change, and what API stability
guarantees it offers.

On Thu, Aug 3, 2017 at 12:59 PM Josh Elser <els...@apache.org> wrote:

> +1 sounds like a good idea to me.
>
> On 8/3/17 10:08 AM, Sean Busbey wrote:
> > Hi Folks!
> >
> > I think we need to start being more formal in planning for Hadoop 3.
> > They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].
> >
> > What do folks think about starting to push on an Accumulo 2.0 release
> > that only supports Hadoop 3? It would let us move faster, which we'll
> > need to do if we want to get any API changes in to the Hadoop 3 line.
> >
> > If we get started soon we can probably make it parity on beta/GA status
> > with the Hadoop 3 line. That would give us a beta for Accumulo Summit and
> > GA by end of the year.
> >
> > Going to just Hadoop 3+ would also be a sufficient dependency break that
> > we could do a pass updating any lagging dependencies to latest major
> > version.
> >
> > [1]: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
> >
>


Re: [DISCUSS] Hadoop 3 and our dependencies generally

2017-08-03 Thread Josh Elser

+1 sounds like a good idea to me.

On 8/3/17 10:08 AM, Sean Busbey wrote:

Hi Folks!

I think we need to start being more formal in planning for Hadoop 3.
They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].

What do folks think about starting to push on an Accumulo 2.0 release that
only supports Hadoop 3? It would let us move faster, which we'll need to do
if we want to get any API changes in to the Hadoop 3 line.

If we get started soon we can probably make it parity on beta/GA status
with the Hadoop 3 line. That would give us a beta for Accumulo Summit and
GA by end of the year.

Going to just Hadoop 3+ would also be a sufficient dependency break that we
could do a pass updating any lagging dependencies to latest major version.


[1]: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release



Re: [DISCUSS] Hadoop 3 and our dependencies generally

2017-08-03 Thread Keith Turner
I am in favor of going to Hadoop 3 for Accumulo 2.  If we do this then
Accumulo 2 can not release until after Hadoop 3 does.  Any idea when
Hadoop 3 will release?

On Thu, Aug 3, 2017 at 10:08 AM, Sean Busbey <bus...@cloudera.com> wrote:
> Hi Folks!
>
> I think we need to start being more formal in planning for Hadoop 3.
> They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].
>
> What do folks think about starting to push on an Accumulo 2.0 release that
> only supports Hadoop 3? It would let us move faster, which we'll need to do
> if we want to get any API changes in to the Hadoop 3 line.
>
> If we get started soon we can probably make it parity on beta/GA status
> with the Hadoop 3 line. That would give us a beta for Accumulo Summit and
> GA by end of the year.
>
> Going to just Hadoop 3+ would also be a sufficient dependency break that we
> could do a pass updating any lagging dependencies to latest major version.
>
>
> [1]: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
> --
> busbey


Re: [DISCUSS] Hadoop 3 and our dependencies generally

2017-08-03 Thread Michael Hogue
All,

   For what it's worth, I'd attempted to run both 1.7.x and 1.8.x on Hadoop
3 and ran into a fairly straightforward dependency issue [1] that, when
addressed, should allow current Accumulo versions to run on Hadoop 3.
Hopefully this means that it's not a large lift to get to the point you're
aiming for.

Thanks,
Mike

[1] https://issues.apache.org/jira/browse/ACCUMULO-4611


On Thu, Aug 3, 2017 at 10:09 AM Sean Busbey <bus...@cloudera.com> wrote:

> Hi Folks!
>
> I think we need to start being more formal in planning for Hadoop 3.
> They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].
>
> What do folks think about starting to push on an Accumulo 2.0 release that
> only supports Hadoop 3? It would let us move faster, which we'll need to do
> if we want to get any API changes in to the Hadoop 3 line.
>
> If we get started soon we can probably make it parity on beta/GA status
> with the Hadoop 3 line. That would give us a beta for Accumulo Summit and
> GA by end of the year.
>
> Going to just Hadoop 3+ would also be a sufficient dependency break that we
> could do a pass updating any lagging dependencies to latest major version.
>
>
> [1]:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
> --
> busbey
>


[DISCUSS] Hadoop 3 and our dependencies generally

2017-08-03 Thread Sean Busbey
Hi Folks!

I think we need to start being more formal in planning for Hadoop 3.
They're up to 3.0.0-alpha4 and are pushing towards API-locking betas[1].

What do folks think about starting to push on an Accumulo 2.0 release that
only supports Hadoop 3? It would let us move faster, which we'll need to do
if we want to get any API changes in to the Hadoop 3 line.

If we get started soon we can probably make it parity on beta/GA status
with the Hadoop 3 line. That would give us a beta for Accumulo Summit and
GA by end of the year.

Going to just Hadoop 3+ would also be a sufficient dependency break that we
could do a pass updating any lagging dependencies to latest major version.


[1]: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
-- 
busbey


Re: Correct configuration for NetBeans to use Accumulo,Hadoop,Zookeeper

2017-05-09 Thread Michael Wall
Hi Giuseppe,

Josh had forwarded your email to the user list at u...@accumulo.apache.org when
he replied to you.  I responded to that email.

Mike

On Tue, May 9, 2017 at 12:29 PM Pino <giusur...@gmail.com> wrote:

> Hello everyone,
> I should say up front that I have never worked with Accumulo. I need to
> develop a Java application using NetBeans which in turn uses Accumulo,
> Hadoop, and ZooKeeper. I wanted to ask what the correct configuration for
> doing this with NetBeans is, such as which libraries to use and more.
> Thanks in advance for your time, and please excuse my somewhat rough
> English.
>
>
>
>


Correct configuration for NetBeans to use Accumulo,Hadoop,Zookeeper

2017-05-09 Thread Pino
Hello everyone,
I should say up front that I have never worked with Accumulo. I need to
develop a Java application using NetBeans which in turn uses Accumulo,
Hadoop, and ZooKeeper. I wanted to ask what the correct configuration for
doing this with NetBeans is, such as which libraries to use and more.
Thanks in advance for your time, and please excuse my somewhat rough English.





Re: How to remove all tables of accumulo or format hadoop files for accumulo

2017-04-28 Thread Dylan Hutchison
For deleting test tables, you may find the flag "-p" for the deletetable
shell command useful.  For example,

deletetable -f -p Test.*


will delete all tables that begin with the prefix "Test".  You could even
call this from a script if you want to automate it into your build.
Perhaps this would solve your problem without having to re-format HDFS.
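
For example, automating it from a script might look like this sketch (-e runs
a single shell command; the credentials are placeholders):

  accumulo shell -u root -p secret -e 'deletetable -f -p Test.*'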

On Fri, Apr 28, 2017 at 12:43 AM, Suresh Prajapati <
sureshpraja1...@gmail.com> wrote:

> Hello Team
>
> I want to clear all records in Accumulo on my local machine and delete the
> unused tables created while testing. I found the deletetable command, which
> can be used from the Accumulo shell; however, that would require a lot of
> manual work to delete a large number of tables. I also tried the
> instructions
> <https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_installing_manually_book/content/format_and_start_hdfs.html>
> to format the Hadoop namenode, but that doesn't seem to work. My question
> is: how can I remove all tables and have a fresh start with the Accumulo
> datastore? Any suggestion is welcome. Tons of thanks in advance.
>
> Thank You
> Suresh Prajapati
>


[GitHub] accumulo-testing pull request #3: ACCUMULO-4579 Fixed hadoop config bug in a...

2017-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/accumulo-testing/pull/3




[GitHub] accumulo-testing pull request #3: ACCUMULO-4579 Fixed hadoop config bug in a...

2017-02-01 Thread mikewalch
Github user mikewalch commented on a diff in the pull request:

https://github.com/apache/accumulo-testing/pull/3#discussion_r98955888
  
--- Diff: core/src/main/java/org/apache/accumulo/testing/core/TestEnv.java ---
@@ -96,15 +97,22 @@ public String getPid() {
   }
 
   public Configuration getHadoopConfiguration() {
-    Configuration config = new Configuration();
-    config.set("mapreduce.framework.name", "yarn");
-    // Setting below are required due to bundled jar breaking default
-    // config.
-    // See
-    // http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
-    config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
-    config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
-    return config;
+    if (hadoopConfig == null) {
+      String hadoopPrefix = System.getenv("HADOOP_PREFIX");
+      if (hadoopPrefix == null || hadoopPrefix.isEmpty()) {
+        throw new IllegalArgumentException("HADOOP_PREFIX must be sent in env");
+      }
+      hadoopConfig = new Configuration();
+      hadoopConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/core-site.xml"));
--- End diff --

I pushed another commit 62e91527 where I created new properties in 
`accumulo-testing.properties` to avoid relying on loading Hadoop config files 
using `HADOOP_PREFIX`.  I also added `accumulo` to several properties that 
configured Accumulo scanners.




[GitHub] accumulo-testing pull request #3: ACCUMULO-4579 Fixed hadoop config bug in a...

2017-01-31 Thread ctubbsii
Github user ctubbsii commented on a diff in the pull request:

https://github.com/apache/accumulo-testing/pull/3#discussion_r98796795
  
--- Diff: core/src/main/java/org/apache/accumulo/testing/core/TestEnv.java ---
@@ -96,15 +97,22 @@ public String getPid() {
   }
 
   public Configuration getHadoopConfiguration() {
-    Configuration config = new Configuration();
-    config.set("mapreduce.framework.name", "yarn");
-    // Setting below are required due to bundled jar breaking default
-    // config.
-    // See
-    // http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
-    config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
-    config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
-    return config;
+    if (hadoopConfig == null) {
+      String hadoopPrefix = System.getenv("HADOOP_PREFIX");
+      if (hadoopPrefix == null || hadoopPrefix.isEmpty()) {
+        throw new IllegalArgumentException("HADOOP_PREFIX must be sent in env");
+      }
+      hadoopConfig = new Configuration();
+      hadoopConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/core-site.xml"));
--- End diff --

I think using the properties file is better. I don't think we should be 
writing java code to depend on env variables, especially arbitrary script 
conventions like `HADOOP_PREFIX`, which is very sensitive to Hadoop 
packaging/deployment changes.




[GitHub] accumulo-testing pull request #3: ACCUMULO-4579 Fixed hadoop config bug in a...

2017-01-30 Thread mikewalch
Github user mikewalch commented on a diff in the pull request:

https://github.com/apache/accumulo-testing/pull/3#discussion_r98556591
  
--- Diff: core/src/main/java/org/apache/accumulo/testing/core/TestEnv.java ---
@@ -96,15 +97,22 @@ public String getPid() {
   }
 
   public Configuration getHadoopConfiguration() {
-    Configuration config = new Configuration();
-    config.set("mapreduce.framework.name", "yarn");
-    // Setting below are required due to bundled jar breaking default
-    // config.
-    // See
-    // http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
-    config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
-    config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
-    return config;
+    if (hadoopConfig == null) {
+      String hadoopPrefix = System.getenv("HADOOP_PREFIX");
+      if (hadoopPrefix == null || hadoopPrefix.isEmpty()) {
+        throw new IllegalArgumentException("HADOOP_PREFIX must be sent in env");
+      }
+      hadoopConfig = new Configuration();
+      hadoopConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/core-site.xml"));
--- End diff --

After talking to @keith-turner offline, I am going to submit an update that 
pull the needed Hadoop configuration from the accumulo-testing.properties file.




[GitHub] accumulo-testing pull request #3: ACCUMULO-4579 Fixed hadoop config bug in a...

2017-01-30 Thread mikewalch
GitHub user mikewalch opened a pull request:

https://github.com/apache/accumulo-testing/pull/3

ACCUMULO-4579 Fixed hadoop config bug in accumulo-testing

* TestEnv now returns Hadoop configuration that is loaded from config files
  but expects HADOOP_PREFIX to be set in environment
* Fixed bug where Twill was being improperly configured to look in wrong
  location for shaded jar when running test application in YARN.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mikewalch/accumulo-testing accumulo-4579

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/accumulo-testing/pull/3.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3


commit dd8cad26c82b7dc4f446873b855fe52d5d87046a
Author: Mike Walch <mwa...@apache.org>
Date:   2017-01-30T20:03:21Z

ACCUMULO-4579 Fixed hadoop config bug in accumulo-testing

* TestEnv now returns Hadoop configuration that is loaded from config files
  but expects HADOOP_PREFIX to be set in environment
* Fixed bug where Twill was being improperly configured to look in wrong
  location for shaded jar when running test application in YARN.






Re: Running Accumulo on a standard file system, without Hadoop

2017-01-27 Thread Christopher
My recent blog post about running Accumulo on Fedora 25 describes how to do
this using the RawLocalFileSystem implementation of Hadoop for Accumulo
volumes matching file://

https://accumulo.apache.org/blog/2016/12/19/running-on-fedora-25.html

This works with other packaging also, not just in Fedora 25, but I think
the step-by-step process in my blog post is probably the simplest way to
get started with that scenario. Currently, only version 1.6.6 on Hadoop
2.4.1 is available, though.
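
The gist of that setup, as a sketch (the local path is illustrative): point
the instance volume at a file:// URI,

  instance.volumes=file:///var/lib/accumulo

and map the file:// scheme to org.apache.hadoop.fs.RawLocalFileSystem in the
Hadoop configuration.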

On Mon, Jan 16, 2017 at 3:17 PM Dylan Hutchison <dhutc...@cs.washington.edu>
wrote:

> Hi folks,
>
> A friend of mine asked about running Accumulo on a normal file system in
> place of Hadoop, similar to the way MiniAccumulo runs.  How possible is
> this, or how much work would it take to do so?
>
> I think my friend is just interested in running on a single node, but I am
> curious about both the single-node and distributed (via parallel file
> system like Lustre) cases.
>
> Thanks, Dylan
>
-- 
Christopher


Re: Running Accumulo on a standard file system, without Hadoop

2017-01-17 Thread Keith Turner
On Mon, Jan 16, 2017 at 5:53 PM, Josh Elser <josh.el...@gmail.com> wrote:
>
>
> Dylan Hutchison wrote:
>>>
>>> You can configure HDFS to use the RawLocalFileSystem class for file://
>>> URIs, which is what is done for a majority of the integration tests.
>>> Beware that you configure the RawLocalFileSystem, as the
>>> ChecksumFileSystem (default for file://) will fail miserably around WAL
>>> recovery.
>>>
>>> https://github.com/apache/accumulo/blob/master/test/src/main/java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
>>>
>>>
>>
>> Hi Josh, are you saying that the ChecksumFileSystem is required or
>> forbidden for WAL recovery?  Looking at the Hadoop code it seems that
>> LocalFileSystem wraps around a RawLocalFileSystem to provide checksum
>> capabilities.  Is that right?
>>
>
> Sorry I wasn't clearer: forbidden. If you use the RawLocalFileSystem, you
> should not see any issues. If you use the ChecksumFileSystem (which is the
> default), you *will* see issues.

The ChecksumFileSystem does nothing for flush, that's why there are WAL
problems.  The RawLocalFileSystem pushes data to the OS (which may
buffer in memory for a short period) when flush is called.  However,
RawLocalFileSystem does not offer a way to force data to disk.  So
with RawLocalFileSystem you can restart Accumulo processes w/o losing
data.  However, if the OS is restarted then data may be lost.


Re: Running Accumulo on a standard file system, without Hadoop

2017-01-16 Thread Josh Elser



Dylan Hutchison wrote:

>  You can configure HDFS to use the RawLocalFileSystem class for file://
>  URIs, which is what is done for a majority of the integration tests. Beware
>  that you configure the RawLocalFileSystem, as the ChecksumFileSystem
>  (default for file://) will fail miserably around WAL recovery.
>
>  https://github.com/apache/accumulo/blob/master/test/src/main
>  /java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
>
>

Hi Josh, are you saying that the ChecksumFileSystem is required or
forbidden for WAL recovery?  Looking at the Hadoop code it seems that
LocalFileSystem wraps around a RawLocalFileSystem to provide checksum
capabilities.  Is that right?



Sorry I wasn't clearer: forbidden. If you use the RawLocalFileSystem, 
you should not see any issues. If you use the ChecksumFileSystem (which 
is the default), you *will* see issues.


Re: Running Accumulo on a standard file system, without Hadoop

2017-01-16 Thread Dylan Hutchison
On Mon, Jan 16, 2017 at 1:56 PM, Josh Elser <josh.el...@gmail.com> wrote:

> That's true, but HDFS supports multiple "implementations" based on the
> scheme of the URI being used.
>
> e.g. hdfs:// is mapped to DistributedFileSystem
>
> You can configure HDFS to use the RawLocalFileSystem class for file://
> URIs, which is what is done for a majority of the integration tests. Beware
> that you configure the RawLocalFileSystem, as the ChecksumFileSystem
> (default for file://) will fail miserably around WAL recovery.
>
> https://github.com/apache/accumulo/blob/master/test/src/main
> /java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
>
>
Hi Josh, are you saying that the ChecksumFileSystem is required or
forbidden for WAL recovery?  Looking at the Hadoop code it seems that
LocalFileSystem wraps around a RawLocalFileSystem to provide checksum
capabilities.  Is that right?


>
> Dave Marion wrote:
>
>> IIRC, Accumulo *only* uses the HDFS client, so it needs something on the
>> other side that can respond to that protocol. MiniAccumulo starts up
>> MiniHDFS for this. You could run some other type of service locally that is
>> HDFS client compatible (something like Quantcast QFS[1], setting up client
>> [2]). If Accumulo is using something in Hadoop outside of the public client
>> API, this may not work.
>>
>> [1] https://github.com/quantcast/qfs
>> [2] https://github.com/quantcast/qfs/wiki/Migration-Guide
>>
>>
>> -Original Message-
>>> From: Dylan Hutchison [mailto:dhutc...@cs.washington.edu]
>>> Sent: Monday, January 16, 2017 3:17 PM
>>> To: dev@accumulo.apache.org
>>> Subject: Running Accumulo on a standard file system, without Hadoop
>>>
>>> Hi folks,
>>>
>>> A friend of mine asked about running Accumulo on a normal file system in
>>> place of Hadoop, similar to the way MiniAccumulo runs.  How possible is
>>> this,
>>> or how much work would it take to do so?
>>>
>>> I think my friend is just interested in running on a single node, but I
>>> am
>>> curious about both the single-node and distributed (via parallel file
>>> system
>>> like Lustre) cases.
>>>
>>> Thanks, Dylan
>>>
>>
>>


Re: Running Accumulo on a standard file system, without Hadoop

2017-01-16 Thread Josh Elser
That's true, but HDFS supports multiple "implementations" based on the 
scheme of the URI being used.


e.g. hdfs:// is mapped to DistributedFileSystem

You can configure HDFS to use the RawLocalFileSystem class for file:// 
URIs, which is what is done for a majority of the integration tests. 
Beware that you configure the RawLocalFileSystem, as the 
ChecksumFileSystem (default for file://) will fail miserably around WAL 
recovery.


https://github.com/apache/accumulo/blob/master/test/src/main/java/org/apache/accumulo/test/BulkImportVolumeIT.java#L61
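
For illustration, a minimal sketch of that switch (my sketch, not the IT
code verbatim):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RawLocalFileSystem;

Configuration conf = new Configuration();
// Map file:// to the raw impl instead of the default ChecksumFileSystem
conf.set("fs.file.impl", RawLocalFileSystem.class.getName());
FileSystem fs = FileSystem.get(new Path("file:///tmp/accumulo").toUri(), conf);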

Dave Marion wrote:

IIRC, Accumulo *only* uses the HDFS client, so it needs something on the other 
side that can respond to that protocol. MiniAccumulo starts up MiniHDFS for 
this. You could run some other type of service locally that is HDFS client 
compatible (something like Quantcast QFS[1], setting up client [2]). If 
Accumulo is using something in Hadoop outside of the public client API, this 
may not work.

[1] https://github.com/quantcast/qfs
[2] https://github.com/quantcast/qfs/wiki/Migration-Guide



-Original Message-
From: Dylan Hutchison [mailto:dhutc...@cs.washington.edu]
Sent: Monday, January 16, 2017 3:17 PM
To: dev@accumulo.apache.org
Subject: Running Accumulo on a standard file system, without Hadoop

Hi folks,

A friend of mine asked about running Accumulo on a normal file system in
place of Hadoop, similar to the way MiniAccumulo runs.  How possible is this,
or how much work would it take to do so?

I think my friend is just interested in running on a single node, but I am
curious about both the single-node and distributed (via parallel file system
like Lustre) cases.

Thanks, Dylan




RE: Running Accumulo on a standard file system, without Hadoop

2017-01-16 Thread Dave Marion
IIRC, Accumulo *only* uses the HDFS client, so it needs something on the other 
side that can respond to that protocol. MiniAccumulo starts up MiniHDFS for 
this. You could run some other type of service locally that is HDFS client 
compatible (something like Quantcast QFS[1], setting up client [2]). If 
Accumulo is using something in Hadoop outside of the public client API, this 
may not work.

[1] https://github.com/quantcast/qfs
[2] https://github.com/quantcast/qfs/wiki/Migration-Guide


> -Original Message-
> From: Dylan Hutchison [mailto:dhutc...@cs.washington.edu]
> Sent: Monday, January 16, 2017 3:17 PM
> To: dev@accumulo.apache.org
> Subject: Running Accumulo on a standard file system, without Hadoop
> 
> Hi folks,
> 
> A friend of mine asked about running Accumulo on a normal file system in
> place of Hadoop, similar to the way MiniAccumulo runs.  How possible is this,
> or how much work would it take to do so?
> 
> I think my friend is just interested in running on a single node, but I am
> curious about both the single-node and distributed (via parallel file system
> like Lustre) cases.
> 
> Thanks, Dylan



Running Accumulo on a standard file system, without Hadoop

2017-01-16 Thread Dylan Hutchison
Hi folks,

A friend of mine asked about running Accumulo on a normal file system in
place of Hadoop, similar to the way MiniAccumulo runs.  How possible is
this, or how much work would it take to do so?

I think my friend is just interested in running on a single node, but I am
curious about both the single-node and distributed (via parallel file
system like Lustre) cases.

Thanks, Dylan


Re: [DISCUSS] Htrace4, Hadoop 2.7

2016-07-08 Thread Christopher
On Fri, Jul 8, 2016 at 5:05 PM Sean Busbey <bus...@cloudera.com> wrote:

> On Fri, Jul 8, 2016 at 3:40 PM, Christopher <ctubb...@apache.org> wrote:
> > On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey <bus...@cloudera.com> wrote:
> >> Would we be bumping the Hadoop version while incrementing our minor
> >> version number or our major version number?
> >>
> >>
> >>
> > Minor only, because it's not a breaking change necessarily, and it's
> > unrelated to API. It'd still be reasonable for somebody to patch the 1.x
> > version to use the earlier Hadoop/HTrace versions easily.
> >
> > Specifically, I was thinking for 1.8.0. Since H2.8 isn't out yet, that'd
> > mean either no change in 1.8.0, or a change to make it sync up with H2.7.
>
> My only concern would be that updating our listed Hadoop dependency
> version would make it easy for someone to accidentally rely on a
> Hadoop API call that wasn't in earlier versions, which would then make
> it harder for an interested person to patch their 1.y version to use
> the earlier Hadoop version.
>
> HBase checks compilation against different hadoop versions in their
> precommit checks. We could add something like that to our nightly
> builds maybe?
>
> Now that we're discussing it, I can't actually remember if we ever
> documented what version(s) of Hadoop we expect to work with. So maybe
> updating to the latest minor release of 2.y on each Accumulo 1.y minor
> release can just be our new thing.
>
> --
> busbey
>

I don't know that we'd have to update every time... but we can certainly
make it a point to consider prior to releasing.

Personally, I'm okay with newer versions of Accumulo requiring newer
versions of Hadoop, and using newer APIs which don't work on older Hadoops.
What we release is a baseline anyway... if users have specific needs for
specific deployments, they may have to do some
backporting/patching/dependency convergence/integration, and I think that's
okay. We can even try to help them along on the mailing lists when this
occurs. I don't think it's reasonable for us to try to make long-term
guarantees about being able to run on such a wide range of versions of
Hadoop. It's just not tenable to do that sort of thing upstream. We can be
cognizant and helpful, but sometimes it's easier to keep development going
by moving to newer deps.


Re: [DISCUSS] Htrace4, Hadoop 2.7

2016-07-08 Thread Josh Elser

Sean Busbey wrote:

On Fri, Jul 8, 2016 at 3:40 PM, Christopher<ctubb...@apache.org>  wrote:

On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey<bus...@cloudera.com>  wrote:

Would we be bumping the Hadoop version while incrementing our minor
version number or our major version number?




Minor only, because it's not a breaking change necessarily, and it's
unrelated to API. It'd still be reasonable for somebody to patch the 1.x
version to use the earlier Hadoop/HTrace versions easily.

Specifically, I was thinking for 1.8.0. Since H2.8 isn't out yet, that'd
mean either no change in 1.8.0, or a change to make it sync up with H2.7.


My only concern would be that updating our listed Hadoop dependency
version would make it easy for someone to accidentally rely on a
Hadoop API call that wasn't in earlier versions, which would then make
it harder for an interested person to patch their 1.y version to use
the earlier Hadoop version.

HBase checks compilation against different hadoop versions in their
precommit checks. We could add something like that to our nightly
builds maybe?

Now that we're discussing it, I can't actually remember if we ever
documented what version(s) of Hadoop we expect to work with. So maybe
updating to the latest minor release of 2.y on each Accumulo 1.y minor
release can just be our new thing.


It was 2.2.0 for the longest time. I feel like we switched it to 
2.6.something when we ran into some issues with UGI+Kerberos.


1.8.0 seems to be good timing for this (as is likely Christopher's 
reason for bringing it up now).


Re: [DISCUSS] Htrace4, Hadoop 2.7

2016-07-08 Thread Sean Busbey
On Fri, Jul 8, 2016 at 3:40 PM, Christopher <ctubb...@apache.org> wrote:
> On Fri, Jul 8, 2016 at 11:20 AM Sean Busbey <bus...@cloudera.com> wrote:
>> Would we be bumping the Hadoop version while incrementing our minor
>> version number or our major version number?
>>
>>
>>
> Minor only, because it's not a breaking change necessarily, and it's
> unrelated to API. It'd still be reasonable for somebody to patch the 1.x
> version to use the earlier Hadoop/HTrace versions easily.
>
> Specifically, I was thinking for 1.8.0. Since H2.8 isn't out yet, that'd
> mean either no change in 1.8.0, or a change to make it sync up with H2.7.

My only concern would be that updating our listed Hadoop dependency
version would make it easy for someone to accidentally rely on a
Hadoop API call that wasn't in earlier versions, which would then make
it harder for an interested person to patch their 1.y version to use
the earlier Hadoop version.

HBase checks compilation against different hadoop versions in their
precommit checks. We could add something like that to our nightly
builds maybe?
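
A check like that could start out as simply as building against each line
we claim to support, e.g. (assuming the hadoop.version property in our
poms):

$ mvn clean verify -Dhadoop.version=2.6.4
$ mvn clean verify -Dhadoop.version=2.7.2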

Now that we're discussing it, I can't actually remember if we ever
documented what version(s) of Hadoop we expect to work with. So maybe
updating to the latest minor release of 2.y on each Accumulo 1.y minor
release can just be our new thing.

-- 
busbey


Re: [DISCUSS] Htrace4, Hadoop 2.7

2016-07-07 Thread Christopher
I'm sure I know some people trying to use Accumulo+HDFS tracing, and it's
going to cause a problem no matter what, because Hadoop and Accumulo aren't
always upgraded at the same time. I just want to make sure it gets better
at some point, if both are sufficiently up-to-date.

Backporting patches to support specific users in custom environments isn't
a big deal, I think, so long as those backports don't have to be maintained
indefinitely, and the conflict will be resolved at some point in the
roadmap.

On Thu, Jul 7, 2016 at 6:07 PM Billie Rinaldi <billie.rina...@gmail.com>
wrote:

> Ah, that makes more sense. I would be fine with bumping the htrace
> dependency to match the most recent version of Hadoop that we support and
> not doing a shim layer. We might want to check in with any users who are
> using the Accumulo+HDFS tracing to see if this would be a problem for them.
> I am not sure if anyone is using it or not.
>
> Billie
>
> On Thu, Jul 7, 2016 at 2:42 PM, Christopher <ctubb...@apache.org> wrote:
>
> > Ah, my mistake. I thought it was 2.7 and later. Well, then I guess the
> > question is whether we should bump to 2.8, then. I'm not a fan of the
> shim
> > layer. I'd rather provide support for downstream packagers trying to
> > backport for HTrace3, if anybody ends up requiring that, than provide a
> > shim to preserve use of the older HTrace.
> >
> > On Thu, Jul 7, 2016 at 5:30 PM Billie Rinaldi <billie.rina...@gmail.com>
> > wrote:
> >
> > > I'm in favor of bumping our Hadoop version to 2.7. We are already on
> the
> > > same htrace version as Hadoop 2.7. (The discussion in ACCUMULO-4171 is
> > > relevant to Hadoop 2.8 and later.)
> > >
> > > Billie
> > >
> > > On Thu, Jul 7, 2016 at 2:20 PM, Christopher <ctubb...@apache.org>
> wrote:
> > >
> > > > Thinking about https://issues.apache.org/jira/browse/ACCUMULO-4171,
> > I'm
> > > of
> > > > the opinion that we should probably bump our Hadoop version to 2.7
> and
> > > > HTrace version to what Hadoop is using, to keep them in sync.
> > > >
> > > > Does anybody disagree?
> > > >
> > >
> >
>


Re: [DISCUSS] Htrace4, Hadoop 2.7

2016-07-07 Thread Christopher
Ah, my mistake. I thought it was 2.7 and later. Well, then I guess the
question is whether we should bump to 2.8, then. I'm not a fan of the shim
layer. I'd rather provide support for downstream packagers trying to
backport for HTrace3, if anybody ends up requiring that, than provide a
shim to preserve use of the older HTrace.

On Thu, Jul 7, 2016 at 5:30 PM Billie Rinaldi <billie.rina...@gmail.com>
wrote:

> I'm in favor of bumping our Hadoop version to 2.7. We are already on the
> same htrace version as Hadoop 2.7. (The discussion in ACCUMULO-4171 is
> relevant to Hadoop 2.8 and later.)
>
> Billie
>
> On Thu, Jul 7, 2016 at 2:20 PM, Christopher <ctubb...@apache.org> wrote:
>
> > Thinking about https://issues.apache.org/jira/browse/ACCUMULO-4171, I'm
> of
> > the opinion that we should probably bump our Hadoop version to 2.7 and
> > HTrace version to what Hadoop is using, to keep them in sync.
> >
> > Does anybody disagree?
> >
>


Re: [DISCUSS] Htrace4, Hadoop 2.7

2016-07-07 Thread Billie Rinaldi
I'm in favor of bumping our Hadoop version to 2.7. We are already on the
same htrace version as Hadoop 2.7. (The discussion in ACCUMULO-4171 is
relevant to Hadoop 2.8 and later.)

Billie

On Thu, Jul 7, 2016 at 2:20 PM, Christopher <ctubb...@apache.org> wrote:

> Thinking about https://issues.apache.org/jira/browse/ACCUMULO-4171, I'm of
> the opinion that we should probably bump our Hadoop version to 2.7 and
> HTrace version to what Hadoop is using, to keep them in sync.
>
> Does anybody disagree?
>


[DISCUSS] Htrace4, Hadoop 2.7

2016-07-07 Thread Christopher
Thinking about https://issues.apache.org/jira/browse/ACCUMULO-4171, I'm of
the opinion that we should probably bump our Hadoop version to 2.7 and
HTrace version to what Hadoop is using, to keep them in sync.

Does anybody disagree?


Re: Minimum supported Hadoop?

2016-06-02 Thread Christopher
Yes, I believe so. Good point.

On Fri, Jun 3, 2016, 00:15 Mike Drob <md...@mdrob.com> wrote:

> The ITs pass on certain Java versions, right? We can doc that and iterate
> from there.
> On Thu, Jun 2, 2016 at 11:09 PM Christopher <ctubb...@apache.org> wrote:
>
> > Yeah, me either... but it does raise the question: if we can't provide
> > proper Kerberos support (ITs don't even pass, IIRC) using a dependency
> > older than 2.6.1, how much can we really say 1.7.2 works on those older
> > versions?
> >
> > On Thu, Jun 2, 2016 at 11:50 PM Mike Drob <md...@mdrob.com> wrote:
> >
> > > I would not feel comfortable bumping the min req Hadoop in 1.7.2
> > >
> > > On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org>
> wrote:
> > >
> > > > Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that
> much
> > > > different in terms of support, so I figured go with the minimum we
> can
> > > test
> > > > with. FWIW, this affects 1.7.2 also, but i figured a bump there would
> > be
> > > > more controversial.
> > > >
> > > > On Wed, Jun 1, 2016, 19:22 Josh Elser <josh.el...@gmail.com> wrote:
> > > >
> > > > > For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
> > > > > Hadoop didn't do anything screwy that they shouldn't have in a
> > > > > maintenance release...)
> > > > >
> > > > > I have not looked at deltas between 2.6.1 and 2.6.4
> > > > >
> > > > > Christopher wrote:
> > > > > > I was looking at the recently bumped tickets and noticed
> > > > > > https://issues.apache.org/jira/browse/ACCUMULO-4150
> > > > > >
> > > > > > It seems to me that we may want to make our minimum supported
> > Hadoop
> > > > > > version 2.6.1, at least for the 1.8.0 release.
> > > > > >
> > > > > > That's not to say it won't work with other versions... just that
> > it's
> > > > not
> > > > > > something we're testing for in the latest release, and isn't
> > > > recommended
> > > > > > (and possibly, a downstream packager may need to patch Accumulo
> to
> > > > > support
> > > > > > the older version).
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Minimum supported Hadoop?

2016-06-02 Thread Mike Drob
The ITs pass on certain Java versions, right? We can doc that and iterate
from there.
On Thu, Jun 2, 2016 at 11:09 PM Christopher <ctubb...@apache.org> wrote:

> Yeah, me either... but it does raise the question: if we can't provide
> proper Kerberos support (ITs don't even pass, IIRC) using a dependency
> older than 2.6.1, how much can we really say 1.7.2 works on those older
> versions?
>
> On Thu, Jun 2, 2016 at 11:50 PM Mike Drob <md...@mdrob.com> wrote:
>
> > I would not feel comfortable bumping the min req Hadoop in 1.7.2
> >
> > On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org> wrote:
> >
> > > Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that much
> > > different in terms of support, so I figured go with the minimum we can
> > test
> > > with. FWIW, this affects 1.7.2 also, but i figured a bump there would
> be
> > > more controversial.
> > >
> > > On Wed, Jun 1, 2016, 19:22 Josh Elser <josh.el...@gmail.com> wrote:
> > >
> > > > For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
> > > > Hadoop didn't do anything screwy that they shouldn't have in a
> > > > maintenance release...)
> > > >
> > > > I have not looked at deltas between 2.6.1 and 2.6.4
> > > >
> > > > Christopher wrote:
> > > > > I was looking at the recently bumped tickets and noticed
> > > > > https://issues.apache.org/jira/browse/ACCUMULO-4150
> > > > >
> > > > > It seems to me that we may want to make our minimum supported
> Hadoop
> > > > > version 2.6.1, at least for the 1.8.0 release.
> > > > >
> > > > > That's not to say it won't work with other versions... just that
> it's
> > > not
> > > > > something we're testing for in the latest release, and isn't
> > > recommended
> > > > > (and possibly, a downstream packager may need to patch Accumulo to
> > > > support
> > > > > the older version).
> > > > >
> > > >
> > >
> >
>


Re: Minimum supported Hadoop?

2016-06-02 Thread Christopher
Yeah, me either... but it does raise the question: if we can't provide
proper Kerberos support (ITs don't even pass, IIRC) using a dependency
older than 2.6.1, how much can we really say 1.7.2 works on those older
versions?

On Thu, Jun 2, 2016 at 11:50 PM Mike Drob <md...@mdrob.com> wrote:

> I would not feel comfortable bumping the min req Hadoop in 1.7.2
>
> On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org> wrote:
>
> > Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that much
> > different in terms of support, so I figured go with the minimum we can
> test
> > with. FWIW, this affects 1.7.2 also, but i figured a bump there would be
> > more controversial.
> >
> > On Wed, Jun 1, 2016, 19:22 Josh Elser <josh.el...@gmail.com> wrote:
> >
> > > For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
> > > Hadoop didn't do anything screwy that they shouldn't have in a
> > > maintenance release...)
> > >
> > > I have not looked at deltas between 2.6.1 and 2.6.4
> > >
> > > Christopher wrote:
> > > > I was looking at the recently bumped tickets and noticed
> > > > https://issues.apache.org/jira/browse/ACCUMULO-4150
> > > >
> > > > It seems to me that we may want to make our minimum supported Hadoop
> > > > version 2.6.1, at least for the 1.8.0 release.
> > > >
> > > > That's not to say it won't work with other versions... just that it's
> > not
> > > > something we're testing for in the latest release, and isn't
> > recommended
> > > > (and possibly, a downstream packager may need to patch Accumulo to
> > > support
> > > > the older version).
> > > >
> > >
> >
>


Re: Minimum supported Hadoop?

2016-06-02 Thread Mike Drob
I would not feel comfortable bumping the min req Hadoop in 1.7.2

On Wed, Jun 1, 2016 at 6:39 PM Christopher <ctubb...@apache.org> wrote:

> Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that much
> different in terms of support, so I figured go with the minimum we can test
> with. FWIW, this affects 1.7.2 also, but i figured a bump there would be
> more controversial.
>
> On Wed, Jun 1, 2016, 19:22 Josh Elser <josh.el...@gmail.com> wrote:
>
> > For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
> > Hadoop didn't do anything screwy that they shouldn't have in a
> > maintenance release...)
> >
> > I have not looked at deltas between 2.6.1 and 2.6.4
> >
> > Christopher wrote:
> > > I was looking at the recently bumped tickets and noticed
> > > https://issues.apache.org/jira/browse/ACCUMULO-4150
> > >
> > > It seems to me that we may want to make our minimum supported Hadoop
> > > version 2.6.1, at least for the 1.8.0 release.
> > >
> > > That's not to say it won't work with other versions... just that it's
> not
> > > something we're testing for in the latest release, and isn't
> recommended
> > > (and possibly, a downstream packager may need to patch Accumulo to
> > support
> > > the older version).
> > >
> >
>


Re: Hadoop

2016-06-02 Thread Christopher
Well, I wouldn't be surprised with some issues across a fedup (which is a
way to do an upgrade between Fedora versions), but it should have been
stable with normal/routine yum/dnf upgrades.

Were you using the Fedora-provided packages, or the BigTop ones? Or another
set?

On Thu, Jun 2, 2016 at 5:04 PM Corey Nolet <cjno...@gmail.com> wrote:

> This may not be directly related but I've noticed Hadoop packages have been
> not uninstalling/updating well the past year or so. The last couple times
> I've run fedup, I've had to go back in manually and remove/update a bunch
> of the Hadoop packages like Zookeeper and Parquet.
>
> On Thu, Jun 2, 2016 at 4:59 PM, Christopher <ctubb...@fedoraproject.org>
> wrote:
>
> > That first post was intended for the Fedora developer list. Apologies for
> > sending to the wrong list.
> >
> > If anybody is curious, it seems the Fedora community support around
> Hadoop
> > and Big Data is really dying... the packager for Flume and HTrace has
> > abandoned their efforts to package for Fedora, and now it looks like the
> > Hadoop package maintainer abandoned Hadoop, leaving Accumulo with
> > unsatisfied dependencies. This is actually kind of a sad state of
> affairs,
> > because better packaging downstream could really help users, and expose
> > more ways to improve the upstream products.
> >
> > As it stands, I think there is a disconnect between the upstream
> > communities and the downstream packagers in the Big Data space which
> > includes Accumulo. I would love to see more interest in better packaging
> > for downstream users through these existing downstream packager
> communities
> > (Homebrew, Fedora, Debian, EPEL, Ubuntu, etc.), and I would love to see
> > more volunteers come from these downstream communities to make
> improvements
> > upstream.
> >
> > As an upstream community, I believe the responsibility is for us to reach
> > down first, rather than wait for them to come to us. I've tried to do
> that
> > within Fedora, with the hope that others would follow for the downstream
> > communities they care about. Unfortunately, things haven't turned out how
> > I'd have preferred, but I'm still hopeful. If there is anybody interested
> > in downstream community packaging, let me know if I can help you get
> > started.
> >
> > On Thu, Jun 2, 2016 at 4:28 PM Christopher <ctubb...@fedoraproject.org>
> > wrote:
> >
> > > Sorry, wrong list.
> > >
> > > On Thu, Jun 2, 2016 at 4:20 PM Christopher <ctubb...@fedoraproject.org
> >
> > > wrote:
> > >
> > >> So, it would seem at some point, without me noticing (certainly my
> > fault,
> > >> for not paying attention enough), the Hadoop packages got orphaned
> > and/or
> > >> retired? in Fedora.
> > >>
> > >> This is a big problem for me, because the main package I work on is
> > >> dependent upon Hadoop.
> > >>
> > >> What's the state of Hadoop in Fedora these days? Are there packaging
> > >> problems? Not enough support from upstream Apache community? Missing
> > >> dependencies in Fedora? Not enough time to work on it? No interest
> from
> > >> users?
> > >>
> > >> Whatever the issue is... I'd like to help wherever I can... I'd like
> to
> > >> keep this stuff going.
> > >>
> > >
> >
>


Re: Hadoop

2016-06-02 Thread Corey Nolet
This may not be directly related but I've noticed Hadoop packages have been
not uninstalling/updating well the past year or so. The last couple times
I've run fedup, I've had to go back in manually and remove/update a bunch
of the Hadoop packages like Zookeeper and Parquet.

On Thu, Jun 2, 2016 at 4:59 PM, Christopher <ctubb...@fedoraproject.org>
wrote:

> That first post was intended for the Fedora developer list. Apologies for
> sending to the wrong list.
>
> If anybody is curious, it seems the Fedora community support around Hadoop
> and Big Data is really dying... the packager for Flume and HTrace has
> abandoned their efforts to package for Fedora, and now it looks like the
> Hadoop package maintainer abandoned Hadoop, leaving Accumulo with
> unsatisfied dependencies. This is actually kind of a sad state of affairs,
> because better packaging downstream could really help users, and expose
> more ways to improve the upstream products.
>
> As it stands, I think there is a disconnect between the upstream
> communities and the downstream packagers in the Big Data space which
> includes Accumulo. I would love to see more interest in better packaging
> for downstream users through these existing downstream packager communities
> (Homebrew, Fedora, Debian, EPEL, Ubuntu, etc.), and I would love to see
> more volunteers come from these downstream communities to make improvements
> upstream.
>
> As an upstream community, I believe the responsibility is for us to reach
> down first, rather than wait for them to come to us. I've tried to do that
> within Fedora, with the hope that others would follow for the downstream
> communities they care about. Unfortunately, things haven't turned out how
> I'd have preferred, but I'm still hopeful. If there is anybody interested
> in downstream community packaging, let me know if I can help you get
> started.
>
> On Thu, Jun 2, 2016 at 4:28 PM Christopher <ctubb...@fedoraproject.org>
> wrote:
>
> > Sorry, wrong list.
> >
> > On Thu, Jun 2, 2016 at 4:20 PM Christopher <ctubb...@fedoraproject.org>
> > wrote:
> >
> >> So, it would seem at some point, without me noticing (certainly my
> fault,
> >> for not paying attention enough), the Hadoop packages got orphaned
> and/or
> >> retired? in Fedora.
> >>
> >> This is a big problem for me, because the main package I work on is
> >> dependent upon Hadoop.
> >>
> >> What's the state of Hadoop in Fedora these days? Are there packaging
> >> problems? Not enough support from upstream Apache community? Missing
> >> dependencies in Fedora? Not enough time to work on it? No interest from
> >> users?
> >>
> >> Whatever the issue is... I'd like to help wherever I can... I'd like to
> >> keep this stuff going.
> >>
> >
>


Re: Hadoop

2016-06-02 Thread Christopher
That first post was intended for the Fedora developer list. Apologies for
sending to the wrong list.

If anybody is curious, it seems the Fedora community support around Hadoop
and Big Data is really dying... the packager for Flume and HTrace has
abandoned their efforts to package for Fedora, and now it looks like the
Hadoop package maintainer abandoned Hadoop, leaving Accumulo with
unsatisfied dependencies. This is actually kind of a sad state of affairs,
because better packaging downstream could really help users, and expose
more ways to improve the upstream products.

As it stands, I think there is a disconnect between the upstream
communities and the downstream packagers in the Big Data space which
includes Accumulo. I would love to see more interest in better packaging
for downstream users through these existing downstream packager communities
(Homebrew, Fedora, Debian, EPEL, Ubuntu, etc.), and I would love to see
more volunteers come from these downstream communities to make improvements
upstream.

As an upstream community, I believe the responsibility is for us to reach
down first, rather than wait for them to come to us. I've tried to do that
within Fedora, with the hope that others would follow for the downstream
communities they care about. Unfortunately, things haven't turned out how
I'd have preferred, but I'm still hopeful. If there is anybody interested
in downstream community packaging, let me know if I can help you get
started.

On Thu, Jun 2, 2016 at 4:28 PM Christopher <ctubb...@fedoraproject.org>
wrote:

> Sorry, wrong list.
>
> On Thu, Jun 2, 2016 at 4:20 PM Christopher <ctubb...@fedoraproject.org>
> wrote:
>
>> So, it would seem at some point, without me noticing (certainly my fault,
>> for not paying attention enough), the Hadoop packages got orphaned and/or
>> retired? in Fedora.
>>
>> This is a big problem for me, because the main package I work on is
>> dependent upon Hadoop.
>>
>> What's the state of Hadoop in Fedora these days? Are there packaging
>> problems? Not enough support from upstream Apache community? Missing
>> dependencies in Fedora? Not enough time to work on it? No interest from
>> users?
>>
>> Whatever the issue is... I'd like to help wherever I can... I'd like to
>> keep this stuff going.
>>
>


Re: Hadoop

2016-06-02 Thread Christopher
Sorry, wrong list.

On Thu, Jun 2, 2016 at 4:20 PM Christopher <ctubb...@fedoraproject.org>
wrote:

> So, it would seem at some point, without me noticing (certainly my fault,
> for not paying attention enough), the Hadoop packages got orphaned and/or
> retired? in Fedora.
>
> This is a big problem for me, because the main package I work on is
> dependent upon Hadoop.
>
> What's the state of Hadoop in Fedora these days? Are there packaging
> problems? Not enough support from upstream Apache community? Missing
> dependencies in Fedora? Not enough time to work on it? No interest from
> users?
>
> Whatever the issue is... I'd like to help wherever I can... I'd like to
> keep this stuff going.
>


Hadoop

2016-06-02 Thread Christopher
So, it would seem at some point, without me noticing (certainly my fault,
for not paying attention enough), the Hadoop packages got orphaned and/or
retired? in Fedora.

This is a big problem for me, because the main package I work on is
dependent upon Hadoop.

What's the state of Hadoop in Fedora these days? Are there packaging
problems? Not enough support from upstream Apache community? Missing
dependencies in Fedora? Not enough time to work on it? No interest from
users?

Whatever the issue is... I'd like to help wherever I can... I'd like to
keep this stuff going.


Re: Minimum supported Hadoop?

2016-06-01 Thread Christopher
Perhaps. But the tests pass with 2.6.1, I think. Shouldn't be that much
different in terms of support, so I figured go with the minimum we can test
with. FWIW, this affects 1.7.2 also, but i figured a bump there would be
more controversial.

On Wed, Jun 1, 2016, 19:22 Josh Elser <josh.el...@gmail.com> wrote:

> For that reasoning, wouldn't bumping to 2.6.4 be better (as long as
> Hadoop didn't do anything screwy that they shouldn't have in a
> maintenance release...)
>
> I have not looked at deltas between 2.6.1 and 2.6.4
>
> Christopher wrote:
> > I was looking at the recently bumped tickets and noticed
> > https://issues.apache.org/jira/browse/ACCUMULO-4150
> >
> > It seems to me that we may want to make our minimum supported Hadoop
> > version 2.6.1, at least for the 1.8.0 release.
> >
> > That's not to say it won't work with other versions... just that it's not
> > something we're testing for in the latest release, and isn't recommended
> > (and possibly, a downstream packager may need to patch Accumulo to
> support
> > the older version).
> >
>


Re: Minimum supported Hadoop?

2016-06-01 Thread Josh Elser
For that reasoning, wouldn't bumping to 2.6.4 be better (as long as 
Hadoop didn't do anything screwy that they shouldn't have in a 
maintenance release...)


I have not looked at deltas between 2.6.1 and 2.6.4

Christopher wrote:

I was looking at the recently bumped tickets and noticed
https://issues.apache.org/jira/browse/ACCUMULO-4150

It seems to me that we may want to make our minimum supported Hadoop
version 2.6.1, at least for the 1.8.0 release.

That's not to say it won't work with other versions... just that it's not
something we're testing for in the latest release, and isn't recommended
(and possibly, a downstream packager may need to patch Accumulo to support
the older version).



Minimum supported Hadoop?

2016-06-01 Thread Christopher
I was looking at the recently bumped tickets and noticed
https://issues.apache.org/jira/browse/ACCUMULO-4150

It seems to me that we may want to make our minimum supported Hadoop
version 2.6.1, at least for the 1.8.0 release.

That's not to say it won't work with other versions... just that it's not
something we're testing for in the latest release, and isn't recommended
(and possibly, a downstream packager may need to patch Accumulo to support
the older version).


Re: Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Josh Elser
Looks like I had just done an upgrade on the box which bumped some JDK 
versions, but I didn't update Jenkins to point at the new installations. 
Should be taken care of now.


Christopher wrote:

Should be fixed now. Looks like I had already done this with the 1.8 branch
at one point, but not in the old ones.

On Fri, May 27, 2016 at 11:58 AM Josh Elser<josh.el...@gmail.com>  wrote:


Thanks :)

Christopher wrote:

Oh, crap, I messed up jar sealing (didn't notice because we normally skip
tests during a release, and we previously only sealed jars during a
release). I will fix that later today.

On Fri, May 27, 2016 at 3:05 AM Christopher<ctubb...@apache.org>   wrote:


Hmm. I tested all of them. The PIAB builds were already failing... but
I'll look at it later today.

On Fri, May 27, 2016 at 2:13 AM Josh Elser<josh.el...@gmail.com>

wrote:

Correction, all of 1.6 and 1.7 appear busted after Christopher's asf

pom

version update.
On May 26, 2016 11:11 PM, "Josh Elser"<josh.el...@gmail.com>   wrote:


Looks like hadoop-1 is still having problems on 1.6.
-- Forwarded message --
From:<els...@apache.org>
Date: May 26, 2016 8:40 PM
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
To:<josh.el...@gmail.com>
Cc:

Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:

Check console output at


https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/

to view the results.





Re: Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Christopher
Should be fixed now. Looks like I had already done this with the 1.8 branch
at one point, but not in the old ones.

On Fri, May 27, 2016 at 11:58 AM Josh Elser <josh.el...@gmail.com> wrote:

> Thanks :)
>
> Christopher wrote:
> > Oh, crap, I messed up jar sealing (didn't notice because we normally skip
> > tests during a release, and we previously only sealed jars during a
> > release). I will fix that later today.
> >
> > On Fri, May 27, 2016 at 3:05 AM Christopher<ctubb...@apache.org>  wrote:
> >
> >> Hmm. I tested all of them. The PIAB builds were already failing... but
> >> I'll look at it later today.
> >>
> >> On Fri, May 27, 2016 at 2:13 AM Josh Elser<josh.el...@gmail.com>
> wrote:
> >>
> >>> Correction, all of 1.6 and 1.7 appear busted after Christopher's asf
> pom
> >>> version update.
> >>> On May 26, 2016 11:11 PM, "Josh Elser"<josh.el...@gmail.com>  wrote:
> >>>
> >>>> Looks like hadoop-1 is still having problems on 1.6.
> >>>> -- Forwarded message --
> >>>> From:<els...@apache.org>
> >>>> Date: May 26, 2016 8:40 PM
> >>>> Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
> >>>> To:<josh.el...@gmail.com>
> >>>> Cc:
> >>>>
> >>>> Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:
> >>>>
> >>>> Check console output at
> >>>>
> >>>
> https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/
> >>>> to view the results.
> >>>>
> >
>


Re: Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Josh Elser

Thanks :)

Christopher wrote:

Oh, crap, I messed up jar sealing (didn't notice because we normally skip
tests during a release, and we previously only sealed jars during a
release). I will fix that later today.

On Fri, May 27, 2016 at 3:05 AM Christopher<ctubb...@apache.org>  wrote:


Hmm. I tested all of them. The PIAB builds were already failing... but
I'll look at it later today.

On Fri, May 27, 2016 at 2:13 AM Josh Elser<josh.el...@gmail.com>  wrote:


Correction, all of 1.6 and 1.7 appear busted after Christopher's asf pom
version update.
On May 26, 2016 11:11 PM, "Josh Elser"<josh.el...@gmail.com>  wrote:


Looks like hadoop-1 is still having problems on 1.6.
-- Forwarded message --
From:<els...@apache.org>
Date: May 26, 2016 8:40 PM
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
To:<josh.el...@gmail.com>
Cc:

Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:

Check console output at


https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/

to view the results.





Re: Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Christopher
Oh, crap, I messed up jar sealing (didn't notice because we normally skip
tests during a release, and we previously only sealed jars during a
release). I will fix that later today.
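
For anyone curious, the sealing itself is the standard maven-jar-plugin
manifest entry; a sketch (the exact profile wiring in our poms may
differ):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifestEntries>
        <!-- seal all packages in the jar -->
        <Sealed>true</Sealed>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>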

On Fri, May 27, 2016 at 3:05 AM Christopher <ctubb...@apache.org> wrote:

> Hmm. I tested all of them. The PIAB builds were already failing... but
> I'll look at it later today.
>
> On Fri, May 27, 2016 at 2:13 AM Josh Elser <josh.el...@gmail.com> wrote:
>
>> Correction, all of 1.6 and 1.7 appear busted after Christopher's asf pom
>> version update.
>> On May 26, 2016 11:11 PM, "Josh Elser" <josh.el...@gmail.com> wrote:
>>
>> > Looks like hadoop-1 is still having problems on 1.6.
>> > -- Forwarded message --
>> > From: <els...@apache.org>
>> > Date: May 26, 2016 8:40 PM
>> > Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
>> > To: <josh.el...@gmail.com>
>> > Cc:
>> >
>> > Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:
>> >
>> > Check console output at
>> >
>> https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/
>> > to view the results.
>> >
>>
>


Re: Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Christopher
Hmm. I tested all of them. The PIAB builds were already failing... but I'll
look at it later today.

On Fri, May 27, 2016 at 2:13 AM Josh Elser <josh.el...@gmail.com> wrote:

> Correction, all of 1.6 and 1.7 appear busted after Christopher's asf pom
> version update.
> On May 26, 2016 11:11 PM, "Josh Elser" <josh.el...@gmail.com> wrote:
>
> > Looks like hadoop-1 is still having problems on 1.6.
> > -- Forwarded message --
> > From: <els...@apache.org>
> > Date: May 26, 2016 8:40 PM
> > Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
> > To: <josh.el...@gmail.com>
> > Cc:
> >
> > Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:
> >
> > Check console output at
> >
> https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/
> > to view the results.
> >
>


Re: Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Josh Elser
Correction, all of 1.6 and 1.7 appear busted after Christopher's asf pom
version update.
On May 26, 2016 11:11 PM, "Josh Elser" <josh.el...@gmail.com> wrote:

> Looks like hadoop-1 is still having problems on 1.6.
> -- Forwarded message --
> From: <els...@apache.org>
> Date: May 26, 2016 8:40 PM
> Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
> To: <josh.el...@gmail.com>
> Cc:
>
> Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:
>
> Check console output at
> https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/
> to view the results.
>


Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!

2016-05-27 Thread Josh Elser
Looks like hadoop-1 is still having problems on 1.6.
-- Forwarded message --
From: <els...@apache.org>
Date: May 26, 2016 8:40 PM
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure!
To: <josh.el...@gmail.com>
Cc:

Accumulo-Test-1.6-Hadoop-1 - Build # 1032 - Failure:

Check console output at
https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1032/
to view the results.


Fwd: Accumulo-Test-1.6-Hadoop-1 - Build # 1029 - Unstable!

2016-05-20 Thread Josh Elser
Looks like the 2.1 upgrade failed on 1.6 with Hadoop-1. Did you happen 
to notice this, Dave?


 Original Message 
Subject: Accumulo-Test-1.6-Hadoop-1 - Build # 1029 - Unstable!
Date: Fri, 20 May 2016 17:29:15 + (UTC)
From: els...@apache.org
To: josh.el...@gmail.com

Accumulo-Test-1.6-Hadoop-1 - Build # 1029 - Unstable:

Check console output at 
https://secure.penguinsinabox.com/jenkins/job/Accumulo-Test-1.6-Hadoop-1/1029/ 
to view the results.


Re: Accumulo folks at Hadoop Summit San Jose

2016-05-19 Thread Eric Newton
I may be in the area that week.  If I am, I'll try go get down there.

On Thu, May 19, 2016 at 3:25 PM, Josh Elser <josh.el...@gmail.com> wrote:

> Try to make the meetup that Billie is setting up and be sure to introduce
> yourself :)
>
> I'll be there this year (if that wasn't obvious by me asking)
>
>
> Claudia Rose wrote:
>
>> I'll be there although I don't know the other "folks" yet.
>>
>> -Original Message-
>> From: Josh Elser [mailto:josh.el...@gmail.com]
>> Sent: Thursday, May 19, 2016 11:01 AM
>> To: dev
>> Cc: u...@accumulo.apache.org
>> Subject: Accumulo folks at Hadoop Summit San Jose
>>
>> Out of curiosity, are there going to be any Accumulo-folks at Hadoop
>> Summit in San Jose, CA at the end of June?
>>
>> - Josh
>>
>>


Re: Accumulo folks at Hadoop Summit San Jose

2016-05-19 Thread Josh Elser
Try to make the meetup that Billie is setting up and be sure to 
introduce yourself :)


I'll be there this year (if that wasn't obvious by me asking)

Claudia Rose wrote:

I'll be there although I don't know the other "folks" yet.

-Original Message-
From: Josh Elser [mailto:josh.el...@gmail.com]
Sent: Thursday, May 19, 2016 11:01 AM
To: dev
Cc: u...@accumulo.apache.org
Subject: Accumulo folks at Hadoop Summit San Jose

Out of curiosity, are there going to be any Accumulo-folks at Hadoop Summit in 
San Jose, CA at the end of June?

- Josh



RE: Accumulo folks at Hadoop Summit San Jose

2016-05-19 Thread Claudia Rose
I'll be there although I don't know the other "folks" yet.

-Original Message-
From: Josh Elser [mailto:josh.el...@gmail.com] 
Sent: Thursday, May 19, 2016 11:01 AM
To: dev
Cc: u...@accumulo.apache.org
Subject: Accumulo folks at Hadoop Summit San Jose

Out of curiosity, are there going to be any Accumulo-folks at Hadoop Summit in 
San Jose, CA at the end of June?

- Josh



Re: Accumulo folks at Hadoop Summit San Jose

2016-05-19 Thread Billie Rinaldi
I'll be there! I'm looking into getting space for an Accumulo meetup
(details TBD).

On Thu, May 19, 2016 at 11:01 AM, Josh Elser <josh.el...@gmail.com> wrote:

> Out of curiosity, are there going to be any Accumulo-folks at Hadoop
> Summit in San Jose, CA at the end of June?
>
> - Josh
>


Re: Accumulo folks at Hadoop Summit San Jose

2016-05-19 Thread Adam Fuchs
I'll be there.

Adam

On Thu, May 19, 2016 at 11:01 AM, Josh Elser <josh.el...@gmail.com> wrote:

> Out of curiosity, are there going to be any Accumulo-folks at Hadoop
> Summit in San Jose, CA at the end of June?
>
> - Josh
>


Accumulo folks at Hadoop Summit San Jose

2016-05-19 Thread Josh Elser
Out of curiosity, are there going to be any Accumulo-folks at Hadoop 
Summit in San Jose, CA at the end of June?


- Josh


Re: Hadoop Summit 2015 Talk

2015-02-13 Thread THORMAN, ROBERT D
+3

v/r
Bob Thorman
Principal Big Data Engineer
AT&T Big Data CoE
2900 W. Plano Parkway
Plano, TX 75075
972-658-1714






On 2/12/15, 11:13 PM, Josh Elser josh.el...@gmail.com wrote:

FYI -- Billie and I have submitted a talk to Hadoop Summit 2015 in San
Jose, CA in June.

http://hadoopsummit.uservoice.com/forums/283260-committer-track/suggestions/7073993-a-year-in-the-life-of-apache-accumulo

I'd be overjoyed if anyone would vote for the talk if they'd like to see
it happen. Thanks!

- Josh



Hadoop Summit 2015 Talk

2015-02-12 Thread Josh Elser
FYI -- Billie and I have submitted a talk to Hadoop Summit 2015 in San 
Jose, CA in June.


http://hadoopsummit.uservoice.com/forums/283260-committer-track/suggestions/7073993-a-year-in-the-life-of-apache-accumulo

I'd be overjoyed if anyone would vote for the talk if they'd like to see 
it happen. Thanks!


- Josh


Fwd: [jira] [Commented] (ACCUMULO-1817) Use Hadoop Metrics2

2014-12-04 Thread Josh Elser

Bringing this out of the normal JIRA notification spam.

Your opinions are appreciated.

 Original Message 
Subject: [jira] [Commented] (ACCUMULO-1817) Use Hadoop Metrics2
Date: Wed, 3 Dec 2014 17:12:13 + (UTC)
From: Josh Elser (JIRA) j...@apache.org
Reply-To: j...@apache.org
To: notificati...@accumulo.apache.org


[ 
https://issues.apache.org/jira/browse/ACCUMULO-1817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233197#comment-14233197
]


Josh Elser commented on ACCUMULO-1817:
--

One big question that needs to be addressed here is what to do with the 
existing accumulo-metrics.xml file. Metrics2 expects a 
hadoop-metrics2*.properties file to configure the sources (emitters 
from Accumulo) and sinks (file-based logging, push to Ganglia, etc).


I think, long term, it would be better to solely move to the properties 
file (standardize on what's going on upstream), but that would cause a 
little bit of pain for users who expect to configure/control metrics via 
accumulo-metrics.xml.


Thoughts anyone?
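
For anyone following along, the Metrics2 side of that looks roughly like
the following (the "accumulo" prefix and the sink choice here are
illustrative assumptions, nothing is settled):

# hadoop-metrics2-accumulo.properties (sketch)
accumulo.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
accumulo.sink.file.filename=accumulo-metrics.out
# poll sources every 10 seconds
*.period=10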


Use Hadoop Metrics2
---

Key: ACCUMULO-1817
URL: https://issues.apache.org/jira/browse/ACCUMULO-1817
Project: Accumulo
Issue Type: New Feature
Reporter: Corey J. Nolet
Assignee: Billie Rinaldi
Labels: proposed
Fix For: 1.7.0
Time Spent: 0.5h
Remaining Estimate: 0h

We currently expose JMX and it's possible (with external code) to bridge the 
JMX to solutions like Ganglia. It would be ideal if the integration were native 
and pluggable.
Turns out that Hadoop (hdfs, mapred) and HBase have direct metrics reporting 
to Ganglia through some nice code provided in Hadoop.
Look into the GangliaContext to see if we can implement Ganglia metrics 
reporting by Accumulo configuration alone.
References: http://wiki.apache.org/hadoop/GangliaMetrics, 
http://hbase.apache.org/metrics.html




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Dropping Hadoop 1 for 1.7.0

2014-11-24 Thread Christopher
Did we make a decision on this?


--
Christopher L Tubbs II
http://gravatar.com/ctubbsii

On Sun, Nov 16, 2014 at 11:16 PM, Christopher ctubb...@apache.org wrote:

 On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com wrote:

 Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
 anymore.

 If we haven't decided to do so already, I'd like to suggest doing so.

 - Josh


 I thought we had discussed this for 2.0.0, but that was when we thought we
 might release a 2.0 instead of a 1.7. I'm okay with doing it in 1.7, but
 with some minor reservations. Doing this will mean that the last supported
 Accumulo line for Hadoop 1 users is 1.6.x. This will mean that if we want
 to support users on Hadoop 1, we'll have to maintain the 1.6.x branch
 longer, vs. maintaining support with 1.x (1.7, maybe 1.8, etc.)

 So, doing this will involve a choice in the future: continue maintaining
 1.6 for perhaps longer than we might have otherwise done, or leave those
 users in the dust (if there are any).

 --
 Christopher L Tubbs II
 http://gravatar.com/ctubbsii




Re: Dropping Hadoop 1 for 1.7.0

2014-11-24 Thread Sean Busbey
I think a little over a week is a fair window and AFAICT we have lazy
consensus to drop it.

-Sean


On Mon, Nov 24, 2014 at 12:07 PM, Christopher ctubb...@apache.org wrote:

 Did we make a decision on this?


 --
 Christopher L Tubbs II
 http://gravatar.com/ctubbsii

 On Sun, Nov 16, 2014 at 11:16 PM, Christopher ctubb...@apache.org wrote:

  On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com
 wrote:
 
  Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
  anymore.
 
  If we haven't decided to do so already, I'd like to suggest doing so.
 
  - Josh
 
 
  I thought we had discussed this for 2.0.0, but that was when we thought
 we
  might release a 2.0 instead of a 1.7. I'm okay with doing it in 1.7, but
  with some minor reservations. Doing this will mean that the last
 supported
  Accumulo line for Hadoop 1 users is 1.6.x. This will mean that if we want
  to support users on Hadoop 1, we'll have to maintain the 1.6.x branch
  longer, vs. maintaining support with 1.x (1.7, maybe 1.8, etc.)
 
  So, doing this will involve a choice in the future: continue maintaining
  1.6 for perhaps longer than we might have otherwise done, or leave those
  users in the dust (if there are any).
 
  --
  Christopher L Tubbs II
  http://gravatar.com/ctubbsii
 
 




-- 
Sean


Re: Dropping Hadoop 1 for 1.7.0

2014-11-24 Thread Josh Elser

Ya -- I've been holding this in my back pocket.

I'll open up an issue on JIRA later today if we don't already have one.

Sean Busbey wrote:

I think a little over a week is a fair window and AFAICT we have lazy
consensus to drop it.

-Sean


On Mon, Nov 24, 2014 at 12:07 PM, Christopher ctubb...@apache.org wrote:

Did we make a decision on this?


--
Christopher L Tubbs II
http://gravatar.com/ctubbsii

On Sun, Nov 16, 2014 at 11:16 PM, Christopher ctubb...@apache.org wrote:

  On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com wrote:
 
  Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
  anymore.
 
  If we haven't decided to do so already, I'd like to suggest
doing so.
 
  - Josh
 
 
  I thought we had discussed this for 2.0.0, but that was when we
thought we
  might release a 2.0 instead of a 1.7. I'm okay with doing it in
1.7, but
  with some minor reservations. Doing this will mean that the last
supported
  Accumulo line for Hadoop 1 users is 1.6.x. This will mean that if
we want
  to support users on Hadoop 1, we'll have to maintain the 1.6.x branch
  longer, vs. maintaining support with 1.x (1.7, maybe 1.8, etc.)
 
  So, doing this will involve a choice in the future: continue
maintaining
  1.6 for perhaps longer than we might have otherwise done, or
leave those
  users in the dust (if there are any).
 
  --
  Christopher L Tubbs II
  http://gravatar.com/ctubbsii
 
 




--
Sean


Re: Dropping Hadoop 1 for 1.7.0

2014-11-24 Thread Christopher
Okay.


--
Christopher L Tubbs II
http://gravatar.com/ctubbsii

On Mon, Nov 24, 2014 at 1:40 PM, Josh Elser josh.el...@gmail.com wrote:

 Ya -- I've been holding this in my back pocket.

 I'll open up an issue on JIRA later today if we don't already have one.

 Sean Busbey wrote:

 I think a little over a week is a fair window and AFAICT we have lazy
 consensus to drop it.

 -Sean


 On Mon, Nov 24, 2014 at 12:07 PM, Christopher ctubb...@apache.org wrote:

 Did we make a decision on this?


 --
 Christopher L Tubbs II
 http://gravatar.com/ctubbsii

 On Sun, Nov 16, 2014 at 11:16 PM, Christopher ctubb...@apache.org wrote:

   On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com wrote:
  
   Have we already decided to drop Hadoop 1 for 1.7.0? I can't
 remember
   anymore.
  
   If we haven't decided to do so already, I'd like to suggest
 doing so.
  
   - Josh
  
  
   I thought we had discussed this for 2.0.0, but that was when we
 thought we
   might release a 2.0 instead of a 1.7. I'm okay with doing it in
 1.7, but
   with some minor reservations. Doing this will mean that the last
 supported
   Accumulo line for Hadoop 1 users is 1.6.x. This will mean that if
 we want
   to support users on Hadoop 1, we'll have to maintain the 1.6.x
 branch
   longer, vs. maintaining support with 1.x (1.7, maybe 1.8, etc.)
  
   So, doing this will involve a choice in the future: continue
 maintaining
   1.6 for perhaps longer than we might have otherwise done, or
 leave those
   users in the dust (if there are any).
  
   --
   Christopher L Tubbs II
   http://gravatar.com/ctubbsii
  
  




 --
 Sean




Re: Dropping Hadoop 1 for 1.7.0

2014-11-16 Thread Sean Busbey
I'd be +1 to drop Hadoop 1 from 1.7.

IIRC, we had loose consensus to drop it for 2.0. I don't know how much of
that support was because 2.0 is a very major version change.

It's worth keeping in mind that jdk6 support is also dropped as of the 1.7
release, which means that for some people, making the jump from 1.6 will now
require 2 major underlying component changes.

We probably should have a vote (and copy user@) just to make sure it gets
enough visibility.

-- 
Sean
Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
anymore.

If we haven't decided to do so already, I'd like to suggest doing so.

- Josh


Re: Dropping Hadoop 1 for 1.7.0

2014-11-16 Thread Josh Elser

+cc user@

I don't believe a vote is necessary unless anyone actually has an 
objection. Let's have a discussion and if there are any concerns by devs 
at the end, we can call a vote.


Anyone should feel free to state their opinion on the subject.

Sean Busbey wrote:

I'd be +1 to drop Hadoop 1 from 1.7.

IIRC, we had loose consensus to drop it for 2.0. I don't know how much of
that support was because 2.0 is a very major version change.

It's worth keeping in mind that jdk6 support is also dropped as of the 1.7
release, which means that for some people, making the jump from 1.6 will now
require 2 major underlying component changes.

We probably should have a vote (and copy user@) just to make sure it gets
enough visibility.


 Josh Elser wrote:
 Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
 anymore.

 If we haven't decided to do so already, I'd like to suggest doing so.

 - Josh


Re: Dropping Hadoop 1 for 1.7.0

2014-11-16 Thread Christopher
On Sun, Nov 16, 2014 at 8:21 PM, Josh Elser josh.el...@gmail.com wrote:

 Have we already decided to drop Hadoop 1 for 1.7.0? I can't remember
 anymore.

 If we haven't decided to do so already, I'd like to suggest doing so.

 - Josh


I thought we had discussed this for 2.0.0, but that was when we thought we
might release a 2.0 instead of a 1.7. I'm okay with doing it in 1.7, but
with some minor reservations. Doing this will mean that the last supported
Accumulo line for Hadoop 1 users is 1.6.x. This will mean that if we want
to support users on Hadoop 1, we'll have to maintain the 1.6.x branch
longer, vs. maintaining support with 1.x (1.7, maybe 1.8, etc.)

So, doing this will involve a choice in the future: continue maintaining
1.6 for perhaps longer than we might have otherwise done, or leave those
users in the dust (if there are any).

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-16 Thread Mike Drob

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/#review42605
---



core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
https://reviews.apache.org/r/21282/#comment76421

Need to specify UTF8?



core/src/main/java/org/apache/accumulo/core/util/Base64.java
https://reviews.apache.org/r/21282/#comment76422

Make it clear that we are referring to versions of commons-codec here, not 
Accumulo.



core/src/main/java/org/apache/accumulo/core/util/Base64.java
https://reviews.apache.org/r/21282/#comment76424

Make class final.



core/src/main/java/org/apache/accumulo/core/util/Base64.java
https://reviews.apache.org/r/21282/#comment76423

Add comment to private constructor; I think findbugs will complain 
otherwise.



core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
https://reviews.apache.org/r/21282/#comment76426

Specify UTF8.


- Mike Drob


On May 9, 2014, 9:14 p.m., Sean Busbey wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/21282/
 ---
 
 (Updated May 9, 2014, 9:14 p.m.)
 
 
 Review request for accumulo.
 
 
 Bugs: ACCUMULO-2791
 https://issues.apache.org/jira/browse/ACCUMULO-2791
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
 
 * Provide a core.util Base64 class to enforce the non-chunked behavior we 
 rely on
 * Changed to use codec 1.4 'shaHex' method
 
 
 Diffs
 -
 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
  47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
  33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
  2fc606c919da120b70df8b524a6d733cd1c011c9 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
  54730efa532453d693fa09f641eef8b5fac7b812 
   
 core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
  c219b5ada99095f523b154da1edfcc4f89249210 
   core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
 ab3ea6855124ec85b30328ecdc5d3836cce11f63 
   core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
 9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
   core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
   core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
 cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
   core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
 451d4d65c15de319072982cd4a79ae2909b2d948 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
  6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
  25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
  a27fa4721f471158ef2e6faca5cce8437ae168c1 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
  c212c751743170aa890ac670b2bf071e3796af4a 
   
 core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
  13490e046d4def2d5e74548e07046e7d7c0bb7dc 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
  25009722dc3ac61585caf09f22d812612b873377 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
  1983470d2fb1657af1df429b5dedc7100ae62d2f 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
  1fa9b8f21239b81a3716080023e246f2450fbe0b 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
  72bd7eb95ac7648234e993b28b876f6c5a61ea89 
   pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
   server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
 34abb01fcd61d1fcca1a42c8670284e82cd46080 
   
 server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
  574952339c7183825c7b0560aabed3883f06cd11 
   
 server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
  b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
   
 server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
 30aa2eb3e5db385777271deda8ca09e6f01163b9 
   
 server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java

Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-16 Thread Sean Busbey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/
---

(Updated May 9, 2014, 11:39 p.m.)


Review request for accumulo.


Changes
---

updated test info for ITs passing.


Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791


Repository: accumulo


Description
---

ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

* Provide a core.util Base64 class to enforce the non-chunked behavior we
rely on (sketched below)
* Changed to use codec 1.4 'shaHex' method
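
A minimal sketch of the wrapper described in the first bullet (illustrative
only -- the method and class details here are assumptions; the real class is
the new Base64.java listed in the diff below):

    package org.apache.accumulo.core.util;

    // Sketch: pin non-chunked base64 output regardless of which
    // commons-codec version is on the classpath. Codec 1.4's
    // encodeBase64String() chunks output into 76-character lines
    // (CODEC-89) while 1.5+ does not, so callers go through a wrapper
    // that always passes isChunked=false explicitly.
    public final class Base64 {
      private Base64() {}

      public static byte[] encodeBase64(byte[] data) {
        return org.apache.commons.codec.binary.Base64
            .encodeBase64(data, false);
      }

      public static byte[] decodeBase64(byte[] data) {
        return org.apache.commons.codec.binary.Base64.decodeBase64(data);
      }
    }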


Diffs
-

  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
 47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
 33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
 2fc606c919da120b70df8b524a6d733cd1c011c9 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 54730efa532453d693fa09f641eef8b5fac7b812 
  
core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
 c219b5ada99095f523b154da1edfcc4f89249210 
  core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
ab3ea6855124ec85b30328ecdc5d3836cce11f63 
  core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
  core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
  core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
  core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
451d4d65c15de319072982cd4a79ae2909b2d948 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
 6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
 25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
 a27fa4721f471158ef2e6faca5cce8437ae168c1 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
 c212c751743170aa890ac670b2bf071e3796af4a 
  
core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
 13490e046d4def2d5e74548e07046e7d7c0bb7dc 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
 25009722dc3ac61585caf09f22d812612b873377 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
 1983470d2fb1657af1df429b5dedc7100ae62d2f 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
 1fa9b8f21239b81a3716080023e246f2450fbe0b 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
 72bd7eb95ac7648234e993b28b876f6c5a61ea89 
  pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
  server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
34abb01fcd61d1fcca1a42c8670284e82cd46080 
  
server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
 574952339c7183825c7b0560aabed3883f06cd11 
  
server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
 b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
  server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
30aa2eb3e5db385777271deda8ca09e6f01163b9 
  
server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java 
37ef5f1d00210185bc2b899a68bc32d00c85b35b 
  server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
577f5d516e29270b71099ef0fcf309a5ec36034c 
  
server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
 9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
  test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
41acce2575de571fd48fdf5759455be3c5698e33 
  test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java 
a35a19af152fc7d6406da3aeea429f1909a4590a 

Diff: https://reviews.apache.org/r/21282/diff/


Testing (updated)
---

unit tests passed. full ITs passed. manual cluster tests in progress.


Thanks,

Sean Busbey



Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-15 Thread Sean Busbey


 On May 9, 2014, 9:22 p.m., Mike Drob wrote:
  core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java,
   line 180
  https://reviews.apache.org/r/21282/diff/1/?file=577504#file577504line180
 
  Need to specify UTF8?

the library always does UTF8. Or do you mean in the docs?


- Sean


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/#review42605
---


On May 9, 2014, 9:14 p.m., Sean Busbey wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/21282/
 ---
 
 (Updated May 9, 2014, 9:14 p.m.)
 
 
 Review request for accumulo.
 
 
 Bugs: ACCUMULO-2791
 https://issues.apache.org/jira/browse/ACCUMULO-2791
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
 
 * Provide a core.util Base64 class to enforce the non-chunked behavior we 
 rely on
 * Changed to use codec 1.4 'shaHex' method
 
 
 Diffs
 -
 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
  47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
  33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
  2fc606c919da120b70df8b524a6d733cd1c011c9 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
  54730efa532453d693fa09f641eef8b5fac7b812 
   
 core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
  c219b5ada99095f523b154da1edfcc4f89249210 
   core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
 ab3ea6855124ec85b30328ecdc5d3836cce11f63 
   core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
 9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
   core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
   core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
 cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
   core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
 451d4d65c15de319072982cd4a79ae2909b2d948 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
  6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
  25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
  a27fa4721f471158ef2e6faca5cce8437ae168c1 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
  c212c751743170aa890ac670b2bf071e3796af4a 
   
 core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
  13490e046d4def2d5e74548e07046e7d7c0bb7dc 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
  25009722dc3ac61585caf09f22d812612b873377 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
  1983470d2fb1657af1df429b5dedc7100ae62d2f 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
  1fa9b8f21239b81a3716080023e246f2450fbe0b 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
  72bd7eb95ac7648234e993b28b876f6c5a61ea89 
   pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
   server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
 34abb01fcd61d1fcca1a42c8670284e82cd46080 
   
 server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
  574952339c7183825c7b0560aabed3883f06cd11 
   
 server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
  b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
   
 server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
 30aa2eb3e5db385777271deda8ca09e6f01163b9 
   
 server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
  37ef5f1d00210185bc2b899a68bc32d00c85b35b 
   server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
 577f5d516e29270b71099ef0fcf309a5ec36034c 
   
 server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
  9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
   
 test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
 41acce2575de571fd48fdf5759455be3c5698e33 
   test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java 
 a35a19af152fc7d6406da3aeea429f1909a4590a 
 
 Diff: https://reviews.apache.org

Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-15 Thread Sean Busbey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/
---

(Updated May 10, 2014, 7:01 a.m.)


Review request for accumulo.


Changes
---

tests all good


Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791


Repository: accumulo


Description
---

ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

* Provide a core.util Base64 class to enforce the non-chunked behavior we 
rely on
* Changed to use codec 1.4 'shaHex' method
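
For reference, the codec 1.4 digest call named in the second bullet is used
roughly like this (an illustrative snippet; the class name is made up):

    import org.apache.commons.codec.digest.DigestUtils;

    public class ShaHexExample {
      // shaHex exists in codec 1.4 and returns the SHA-1 digest as a
      // lowercase hex string; later codec releases add sha1Hex as the
      // preferred spelling.
      public static String digest(byte[] data) {
        return DigestUtils.shaHex(data);
      }
    }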


Diffs
-

  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
 47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
 33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
 2fc606c919da120b70df8b524a6d733cd1c011c9 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 54730efa532453d693fa09f641eef8b5fac7b812 
  
core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
 c219b5ada99095f523b154da1edfcc4f89249210 
  core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
ab3ea6855124ec85b30328ecdc5d3836cce11f63 
  core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
  core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
  core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
  core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
451d4d65c15de319072982cd4a79ae2909b2d948 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
 6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
 25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
 a27fa4721f471158ef2e6faca5cce8437ae168c1 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
 c212c751743170aa890ac670b2bf071e3796af4a 
  
core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
 13490e046d4def2d5e74548e07046e7d7c0bb7dc 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
 25009722dc3ac61585caf09f22d812612b873377 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
 1983470d2fb1657af1df429b5dedc7100ae62d2f 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
 1fa9b8f21239b81a3716080023e246f2450fbe0b 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
 72bd7eb95ac7648234e993b28b876f6c5a61ea89 
  pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
  server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
34abb01fcd61d1fcca1a42c8670284e82cd46080 
  
server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
 574952339c7183825c7b0560aabed3883f06cd11 
  
server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
 b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
  server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
30aa2eb3e5db385777271deda8ca09e6f01163b9 
  
server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java 
37ef5f1d00210185bc2b899a68bc32d00c85b35b 
  server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
577f5d516e29270b71099ef0fcf309a5ec36034c 
  
server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
 9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
  test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
41acce2575de571fd48fdf5759455be3c5698e33 
  test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java 
a35a19af152fc7d6406da3aeea429f1909a4590a 

Diff: https://reviews.apache.org/r/21282/diff/


Testing (updated)
---

unit tests passed. full ITs passed. manual cluster tests pass.


Thanks,

Sean Busbey



Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-14 Thread Mike Drob


 On May 9, 2014, 9:22 p.m., Mike Drob wrote:
  core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java,
   line 180
  https://reviews.apache.org/r/21282/diff/1/?file=577504#file577504line180
 
  Need to specify UTF8?
 
 Sean Busbey wrote:
 the library always does UTF8. Or do you mean in the docs?

I meant in the String constructor. We explicitly specify UTF-8 elsewhere.
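
Concretely, the difference in question (an illustrative fragment; the class
and variable names are made up):

    import java.nio.charset.Charset;

    public class CharsetExample {
      private static final Charset UTF8 = Charset.forName("UTF-8");

      public static String encodeToString(byte[] data) {
        byte[] b64 =
            org.apache.commons.codec.binary.Base64.encodeBase64(data);
        // new String(b64) would pick up the platform default charset;
        // naming UTF-8 explicitly keeps the result portable.
        return new String(b64, UTF8);
      }
    }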


- Mike


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/#review42605
---


On May 9, 2014, 11:39 p.m., Sean Busbey wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/21282/
 ---
 
 (Updated May 9, 2014, 11:39 p.m.)
 
 
 Review request for accumulo.
 
 
 Bugs: ACCUMULO-2791
 https://issues.apache.org/jira/browse/ACCUMULO-2791
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
 
 * Provide a core.util Base64 class to enforce the non-chunked behavior we 
 rely on
 * Changed to use codec 1.4 'shaHex' method
 
 
 Diffs
 -
 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
  47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
  33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
  2fc606c919da120b70df8b524a6d733cd1c011c9 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
  54730efa532453d693fa09f641eef8b5fac7b812 
   
 core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
  c219b5ada99095f523b154da1edfcc4f89249210 
   core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
 ab3ea6855124ec85b30328ecdc5d3836cce11f63 
   core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
 9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
   core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
   core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
 cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
   core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
 451d4d65c15de319072982cd4a79ae2909b2d948 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
  6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
  25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
  a27fa4721f471158ef2e6faca5cce8437ae168c1 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
  c212c751743170aa890ac670b2bf071e3796af4a 
   
 core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
  13490e046d4def2d5e74548e07046e7d7c0bb7dc 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
  25009722dc3ac61585caf09f22d812612b873377 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
  1983470d2fb1657af1df429b5dedc7100ae62d2f 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
  1fa9b8f21239b81a3716080023e246f2450fbe0b 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
  72bd7eb95ac7648234e993b28b876f6c5a61ea89 
   pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
   server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
 34abb01fcd61d1fcca1a42c8670284e82cd46080 
   
 server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
  574952339c7183825c7b0560aabed3883f06cd11 
   
 server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
  b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
   
 server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
 30aa2eb3e5db385777271deda8ca09e6f01163b9 
   
 server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
  37ef5f1d00210185bc2b899a68bc32d00c85b35b 
   server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
 577f5d516e29270b71099ef0fcf309a5ec36034c 
   
 server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
  9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
   
 test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
 41acce2575de571fd48fdf5759455be3c5698e33 
   test/src/test/java/org/apache/accumulo/test

Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-12 Thread Sean Busbey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/
---

(Updated May 9, 2014, 9:48 p.m.)


Review request for accumulo.


Changes
---

updates per feedback.


Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791


Repository: accumulo


Description
---

ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

* Provide a core.util Base64 class to enforce the non-chunked behavior we 
rely on
* Changed to use codec 1.4 'shaHex' method


Diffs (updated)
-

  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
 47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
 33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
 2fc606c919da120b70df8b524a6d733cd1c011c9 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 54730efa532453d693fa09f641eef8b5fac7b812 
  
core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
 c219b5ada99095f523b154da1edfcc4f89249210 
  core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
ab3ea6855124ec85b30328ecdc5d3836cce11f63 
  core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
  core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
  core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
  core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
451d4d65c15de319072982cd4a79ae2909b2d948 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
 6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
 25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
 a27fa4721f471158ef2e6faca5cce8437ae168c1 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
 c212c751743170aa890ac670b2bf071e3796af4a 
  
core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
 13490e046d4def2d5e74548e07046e7d7c0bb7dc 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
 25009722dc3ac61585caf09f22d812612b873377 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
 1983470d2fb1657af1df429b5dedc7100ae62d2f 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
 1fa9b8f21239b81a3716080023e246f2450fbe0b 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
 72bd7eb95ac7648234e993b28b876f6c5a61ea89 
  pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
  server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
34abb01fcd61d1fcca1a42c8670284e82cd46080 
  
server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
 574952339c7183825c7b0560aabed3883f06cd11 
  
server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
 b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
  server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
30aa2eb3e5db385777271deda8ca09e6f01163b9 
  
server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java 
37ef5f1d00210185bc2b899a68bc32d00c85b35b 
  server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
577f5d516e29270b71099ef0fcf309a5ec36034c 
  
server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
 9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
  test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
41acce2575de571fd48fdf5759455be3c5698e33 
  test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java 
a35a19af152fc7d6406da3aeea429f1909a4590a 

Diff: https://reviews.apache.org/r/21282/diff/


Testing
---

unit tests passed. ITs in progress. manual cluster tests in progress.


Thanks,

Sean Busbey



Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-12 Thread Mike Drob

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/#review42687
---

Ship it!


Ship It!

- Mike Drob


On May 10, 2014, 7:01 a.m., Sean Busbey wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/21282/
 ---
 
 (Updated May 10, 2014, 7:01 a.m.)
 
 
 Review request for accumulo.
 
 
 Bugs: ACCUMULO-2791
 https://issues.apache.org/jira/browse/ACCUMULO-2791
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
 
 * Provide a core.util Base64 class to enforce the non-chunked behavior we 
 rely on
 * Changed to use codec 1.4 'shaHex' method
 
 
 Diffs
 -
 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
  47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
  33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
  2fc606c919da120b70df8b524a6d733cd1c011c9 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
  54730efa532453d693fa09f641eef8b5fac7b812 
   
 core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
  c219b5ada99095f523b154da1edfcc4f89249210 
   core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
 ab3ea6855124ec85b30328ecdc5d3836cce11f63 
   core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
 9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
   core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
   core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
 cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
   core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
 451d4d65c15de319072982cd4a79ae2909b2d948 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
  6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
  25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
  a27fa4721f471158ef2e6faca5cce8437ae168c1 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
  c212c751743170aa890ac670b2bf071e3796af4a 
   
 core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
  13490e046d4def2d5e74548e07046e7d7c0bb7dc 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
  25009722dc3ac61585caf09f22d812612b873377 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
  1983470d2fb1657af1df429b5dedc7100ae62d2f 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
  1fa9b8f21239b81a3716080023e246f2450fbe0b 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
  72bd7eb95ac7648234e993b28b876f6c5a61ea89 
   pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
   server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
 34abb01fcd61d1fcca1a42c8670284e82cd46080 
   
 server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
  574952339c7183825c7b0560aabed3883f06cd11 
   
 server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
  b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
   
 server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
 30aa2eb3e5db385777271deda8ca09e6f01163b9 
   
 server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
  37ef5f1d00210185bc2b899a68bc32d00c85b35b 
   server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
 577f5d516e29270b71099ef0fcf309a5ec36034c 
   
 server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
  9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
   
 test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
 41acce2575de571fd48fdf5759455be3c5698e33 
   test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java 
 a35a19af152fc7d6406da3aeea429f1909a4590a 
 
 Diff: https://reviews.apache.org/r/21282/diff/
 
 
 Testing
 ---
 
 unit tests passed. full ITs passed. manual cluster tests pass.
 
 
 Thanks,
 
 Sean Busbey
 




Re: Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-12 Thread Sean Busbey


 On May 9, 2014, 9:22 p.m., Mike Drob wrote:
  core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java,
   line 180
  https://reviews.apache.org/r/21282/diff/1/?file=577504#file577504line180
 
  Need to specify UTF8?
 
 Sean Busbey wrote:
 the library always does UTF8. Or do you mean in the docs?
 
 Mike Drob wrote:
 I meant in the String constructor. We explicitly specify UTF-8 elsewhere.

There's no String constructor now. That was an error in the version I replaced.


- Sean


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/#review42605
---


On May 9, 2014, 11:39 p.m., Sean Busbey wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/21282/
 ---
 
 (Updated May 9, 2014, 11:39 p.m.)
 
 
 Review request for accumulo.
 
 
 Bugs: ACCUMULO-2791
 https://issues.apache.org/jira/browse/ACCUMULO-2791
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.
 
 * Provide a core.util Base64 class to enforce the non-chunked behavior we 
 rely on
 * Changed to use codec 1.4 'shaHex' method
 
 
 Diffs
 -
 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
  47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
  33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
  2fc606c919da120b70df8b524a6d733cd1c011c9 
   
 core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
  54730efa532453d693fa09f641eef8b5fac7b812 
   
 core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
  c219b5ada99095f523b154da1edfcc4f89249210 
   core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
 ab3ea6855124ec85b30328ecdc5d3836cce11f63 
   core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
 9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
   core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
   core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
 cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
   core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
 451d4d65c15de319072982cd4a79ae2909b2d948 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
  6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
  25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
  a27fa4721f471158ef2e6faca5cce8437ae168c1 
   
 core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
  c212c751743170aa890ac670b2bf071e3796af4a 
   
 core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
  13490e046d4def2d5e74548e07046e7d7c0bb7dc 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
  25009722dc3ac61585caf09f22d812612b873377 
   
 core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
  1983470d2fb1657af1df429b5dedc7100ae62d2f 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
  1fa9b8f21239b81a3716080023e246f2450fbe0b 
   
 examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
  72bd7eb95ac7648234e993b28b876f6c5a61ea89 
   pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
   server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
 34abb01fcd61d1fcca1a42c8670284e82cd46080 
   
 server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
  574952339c7183825c7b0560aabed3883f06cd11 
   
 server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
  b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
   
 server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
 30aa2eb3e5db385777271deda8ca09e6f01163b9 
   
 server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
  37ef5f1d00210185bc2b899a68bc32d00c85b35b 
   server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
 577f5d516e29270b71099ef0fcf309a5ec36034c 
   
 server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
  9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
   
 test/src/main/java/org/apache/accumulo/test/randomwalk/shard

Review Request 21282: ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

2014-05-11 Thread Sean Busbey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21282/
---

Review request for accumulo.


Bugs: ACCUMULO-2791
https://issues.apache.org/jira/browse/ACCUMULO-2791


Repository: accumulo


Description
---

ACCUMULO-2791 Downgrade commons-codec to match that provided by Hadoop.

* Provide a core.util Base64 class to enforce the non-chunked behavior we 
rely on
* Changed to use codec 1.4 'shaHex' method


Diffs
-

  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
 47b34e9d682518de33a2fe3a8ac2f1ab2fbeae47 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
 33ca5d2cc5915df0ed6afc88e02d5426a16a0658 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
 2fc606c919da120b70df8b524a6d733cd1c011c9 
  
core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 54730efa532453d693fa09f641eef8b5fac7b812 
  
core/src/main/java/org/apache/accumulo/core/iterators/user/IntersectingIterator.java
 c219b5ada99095f523b154da1edfcc4f89249210 
  core/src/main/java/org/apache/accumulo/core/security/Authorizations.java 
ab3ea6855124ec85b30328ecdc5d3836cce11f63 
  core/src/main/java/org/apache/accumulo/core/security/Credentials.java 
9f8b1bef2e0883f0f020506659fdf5c5fec1bfe6 
  core/src/main/java/org/apache/accumulo/core/util/Base64.java PRE-CREATION 
  core/src/main/java/org/apache/accumulo/core/util/CreateToken.java 
cc6762a35f652fa0c06dae554bfd34f2afc0ca49 
  core/src/main/java/org/apache/accumulo/core/util/Encoding.java 
451d4d65c15de319072982cd4a79ae2909b2d948 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/AddSplitsCommand.java
 6bd260c7afd0a7890abeafeb4b77b7f95fcdd36a 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/CreateTableCommand.java
 25b92bea9e70a2fe5e7287cd286e02633ed7a75c 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/GetSplitsCommand.java
 a27fa4721f471158ef2e6faca5cce8437ae168c1 
  
core/src/main/java/org/apache/accumulo/core/util/shell/commands/HiddenCommand.java
 c212c751743170aa890ac670b2bf071e3796af4a 
  
core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
 13490e046d4def2d5e74548e07046e7d7c0bb7dc 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
 25009722dc3ac61585caf09f22d812612b873377 
  
core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
 1983470d2fb1657af1df429b5dedc7100ae62d2f 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
 1fa9b8f21239b81a3716080023e246f2450fbe0b 
  
examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/BulkIngestExample.java
 72bd7eb95ac7648234e993b28b876f6c5a61ea89 
  pom.xml 96affb5e15dc60fde48dc042e03c8dec4b335474 
  server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java 
34abb01fcd61d1fcca1a42c8670284e82cd46080 
  
server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
 574952339c7183825c7b0560aabed3883f06cd11 
  
server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
 b5d7aba6de6f8ce711201cf8d1e18f551c8da056 
  server/base/src/main/java/org/apache/accumulo/server/util/DumpZookeeper.java 
30aa2eb3e5db385777271deda8ca09e6f01163b9 
  
server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java 
37ef5f1d00210185bc2b899a68bc32d00c85b35b 
  server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java 
577f5d516e29270b71099ef0fcf309a5ec36034c 
  
server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
 9f1bd1fc11437d9d0e61d4e303ad376f5fd0a1e8 
  test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java 
41acce2575de571fd48fdf5759455be3c5698e33 
  test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java 
a35a19af152fc7d6406da3aeea429f1909a4590a 

Diff: https://reviews.apache.org/r/21282/diff/


Testing
---

unit tests passed. ITs in progress. manual cluster tests in progress.


Thanks,

Sean Busbey



Re: Hadoop Summit (San Jose June 3-5)

2014-04-28 Thread Billie Rinaldi
Just announced, an Accumulo Birds of a Feather session at the Hadoop Summit:
http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179840512/

It looks like we have an hour and a half, exact schedule TBD.  Feel free to
contact me if there is any particular content you'd like to see at this
session.

Billie


On Mon, Apr 28, 2014 at 8:52 AM, Donald Miner dmi...@clearedgeit.com wrote:

 I'll be there. Is there interest in having an accumulo meetup like last
 year? Adam/Billie?


 On Mon, Apr 28, 2014 at 11:50 AM, Marc Reichman 
 mreich...@pixelforensics.com wrote:

 Will anyone be there? I wouldn't mind meeting up for a drink, talk about
 Accumulo, projects, etc.

 Looking forward to coming to my first Hadoop-based conference!

 Marc




 --

 Donald Miner
 Chief Technology Officer
 ClearEdge IT Solutions, LLC
 Cell: 443 799 7807
 www.clearedgeit.com



Re: Review Request 19749: ACCUMULO-2566 Hadoop reflection for TeraSortIngest

2014-03-31 Thread Bill Havanki

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19749/#review39059
---

Ship it!


Ship It!

- Bill Havanki


On March 27, 2014, 4:25 p.m., Mike Drob wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19749/
 ---
 
 (Updated March 27, 2014, 4:25 p.m.)
 
 
 Review request for accumulo, Sean Busbey, Eric Newton, and Josh Elser.
 
 
 Bugs: ACCUMULO-2566
 https://issues.apache.org/jira/browse/ACCUMULO-2566
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2566 Hadoop reflection for TeraSortIngest
 
 
 ACCUMULO-2566 Add more Counter-based reflection
 
 Pull reflection out of ContinuousVerify and apply it to server utils as
 well.
 
 
 Diffs
 -
 
   
 src/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TeraSortIngest.java
  0ff2c19fc5dd83744885c54e312c7ced798b1a80 
   
 src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousMoru.java
  1c384ccfd323eec761172c86ebbf970feadf0f9e 
   
 src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousVerify.java
  9441cf5314383705260d6fa85e69597525de421c 
   
 src/server/src/main/java/org/apache/accumulo/server/test/functional/RunTests.java
  7a7e7d3261c6974a8533b3f4476475d4839e7069 
   src/server/src/main/java/org/apache/accumulo/server/util/CountRowKeys.java 
 567639453a2d90be56eb26c7d808e1256a4f3cf0 
   
 src/server/src/main/java/org/apache/accumulo/server/util/reflection/CounterUtils.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/19749/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mike Drob
 




Re: Review Request 19716: ACCUMULO-2564 Backport changes to unify Hadoop 1/2

2014-03-28 Thread Mike Drob

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19716/
---

(Updated March 28, 2014, 7:23 p.m.)


Review request for accumulo, Adam Fuchs, Sean Busbey, Eric Newton, and Josh 
Elser.


Changes
---

Found another place where we needed to reflect.


Summary (updated)
-

ACCUMULO-2564 Backport changes to unify Hadoop 1/2


Bugs: ACCUMULO-2564
https://issues.apache.org/jira/browse/ACCUMULO-2564


Repository: accumulo


Description
---

ACCUMULO-2564 Replace more hadoop 1/2 incompat

Found more instances of context.getConfiguration that need to be reflected

ACCUMULO-2564 Swap out AIF to use reflection

Use reflection to get the configuration from the context for hadoop 1/2 (sketch below)

ACCUMULO-2564 Backport changes from ACCUMULO-1809

Author: Eric Newton
Reason: use reflection tricks to update counters

ACCUMULO-2564 Backport changes to unify Hadoop 1/2

This is a backport of the changes originally made for 1.5.0
under ACCUMULO-1421 for binary compatibility between Hadoop versions
1 and 2 with the same Accumulo artifacts. This commit is based on the
following original work:

* c9c0d45 (Adam Fuchs)
* d7ba6ca (Christopher Tubbs)
* 261cf36 (Eric Newton)
* cc3c2d8 (Eric Newton)
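
The getConfiguration() piece works roughly as below (a sketch only; the
helper class name is invented, not the actual backported code):

    import java.lang.reflect.Method;
    import org.apache.hadoop.conf.Configuration;

    // JobContext/TaskAttemptContext are classes in Hadoop 1 but
    // interfaces in Hadoop 2, so a direct call compiled against one can
    // fail against the other with IncompatibleClassChangeError. Looking
    // the method up at runtime keeps one binary working on both.
    public class ContextConfigSketch {
      public static Configuration getConfiguration(Object context) {
        try {
          Method m = context.getClass().getMethod("getConfiguration");
          return (Configuration) m.invoke(context);
        } catch (Exception e) {
          throw new RuntimeException("unable to reflect getConfiguration", e);
        }
      }
    }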


Diffs (updated)
-

  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
 7cfab8b6e8f2199620c36a50ac067dee53aab6a9 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java
 c9a70eb24ffb21d845e0915d0dc7cebe8035a37e 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
 ed0aebf7476d8db6a968e858c9c4c892dac78bc5 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
 a11096c3f855104db4e3b78a49f3fb039ac268ec 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 ae537f9cd919a3821e40cc0a99bf31815b5bb235 
  src/server/src/main/java/org/apache/accumulo/server/Accumulo.java 
184692c48ce8013067053c1d0f0dd6a7a889473a 
  src/server/src/main/java/org/apache/accumulo/server/master/LogSort.java 
1c384a3779179927d56546138d2cb6d4023c941b 
  
src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousMoru.java
 1c384ccfd323eec761172c86ebbf970feadf0f9e 
  
src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousVerify.java
 9441cf5314383705260d6fa85e69597525de421c 

Diff: https://reviews.apache.org/r/19716/diff/


Testing
---


Thanks,

Mike Drob



Re: Review Request 19716: ACCUMULO-2564 Backport changes to unify Hadoop 1/2

2014-03-28 Thread Mike Drob

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19716/
---

(Updated March 28, 2014, 8:33 p.m.)


Review request for accumulo, Adam Fuchs, Sean Busbey, Eric Newton, and Josh 
Elser.


Changes
---

More issues. Some of the functional tests use ContextFactory, so its usage 
needed to be updated as well.


Bugs: ACCUMULO-2564
https://issues.apache.org/jira/browse/ACCUMULO-2564


Repository: accumulo


Description
---

ACCUMULO-2564 Replace more hadoop 1/2 incompat

Found more instances of context.getConfiguration that need to be reflected

ACCUMULO-2564 Swap out AIF to use reflection

Use reflection to get the configuration from the context for hadoop 1/2

ACCUMULO-2564 Backport changes from ACCUMULO-1809

Author: Eric Newton
Reason: use reflection tricks to update counters (sketch below)

ACCUMULO-2564 Backport changes to unify Hadoop 1/2

This is a backport of the changes originally made for 1.5.0
under ACCUMULO-1421 for binary compatibility between Hadoop versions
1 and 2 with the same Accumulo artifacts. This commit is based on the
following original work:

* c9c0d45 (Adam Fuchs)
* d7ba6ca (Christopher Tubbs)
* 261cf36 (Eric Newton)
* cc3c2d8 (Eric Newton)
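
The counter half of the same trick, again only as a hedged sketch (the class
name is invented for illustration):

    import java.lang.reflect.Method;

    // Counter is a class in Hadoop 1 but an interface in Hadoop 2, so
    // increment(long) is invoked reflectively rather than directly.
    public class CounterIncrementSketch {
      public static void increment(Object counter, long amount) {
        try {
          Method m = counter.getClass().getMethod("increment", long.class);
          m.setAccessible(true); // the runtime class may not be public
          m.invoke(counter, amount);
        } catch (Exception e) {
          throw new RuntimeException("unable to reflect Counter.increment", e);
        }
      }
    }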


Diffs (updated)
-

  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
 7cfab8b6e8f2199620c36a50ac067dee53aab6a9 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java
 c9a70eb24ffb21d845e0915d0dc7cebe8035a37e 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
 ed0aebf7476d8db6a968e858c9c4c892dac78bc5 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
 a11096c3f855104db4e3b78a49f3fb039ac268ec 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 ae537f9cd919a3821e40cc0a99bf31815b5bb235 
  src/core/src/main/java/org/apache/accumulo/core/util/ContextFactory.java 
5a1c2ef6184aeb56408b3cab451bc2477377264f 
  src/server/src/main/java/org/apache/accumulo/server/Accumulo.java 
184692c48ce8013067053c1d0f0dd6a7a889473a 
  src/server/src/main/java/org/apache/accumulo/server/master/LogSort.java 
1c384a3779179927d56546138d2cb6d4023c941b 
  
src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousMoru.java
 1c384ccfd323eec761172c86ebbf970feadf0f9e 
  
src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousVerify.java
 9441cf5314383705260d6fa85e69597525de421c 

Diff: https://reviews.apache.org/r/19716/diff/


Testing
---


Thanks,

Mike Drob



Re: Review Request 19716: ACCUMULO-2564 Backport changes to unify Hadoop 1/2

2014-03-28 Thread Mike Drob

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19716/
---

(Updated March 28, 2014, 8:34 p.m.)


Review request for accumulo, Adam Fuchs, Sean Busbey, Eric Newton, and Josh 
Elser.


Bugs: ACCUMULO-2564
https://issues.apache.org/jira/browse/ACCUMULO-2564


Repository: accumulo


Description
---

ACCUMULO-2564 Replace more hadoop 1/2 incompat

Found more instances of context.getConfiguration that need to be reflected

ACCUMULO-2564 Swap out AIF to use reflection

Use reflection to get the configuration from the context for hadoop 1/2

ACCUMULO-2564 Backport changes from ACCUMULO-1809

Author: Eric Newton
Reason: use reflection tricks to update counters

ACCUMULO-2564 Backport changes to unify Hadoop 1/2

This is a backport of the changes originally made for 1.5.0
under ACCUMULO-1421 for binary compatibility between Hadoop versions
1 and 2 with the same Accumulo artifacts. This commit is based on the
following original work:

* c9c0d45 (Adam Fuchs)
* d7ba6ca (Christopher Tubbs)
* 261cf36 (Eric Newton)
* cc3c2d8 (Eric Newton)


Diffs
-

  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
 7cfab8b6e8f2199620c36a50ac067dee53aab6a9 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java
 c9a70eb24ffb21d845e0915d0dc7cebe8035a37e 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
 ed0aebf7476d8db6a968e858c9c4c892dac78bc5 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
 a11096c3f855104db4e3b78a49f3fb039ac268ec 
  
src/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
 ae537f9cd919a3821e40cc0a99bf31815b5bb235 
  src/core/src/main/java/org/apache/accumulo/core/util/ContextFactory.java 
5a1c2ef6184aeb56408b3cab451bc2477377264f 
  src/server/src/main/java/org/apache/accumulo/server/Accumulo.java 
184692c48ce8013067053c1d0f0dd6a7a889473a 
  src/server/src/main/java/org/apache/accumulo/server/master/LogSort.java 
1c384a3779179927d56546138d2cb6d4023c941b 
  
src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousMoru.java
 1c384ccfd323eec761172c86ebbf970feadf0f9e 
  
src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousVerify.java
 9441cf5314383705260d6fa85e69597525de421c 

Diff: https://reviews.apache.org/r/19716/diff/


Testing (updated)
---

Functional tests pass when running a 1.4.5 default profile build against 
cdh4.5.0 (hadoop 2).


Thanks,

Mike Drob



Re: Review Request 19749: ACCUMULO-2566 Hadoop reflection for TeraSortIngest

2014-03-28 Thread Sean Busbey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19749/#review38944
---

Ship it!


Ship It!

- Sean Busbey


On March 27, 2014, 8:25 p.m., Mike Drob wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19749/
 ---
 
 (Updated March 27, 2014, 8:25 p.m.)
 
 
 Review request for accumulo, Sean Busbey, Eric Newton, and Josh Elser.
 
 
 Bugs: ACCUMULO-2566
 https://issues.apache.org/jira/browse/ACCUMULO-2566
 
 
 Repository: accumulo
 
 
 Description
 ---
 
 ACCUMULO-2566 Hadoop reflection for TeraSortIngest
 
 
 ACCUMULO-2566 Add more Counter-based reflection
 
 Pull reflection out of ContinuousVerify and apply it to server utils as
 well.
 
 
 Diffs
 -
 
   
 src/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TeraSortIngest.java
  0ff2c19fc5dd83744885c54e312c7ced798b1a80 
   
 src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousMoru.java
  1c384ccfd323eec761172c86ebbf970feadf0f9e 
   
 src/server/src/main/java/org/apache/accumulo/server/test/continuous/ContinuousVerify.java
  9441cf5314383705260d6fa85e69597525de421c 
   
 src/server/src/main/java/org/apache/accumulo/server/test/functional/RunTests.java
  7a7e7d3261c6974a8533b3f4476475d4839e7069 
   src/server/src/main/java/org/apache/accumulo/server/util/CountRowKeys.java 
 567639453a2d90be56eb26c7d808e1256a4f3cf0 
   
 src/server/src/main/java/org/apache/accumulo/server/util/reflection/CounterUtils.java
  PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/19749/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mike Drob
 



