[jira] [Created] (HBASE-26032) Make HRegion.getStores() an O(1) operation
Wei-Chiu Chuang created HBASE-26032: --- Summary: Make HRegion.getStores() an O(1) operation Key: HBASE-26032 URL: https://issues.apache.org/jira/browse/HBASE-26032 Project: HBase Issue Type: Improvement Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang Attachments: Screen Shot 2021-06-24 at 3.56.33 PM.png This is a relatively minor issue, but I did spot HRegion.getStores() popping up in my profiler. Checking the code, I realized that HRegion.getStores() allocates a new ArrayList, converting the Collection<> to a List<>. This makes it O(n) in both space and time. The conversion appears mostly unnecessary: production code only iterates the stores, so the new ArrayList object is thrown away immediately. Only a handful of tests index into the stores. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-26032) Make HRegion.getStores() an O(1) operation
[ https://issues.apache.org/jira/browse/HBASE-26032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-26032: Description: This is a relatively minor issue, but I did spot HRegion.getStores() popping up in my profiler. Checking the code, I realized that HRegion.getStores() allocates a new ArrayList, converting the Collection<> to a List<>. This makes it O(n) in both space and time. The conversion appears mostly unnecessary: production code only iterates the stores, so the new ArrayList object is thrown away immediately. Only a handful of tests index into the stores. I suggest we return the stores object directly, an O(1) operation. was: This is a relatively minor issue, but I did spot HRegion.getStores() popping up in my profiler. Checking the code, I realized that HRegion.getStores() allocates a new ArrayList, converting the Collection<> to a List<>. This makes it O(n) in both space and time. The conversion appears mostly unnecessary: production code only iterates the stores, so the new ArrayList object is thrown away immediately. Only a handful of tests index into the stores. > Make HRegion.getStores() an O(1) operation > -- > > Key: HBASE-26032 > URL: https://issues.apache.org/jira/browse/HBASE-26032 > Project: HBase > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Attachments: Screen Shot 2021-06-24 at 3.56.33 PM.png > > > This is a relatively minor issue, but I did spot HRegion.getStores() popping > up in my profiler. > Checking the code, I realized that HRegion.getStores() allocates a new array > list in it, converting the Collection<> to List<>. But it also makes it an O( > n ) in space and time complexity. 
> This conversion appears mostly unnecessary, because we only iterate the > stores in production code, and so the new ArrayList object is thrown away > immediately. Only in a number of test code where we index into the stores. > I suggest we should return the stores object directly, an O( 1 ) operation.
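The change described in this issue can be sketched as follows. This is a minimal illustration only, not the actual HRegion code — the class, field, and method names below are hypothetical stand-ins.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for HRegion; not the real implementation.
class RegionSketch {
  private final Map<String, Object> stores = new ConcurrentHashMap<>();

  // Current behavior described in the issue: copies every store into a
  // fresh ArrayList, so each call is O(n) in time and space.
  List<Object> getStoresCopy() {
    return new ArrayList<>(stores.values());
  }

  // Proposed behavior: return the backing collection directly, O(1).
  // Iteration-only callers (the production code paths) are unaffected;
  // the few tests that need positional access must copy explicitly.
  Collection<Object> getStores() {
    return stores.values();
  }
}
```

One trade-off worth noting: the returned view is live, so callers now observe concurrent modifications to the store map, which is a behavioral difference to weigh against the allocation savings.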
[jira] [Assigned] (HBASE-23817) The message "Please make sure that backup is enabled on the cluster." is shown even when the backup feature is enabled
[ https://issues.apache.org/jira/browse/HBASE-23817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-23817: --- Assignee: Wei-Chiu Chuang > The message "Please make sure that backup is enabled on the cluster." is > shown even when the backup feature is enabled > -- > > Key: HBASE-23817 > URL: https://issues.apache.org/jira/browse/HBASE-23817 > Project: HBase > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Wei-Chiu Chuang >Priority: Minor > > The following message is shown even when the backup feature is enabled, which > is confusing: > {code} > Please make sure that backup is enabled on the cluster. To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager > hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager > hbase.coprocessor.region.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.BackupObserver > and restart the cluster > {code}
[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures
[ https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17427411#comment-17427411 ] Wei-Chiu Chuang commented on HBASE-26047: - Thanks. I got sidetracked by other projects. It would be great to understand the failure in TestHeapSize. The heap size estimate is quite involved and I am not confident I can address them. IIRC TestThreadLocalPoolMap is similar. TestSecureExportSnapshot, TestMobSecureExportSnapshot, TestVerifyReplicationCrossDiffHdfs --> they all failed with some error inside DistCp/MapReduce. To troubleshoot them we need to enable logging for HDFS/YARN in the UT. > [JDK17] Track JDK17 unit test failures > -- > > Key: HBASE-26047 > URL: https://issues.apache.org/jira/browse/HBASE-26047 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Priority: Major > > As of now, there are still two failed unit tests after exporting JDK internal > modules and the modifier access hack. > {noformat} > [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 > s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize > [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes Time elapsed: > 0.041 s <<< FAILURE! > java.lang.AssertionError: expected:<160> but was:<152> > at > org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335) > [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes Time > elapsed: 0.01 s <<< FAILURE! > java.lang.AssertionError: expected:<72> but was:<64> > at > org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134) > [INFO] Running org.apache.hadoop.hbase.io.Tes > [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 > s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain > [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy Time > elapsed: 0.537 s <<< ERROR! 
> java.lang.NullPointerException: Cannot enter synchronized block because > "this.closeLock" is null > at > org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119) > {noformat} > It appears that JDK17 makes the heap size estimate different than before. Not > sure why. > TestBufferChain.testWithSpy failure might be because of yet another > unexported module.
[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures
[ https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17429981#comment-17429981 ] Wei-Chiu Chuang commented on HBASE-26047: - Mind sharing more details? HBASE-25516 is supposed to fix the modifiers field exception. > [JDK17] Track JDK17 unit test failures > -- > > Key: HBASE-26047 > URL: https://issues.apache.org/jira/browse/HBASE-26047 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Priority: Major > > As of now, there are still two failed unit tests after exporting JDK internal > modules and the modifier access hack. > {noformat} > [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 > s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize > [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes Time elapsed: > 0.041 s <<< FAILURE! > java.lang.AssertionError: expected:<160> but was:<152> > at > org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335) > [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes Time > elapsed: 0.01 s <<< FAILURE! > java.lang.AssertionError: expected:<72> but was:<64> > at > org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134) > [INFO] Running org.apache.hadoop.hbase.io.Tes > [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 > s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain > [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy Time > elapsed: 0.537 s <<< ERROR! > java.lang.NullPointerException: Cannot enter synchronized block because > "this.closeLock" is null > at > org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119) > {noformat} > It appears that JDK17 makes the heap size estimate different than before. Not > sure why. > TestBufferChain.testWithSpy failure might be because of yet another > unexported module. 
[jira] [Commented] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17085247#comment-17085247 ] Wei-Chiu Chuang commented on HBASE-23833: - Opened PR #1534 and #1535 to get this into the respective branches. The cherry-picks are both clean. I intend to get HBase 2.2 to support Hadoop 3.3.0. If not, I'd be equally happy to see HBase 2.3 or 2.4 support Hadoop 3.3.0. > The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > Hadoop trunk (3.3.0) shaded protobuf and moved it to hadoop-thirdparty. As > the result, hbase asyncwal fails to compile because asyncwal uses the > Hadoop's protobuf objects. > The following command > {code} > mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT > {code} > fails with the following error: > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile > (default-compile) on project hbase-server: Compilation failure: Compilation > failure: > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[361,44] > cannot access org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[362,14] > cannot access org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 not found > [ERROR] > 
/Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[366,16] > cannot access org.apache.hadoop.thirdparty.protobuf.ByteString > [ERROR] class file for org.apache.hadoop.thirdparty.protobuf.ByteString not > found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[375,12] > cannot find symbol > [ERROR] symbol: method > writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[702,81] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[314,66] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[330,81] > cannot access org.apache.hadoop.thirdparty.protobuf.ProtocolMessageEnum > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.ProtocolMessageEnum not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[380,10] > cannot find symbol > [ERROR] symbol: method > writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of 
type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[422,77] > cannot access org.apache.hadoop.thirdparty.protobuf.Descriptors > [ERROR] class file for org.apache.hadoop.thirdparty.protobuf.Descriptors > not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java:[323,64] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvide
[jira] [Commented] (HBASE-24209) Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality
[ https://issues.apache.org/jira/browse/HBASE-24209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17086105#comment-17086105 ] Wei-Chiu Chuang commented on HBASE-24209: - Yeah, I'll give it a try. In fact I have been testing Hadoop 3.3 for the past few days. The master branch compiles; I haven't tried branch-2.3 yet but intend to. Unit tests mostly pass -- barring a few tests that break even on Hadoop 3.1.2 (I will file jiras for them once I confirm further). > Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality > - > > Key: HBASE-24209 > URL: https://issues.apache.org/jira/browse/HBASE-24209 > Project: HBase > Issue Type: Task > Components: build >Reporter: Nick Dimiduk >Priority: Major > > Since HBASE-23833 we're paying attention to our builds on Hadoop trunk, > currently 3.3.0-SNAPSHOT. Let's add this version to the version lists in > hadoopcheck so our CI will let us know when things break, at least > compile-time anyway.
[jira] [Commented] (HBASE-24209) Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality
[ https://issues.apache.org/jira/browse/HBASE-24209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17086106#comment-17086106 ] Wei-Chiu Chuang commented on HBASE-24209: - All failed tests are MediumTests. Do we run MediumTests on a regular basis (nightly or precommit)? > Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality > - > > Key: HBASE-24209 > URL: https://issues.apache.org/jira/browse/HBASE-24209 > Project: HBase > Issue Type: Task > Components: build >Reporter: Nick Dimiduk >Priority: Major > > Since HBASE-23833 we're paying attention to our builds on Hadoop trunk, > currently 3.3.0-SNAPSHOT. Let's add this version to the version lists in > hadoopcheck so our CI will let us know when things break, at least > compile-time anyway.
[jira] [Created] (HBASE-24225) Backport HBASE-23833 to branch-2.2
Wei-Chiu Chuang created HBASE-24225: --- Summary: Backport HBASE-23833 to branch-2.2 Key: HBASE-24225 URL: https://issues.apache.org/jira/browse/HBASE-24225 Project: HBase Issue Type: Sub-task Reporter: Wei-Chiu Chuang
[jira] [Created] (HBASE-24236) Backport HBASE-22103 to branch-2.2
Wei-Chiu Chuang created HBASE-24236: --- Summary: Backport HBASE-22103 to branch-2.2 Key: HBASE-24236 URL: https://issues.apache.org/jira/browse/HBASE-24236 Project: HBase Issue Type: Sub-task Components: hadoop3, wal Reporter: Wei-Chiu Chuang
[jira] [Assigned] (HBASE-24225) Backport HBASE-23833 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-24225: --- Assignee: Wei-Chiu Chuang > Backport HBASE-23833 to branch-2.2 > -- > > Key: HBASE-24225 > URL: https://issues.apache.org/jira/browse/HBASE-24225 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major >
[jira] [Assigned] (HBASE-24236) Backport HBASE-22103 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-24236: --- Assignee: Wei-Chiu Chuang > Backport HBASE-22103 to branch-2.2 > -- > > Key: HBASE-24236 > URL: https://issues.apache.org/jira/browse/HBASE-24236 > Project: HBase > Issue Type: Sub-task > Components: hadoop3, wal >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major >
[jira] [Created] (HBASE-24237) Backport HBASE-23998 to branch-2.2
Wei-Chiu Chuang created HBASE-24237: --- Summary: Backport HBASE-23998 to branch-2.2 Key: HBASE-24237 URL: https://issues.apache.org/jira/browse/HBASE-24237 Project: HBase Issue Type: Sub-task Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang
[jira] [Created] (HBASE-24238) Remove unused $hadoop.guava.version property from pom file
Wei-Chiu Chuang created HBASE-24238: --- Summary: Remove unused $hadoop.guava.version property from pom file Key: HBASE-24238 URL: https://issues.apache.org/jira/browse/HBASE-24238 Project: HBase Issue Type: Task Reporter: Wei-Chiu Chuang HBASE-24170 removed the hadoop-2.0 profile, so hadoop.guava.version is no longer used.
[jira] [Commented] (HBASE-24238) Remove unused $hadoop.guava.version property from pom file
[ https://issues.apache.org/jira/browse/HBASE-24238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17090147#comment-17090147 ] Wei-Chiu Chuang commented on HBASE-24238: - Similarly, netty.hadoop.version should simply be set to 3.10.5.Final. There's no need to redefine its value in the hadoop-3.0 profile. > Remove unused $hadoop.guava.version property from pom file > -- > > Key: HBASE-24238 > URL: https://issues.apache.org/jira/browse/HBASE-24238 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Priority: Trivial > > HBASE-24170 removed hadoop-2.0 profile and hadoop.guava.version is therefore > not used afterwards.
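The cleanup suggested in the comment above would look roughly like the following root-pom fragment. This is a sketch only: the surrounding pom structure is elided, and the version value is taken from the comment.

```xml
<!-- Root pom.xml <properties>: define the property once at the top level... -->
<properties>
  <netty.hadoop.version>3.10.5.Final</netty.hadoop.version>
</properties>
<!-- ...and delete the redundant <netty.hadoop.version> override from the
     hadoop-3.0 profile, since both places would now pin the same version. -->
```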
[jira] [Created] (HBASE-24240) TestDelegationTokenWithEncryption always fails
Wei-Chiu Chuang created HBASE-24240: --- Summary: TestDelegationTokenWithEncryption always fails Key: HBASE-24240 URL: https://issues.apache.org/jira/browse/HBASE-24240 Project: HBase Issue Type: Bug Components: security, test Affects Versions: 3.0.0 Reporter: Wei-Chiu Chuang TestDelegationTokenWithEncryption and TestGenerateDelegationToken _always_ fail. Incidentally, they don't fail in branch-2.3 or branch-2.2. I suspect there's a regression with delegation token code, because if I comment out the following code in the test, they pass: {code:java} try (Connection conn = ConnectionFactory.createConnection(TEST_UTIL.getConfiguration())) { Token token = TokenUtil.obtainToken(conn); UserGroupInformation.getCurrentUser().addToken(token); } {code} Effectively, this uses a Kerberos login instead of a delegation token. The tests fail all the time (100%) in the last 29 runs: [https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/master/lastSuccessfulBuild/artifact/dashboard.html] Initially I thought this was caused by pluggable authentication (HBASE-23347), but the tests don't fail in branch-2.3 so that looks unlikely.
[jira] [Updated] (HBASE-24240) TestDelegationTokenWithEncryption always fails
[ https://issues.apache.org/jira/browse/HBASE-24240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24240: Priority: Blocker (was: Major) > TestDelegationTokenWithEncryption always fails > -- > > Key: HBASE-24240 > URL: https://issues.apache.org/jira/browse/HBASE-24240 > Project: HBase > Issue Type: Bug > Components: security, test >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Priority: Blocker > > TestDelegationTokenWithEncryption and TestGenerateDelegationToken _always_ > fail. > > Incidentally, they don't fail in branch-2.3 and branch-2.2. > > I suspect there's a regression with delegation token code, because if I > comment out the following code in the test, they pass: > > {code:java} > try (Connection conn = > ConnectionFactory.createConnection(TEST_UTIL.getConfiguration())) { > Token token = TokenUtil.obtainToken(conn); > UserGroupInformation.getCurrentUser().addToken(token); > } > {code} > Effectively, use Kerberos to login instead of delegation token. > The tests fail all the time (100%) in the last 29 runs: > > [https://builds.apache.org/job/HBase-Find-Flaky-Tests/job/master/lastSuccessfulBuild/artifact/dashboard.html] > Initially I thought this was caused by pluggable authentication > (HBASE-23347), but the tests don't fail in branch-2.3 so looks unlikely.
[jira] [Assigned] (HBASE-24238) Remove unused $hadoop.guava.version property from pom file
[ https://issues.apache.org/jira/browse/HBASE-24238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-24238: --- Assignee: Wei-Chiu Chuang > Remove unused $hadoop.guava.version property from pom file > -- > > Key: HBASE-24238 > URL: https://issues.apache.org/jira/browse/HBASE-24238 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > > HBASE-24170 removed hadoop-2.0 profile and hadoop.guava.version is therefore > not used afterwards.
[jira] [Updated] (HBASE-24238) Clean up root pom after removing hadoop-2.0 profile
[ https://issues.apache.org/jira/browse/HBASE-24238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24238: Summary: Clean up root pom after removing hadoop-2.0 profile (was: Remove unused $hadoop.guava.version property from pom file) > Clean up root pom after removing hadoop-2.0 profile > --- > > Key: HBASE-24238 > URL: https://issues.apache.org/jira/browse/HBASE-24238 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > > HBASE-24170 removed hadoop-2.0 profile and hadoop.guava.version is therefore > not used afterwards.
[jira] [Updated] (HBASE-24238) Clean up root pom after removing hadoop-2.0 profile
[ https://issues.apache.org/jira/browse/HBASE-24238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24238: Fix Version/s: 3.0.0 Affects Version/s: 3.0.0 > Clean up root pom after removing hadoop-2.0 profile > --- > > Key: HBASE-24238 > URL: https://issues.apache.org/jira/browse/HBASE-24238 > Project: HBase > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Fix For: 3.0.0 > > > HBASE-24170 removed hadoop-2.0 profile and hadoop.guava.version is therefore > not used afterwards.
[jira] [Created] (HBASE-24258) [Hadoop3.3] Update license for org.ow2.asm:*
Wei-Chiu Chuang created HBASE-24258: --- Summary: [Hadoop3.3] Update license for org.ow2.asm:* Key: HBASE-24258 URL: https://issues.apache.org/jira/browse/HBASE-24258 Project: HBase Issue Type: Task Components: dependencies Reporter: Wei-Chiu Chuang Hadoop 3.3 brings a few Jetty dependencies which transitively bring in org.ow2.asm:asm-analysis, org.ow2.asm:asm-commons, and org.ow2.asm:asm-tree. When testing with the latest Jetty (9.4.26.v20200117) I found its org.ow2.asm:* updated from 7.1 to 7.2, which changed the declared license from "BSD" to "BSD-3-Clause License" (the actual license text did not change). HBase's license checker doesn't accept it. Filing this jira to update it to "BSD 3-Clause License" so that HBase can build. {noformat} [INFO] | | | +- org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.4.26.v20200117:test [INFO] | | | | +- org.eclipse.jetty:jetty-annotations:jar:9.4.26.v20200117:test [INFO] | | | | | +- org.eclipse.jetty:jetty-plus:jar:9.4.26.v20200117:test [INFO] | | | | | | \- org.eclipse.jetty:jetty-jndi:jar:9.4.26.v20200117:test [INFO] | | | | | \- org.ow2.asm:asm-commons:jar:7.2:test [INFO] | | | | | +- org.ow2.asm:asm-tree:jar:7.2:test [INFO] | | | | | \- org.ow2.asm:asm-analysis:jar:7.2:test {noformat} {noformat} This product includes asm-analysis licensed under the BSD-3-Clause. ERROR: Please check this License for acceptability here: https://www.apache.org/legal/resolved If it is okay, then update the list named 'non_aggregate_fine' in the LICENSE.vm file. If it isn't okay, then revert the change that added the dependency. More info on the dependency: org.ow2.asm asm-analysis 7.2 {noformat}
[jira] [Assigned] (HBASE-24258) [Hadoop3.3] Update license for org.ow2.asm:*
[ https://issues.apache.org/jira/browse/HBASE-24258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-24258: --- Assignee: Wei-Chiu Chuang > [Hadoop3.3] Update license for org.ow2.asm:* > > > Key: HBASE-24258 > URL: https://issues.apache.org/jira/browse/HBASE-24258 > Project: HBase > Issue Type: Task > Components: dependencies >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > > Hadoop 3.3 brings a few Jetty dependencies which transitively brings in > org.ow2.asm:asm-analysis, org.ow2.asm:asm-commons, org.ow2.asm:asm-tree. > When testing with the latest Jetty (9.4.26.v20200117) I found its > org.ow2.asm:* updated from 7.1 to 7.2, which changed the declared license > from "BSD" to "BSD-3-Clause License" (The actual license text did not > change). The HBase's license checker doesn't accept it. > File the jira to update it to "BSD 3-Clause License" so that HBase can build. > {noformat} > [INFO] | | | +- > org.eclipse.jetty.websocket:javax-websocket-server-impl:jar:9.4.26.v20200117:test > [INFO] | | | | +- > org.eclipse.jetty:jetty-annotations:jar:9.4.26.v20200117:test > [INFO] | | | | | +- > org.eclipse.jetty:jetty-plus:jar:9.4.26.v20200117:test > [INFO] | | | | | | \- > org.eclipse.jetty:jetty-jndi:jar:9.4.26.v20200117:test > [INFO] | | | | | \- org.ow2.asm:asm-commons:jar:7.2:test > [INFO] | | | | | +- org.ow2.asm:asm-tree:jar:7.2:test > [INFO] | | | | | \- org.ow2.asm:asm-analysis:jar:7.2:test > {noformat} > {noformat} > This product includes asm-analysis licensed under the BSD-3-Clause. > ERROR: Please check this License for acceptability here: > https://www.apache.org/legal/resolved > If it is okay, then update the list named 'non_aggregate_fine' in the > LICENSE.vm file. > If it isn't okay, then revert the change that added the dependency. > More info on the dependency: > org.ow2.asm > asm-analysis > 7.2 > {noformat}
[jira] [Updated] (HBASE-24209) [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality
[ https://issues.apache.org/jira/browse/HBASE-24209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24209: Summary: [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality (was: Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality) > [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality > - > > Key: HBASE-24209 > URL: https://issues.apache.org/jira/browse/HBASE-24209 > Project: HBase > Issue Type: Task > Components: build >Reporter: Nick Dimiduk >Priority: Major > > Since HBASE-23833 we're paying attention to our builds on Hadoop trunk, > currently 3.3.0-SNAPSHOT. Let's add this version to the version lists in > hadoopcheck so our CI will let us know when things break, at least > compile-time anyway.
[jira] [Work started] (HBASE-24209) [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality
[ https://issues.apache.org/jira/browse/HBASE-24209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-24209 started by Wei-Chiu Chuang. --- > [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality > - > > Key: HBASE-24209 > URL: https://issues.apache.org/jira/browse/HBASE-24209 > Project: HBase > Issue Type: Task > Components: build >Reporter: Nick Dimiduk >Assignee: Wei-Chiu Chuang >Priority: Major > > Since HBASE-23833 we're paying attention to our builds on Hadoop trunk, > currently 3.3.0-SNAPSHOT. Let's add this version to the version lists in > hadoopcheck so our CI will let us know when things break, at least > compile-time anyway.
[jira] [Assigned] (HBASE-24209) [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality
[ https://issues.apache.org/jira/browse/HBASE-24209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-24209: --- Assignee: Wei-Chiu Chuang > [Hadoop3.3] Add Hadoop-3.3.0-SNAPSHOT to hadoopcheck in our yetus personality > - > > Key: HBASE-24209 > URL: https://issues.apache.org/jira/browse/HBASE-24209 > Project: HBase > Issue Type: Task > Components: build >Reporter: Nick Dimiduk >Assignee: Wei-Chiu Chuang >Priority: Major > > Since HBASE-23833 we're paying attention to our builds on Hadoop trunk, > currently 3.3.0-SNAPSHOT. Let's add this version to the version lists in > hadoopcheck so our CI will let us know when things break, at least > compile-time anyway.
[jira] [Commented] (HBASE-24261) Redo all of our github notification integrations on new ASF infra feature
[ https://issues.apache.org/jira/browse/HBASE-24261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17093878#comment-17093878 ] Wei-Chiu Chuang commented on HBASE-24261: - I think we have this problem in Hadoop as well. > Redo all of our github notification integrations on new ASF infra feature > - > > Key: HBASE-24261 > URL: https://issues.apache.org/jira/browse/HBASE-24261 > Project: HBase > Issue Type: Task > Components: community >Reporter: Sean Busbey >Priority: Major > > The new [ASF Infra feature for customizing how project gets notifications > from > github|https://cwiki.apache.org/confluence/display/INFRA/.asf.yaml+features+for+git+repositories#id-.asf.yamlfeaturesforgitrepositories-Notificationsettingsforrepositories] > appears to have silently thrown away all the integration we already had set > up. > I don't know that full set of things we need. We presumably need to do this > for all of our repos. > * make sure all notifications on PRs is going to issues@ > * make sure we get links on JIRA for related PRs > * make sure we do not get updates on JIRA for every PR comment
[jira] [Commented] (HBASE-24277) TestZooKeeper is flaky
[ https://issues.apache.org/jira/browse/HBASE-24277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094656#comment-17094656 ] Wei-Chiu Chuang commented on HBASE-24277: - branch-2 is much more stable. There are a lot of test failures (some with a 100% failure rate) in master. > TestZooKeeper is flaky > -- > > Key: HBASE-24277 > URL: https://issues.apache.org/jira/browse/HBASE-24277 > Project: HBase > Issue Type: Bug > Components: test, Zookeeper >Reporter: Duo Zhang >Priority: Major > > After checking the code, the problem is that, when creating a table during > master shutdown, it is easy to hit MasterStoppedException or other strange > exceptions which make the creation fail. > In general I think this should be a test issue; we need to learn why we do not > have the problem on branch-2.x. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23834) HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jetty version mismatch
[ https://issues.apache.org/jira/browse/HBASE-23834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17095833#comment-17095833 ] Wei-Chiu Chuang commented on HBASE-23834: - Some updates here: It looks like shading Jetty is not enough. Our internal tests found HBase must use SslContextFactory.Server instead of SslContextFactory in Jetty 9.4. A similar change is also seen in Hadoop's Jetty 9.4 update patch: HADOOP-16152. Hadoop 3.1.4 is going to be released soon and will contain the Jetty 9.4 change. Maybe we should move to Hadoop 3.1.4 in the HBase master branch, and drop Jetty 9.3 entirely. > HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jetty version mismatch > > > Key: HBASE-23834 > URL: https://issues.apache.org/jira/browse/HBASE-23834 > Project: HBase > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > HBase master branch is currently on Jetty 9.3, and the latest Hadoop 3 > (unreleased branches trunk, branch-3.2 and branch-3.1) bumped Jetty to 9.4 to > address the vulnerability CVE-2017-9735. > (1) Jetty 9.3 and 9.4 are quite different (there are incompatible API > changes) and HBase won't start on the latest Hadoop 3. > (2) In any case, HBase should update its Jetty dependency to address the > vulnerability. > Fortunately for HBase, updating to Jetty 9.4 requires no code change other > than the Maven version string. > More tests are needed to verify if HBase can run on older Hadoop versions if > its Jetty is updated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
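The SslContextFactory issue above stems from Jetty 9.4 splitting the 9.3-era SslContextFactory into nested Server and Client subclasses; server-side code that still constructs the base class misbehaves on 9.4. A small stdlib-only probe (an illustrative sketch, not HBase code) can detect which API generation is on the classpath without a compile-time Jetty dependency:

```java
// Illustrative sketch: detect whether the Jetty on the classpath is 9.4+,
// i.e. whether the nested SslContextFactory.Server class exists. Avoids a
// compile-time Jetty dependency by using Class.forName.
public class JettySslApiProbe {

    static boolean hasJetty94ServerFactory() {
        try {
            // Binary name of the nested class as shipped in Jetty 9.4's jetty-util.
            Class.forName("org.eclipse.jetty.util.ssl.SslContextFactory$Server");
            return true;
        } catch (ClassNotFoundException e) {
            // Jetty 9.3 (or no Jetty at all) on the classpath.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Jetty 9.4 SslContextFactory.Server present: "
            + hasJetty94ServerFactory());
    }
}
```

On a classpath without Jetty the probe simply reports false, so it is safe to call unconditionally during startup diagnostics.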
[jira] [Work started] (HBASE-24237) Backport HBASE-23998 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-24237 started by Wei-Chiu Chuang. --- > Backport HBASE-23998 to branch-2.2 > -- > > Key: HBASE-24237 > URL: https://issues.apache.org/jira/browse/HBASE-24237 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24237) Backport HBASE-23998 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HBASE-24237. - Fix Version/s: 2.2.5 Resolution: Fixed This was merged by https://github.com/apache/hbase/pull/1568 > Backport HBASE-23998 to branch-2.2 > -- > > Key: HBASE-24237 > URL: https://issues.apache.org/jira/browse/HBASE-24237 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.2.5 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24236) Backport HBASE-22103 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HBASE-24236. - Fix Version/s: 2.2.5 Resolution: Fixed This was merged by https://github.com/apache/hbase/pull/1566 > Backport HBASE-22103 to branch-2.2 > -- > > Key: HBASE-24236 > URL: https://issues.apache.org/jira/browse/HBASE-24236 > Project: HBase > Issue Type: Sub-task > Components: hadoop3, wal >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.2.5 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24225) Backport HBASE-23833 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HBASE-24225. - Fix Version/s: 2.2.5 Resolution: Fixed Merged in https://github.com/apache/hbase/pull/1567 > Backport HBASE-23833 to branch-2.2 > -- > > Key: HBASE-24225 > URL: https://issues.apache.org/jira/browse/HBASE-24225 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.2.5 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HBASE-24225) Backport HBASE-23833 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-24225 started by Wei-Chiu Chuang. --- > Backport HBASE-23833 to branch-2.2 > -- > > Key: HBASE-24225 > URL: https://issues.apache.org/jira/browse/HBASE-24225 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24277) TestZooKeeper is flaky
[ https://issues.apache.org/jira/browse/HBASE-24277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096623#comment-17096623 ] Wei-Chiu Chuang commented on HBASE-24277: - Hmm not sure what was changed but a few days ago I checked the flaky test dashboard and TestDelegationTokenWithEncryption and TestGenerateDelegationToken were failing 35 out of 35 runs. (HBASE-24240) > TestZooKeeper is flaky > -- > > Key: HBASE-24277 > URL: https://issues.apache.org/jira/browse/HBASE-24277 > Project: HBase > Issue Type: Bug > Components: test, Zookeeper >Reporter: Duo Zhang >Priority: Major > > After checking the code, the problem is that, when creating table during > master shutdown, it is easy to hit MasterStoppedException or other strange > exceptions which make the creation fail. > In general I think this should be a test issue, need to learn why we do not > have the problem on branch-2.x. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Moved] (HBASE-24374) Potential Race in class org.apache.hadoop.hbase.io.crypto.Encryption
[ https://issues.apache.org/jira/browse/HBASE-24374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang moved HADOOP-17043 to HBASE-24374: -- Component/s: (was: common) Key: HBASE-24374 (was: HADOOP-17043) Affects Version/s: (was: 2.8.0) Project: HBase (was: Hadoop Common) > Potential Race in class org.apache.hadoop.hbase.io.crypto.Encryption > > > Key: HBASE-24374 > URL: https://issues.apache.org/jira/browse/HBASE-24374 > Project: HBase > Issue Type: Bug >Reporter: Bozhen Liu >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24374) Potential Race in class org.apache.hadoop.hbase.io.crypto.Encryption
[ https://issues.apache.org/jira/browse/HBASE-24374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107576#comment-17107576 ] Wei-Chiu Chuang commented on HBASE-24374: - [~bz_liu] would you please fill in more information? Otherwise this jira is not actionable. > Potential Race in class org.apache.hadoop.hbase.io.crypto.Encryption > > > Key: HBASE-24374 > URL: https://issues.apache.org/jira/browse/HBASE-24374 > Project: HBase > Issue Type: Bug >Reporter: Bozhen Liu >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-19256) [hbase-thirdparty] shade jetty
[ https://issues.apache.org/jira/browse/HBASE-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-19256: --- Assignee: Wei-Chiu Chuang > [hbase-thirdparty] shade jetty > -- > > Key: HBASE-19256 > URL: https://issues.apache.org/jira/browse/HBASE-19256 > Project: HBase > Issue Type: Task > Components: dependencies, thirdparty >Reporter: Mike Drob >Assignee: Wei-Chiu Chuang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22103) HDFS-13209 in Hadoop 3.3.0 breaks asyncwal
[ https://issues.apache.org/jira/browse/HBASE-22103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057491#comment-17057491 ] Wei-Chiu Chuang commented on HBASE-22103: - Haven't checked, but does the JDK11 test depend on the upcoming Hadoop 3.3.0? HBASE-23833 is required too. But this one (HBASE-22103) is easier. Happy to resume it. > HDFS-13209 in Hadoop 3.3.0 breaks asyncwal > -- > > Key: HBASE-22103 > URL: https://issues.apache.org/jira/browse/HBASE-22103 > Project: HBase > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-22103.master.001.patch > > > HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} > and broke asyncwal. > {noformat} > 2019-03-25 12:19:21,061 ERROR [Listener at localhost/54758] > asyncfs.FanOutOneBlockAsyncDFSOutputHelper(562): Couldn't properly initialize > access to HDFS internals. Please update your WAL Provider to not make use of > the 'asyncfs' provider. See HBASE-16110 for more information. 
> java.lang.NoSuchMethodException: > org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, > org.apache.hadoop.fs.permission.FsPermission, java.lang.String, > org.apache.hadoop.io.EnumSetWritable, boolean, short, long, > [Lorg.apache.hadoop.crypto.CryptoProtocolVersion;) > at java.lang.Class.getMethod(Class.java:1786) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator2(FanOutOneBlockAsyncDFSOutputHelper.java:513) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator(FanOutOneBlockAsyncDFSOutputHelper.java:530) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:557) > at > org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:169) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166) > at > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:105) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:663) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669) > at > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:126) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:813) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:519) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:460) > at > org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL.newWAL(TestAsyncFSWAL.java:72) > at > org.apache.hadoop.hbase.regionserver.wal.AbstractTestFSWAL.testWALComparator(AbstractTestFSWAL.java:194) > {noformat} > Credit: this bug 
was found by [~gabor.bota] -- This message was sent by Atlassian Jira (v8.3.4#803005)
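The NoSuchMethodException in the stack trace above arises because HBase's asyncfs code looks up ClientProtocol.create reflectively so one binary can run against multiple Hadoop releases, and HDFS-13209 changed the method's parameter list. The sketch below (hypothetical class and signatures, not the actual HBase helper) illustrates the try-newest-signature-then-fall-back pattern that such a lookup needs:

```java
import java.lang.reflect.Method;

// Sketch of the reflective-probe pattern: try the newest create() signature
// first and fall back to the older one, so a signature change like HDFS-13209
// degrades gracefully instead of failing outright.
public class CreateMethodProbe {

    // Stand-in for ClientProtocol (hypothetical): only the "old" two-argument
    // create() exists here, forcing the probe down the fallback path.
    static class FakeClientProtocol {
        public void create(String src, String clientName) { }
    }

    static Method findCreate(Class<?> clazz) throws NoSuchMethodException {
        try {
            // Hypothetical "new" signature with one extra parameter.
            return clazz.getMethod("create", String.class, String.class, String.class);
        } catch (NoSuchMethodException e) {
            // Fall back to the older, narrower signature.
            return clazz.getMethod("create", String.class, String.class);
        }
    }

    public static void main(String[] args) throws Exception {
        Method m = findCreate(FakeClientProtocol.class);
        System.out.println("resolved create() with " + m.getParameterCount() + " parameters");
    }
}
```

Each new Hadoop signature adds one more probe step at the front of the chain, which is essentially how the real helper accumulated its createFileCreator variants.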
[jira] [Commented] (HBASE-23861) Reconcile Hadoop version
[ https://issues.apache.org/jira/browse/HBASE-23861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058903#comment-17058903 ] Wei-Chiu Chuang commented on HBASE-23861: - I think it depends on the support matrix you guys want for HBase 2.2/2.1. If you want to support Hadoop 3.2 and above, this is a must. If the community decides not to cherry-pick this bug fix into 2.2/2.1, I'd suggest updating the HBase user guide http://hbase.apache.org/book.html#hadoop to indicate that Hadoop 3.2 is known to break HBase 2.2/2.1. > Reconcile Hadoop version > > > Key: HBASE-23861 > URL: https://issues.apache.org/jira/browse/HBASE-23861 > Project: HBase > Issue Type: Bug > Components: dependencies >Affects Versions: 3.0.0 > Environment: Apache Maven 3.6.1 > Hadoop 3.2.0 and above. >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > I followed the HBase book > http://hbase.apache.org/book.html#maven.build.hadoop and wanted to build > HBase (master) on top of Hadoop 3.2/3.3 but tests failed right away. > Build: > {code} > mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.2.1 > -DskipTests > {code} > Test: > {code} > mvn test -Dtest=TestHelloHBase -Dhadoop.profile=3.0 > -Dhadoop-three.version=3.2.1 > {code} > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.296 > s <<< FAILURE! - in > org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase > [ERROR] org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase > Time elapsed: 1.284 s <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/hdfs/protocol/HdfsConstants$StoragePolicySatisfierMode > at > org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase.beforeClass(TestHelloHBase.java:54) > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hdfs.protocol.HdfsConstants$StoragePolicySatisfierMode > at > org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase.beforeClass(TestHelloHBase.java:54) > {noformat} > Adding the mvn -X parameter, I was able to tell that it was because the > hbase-server module includes hadoop-distcp and hadoop-hdfs-client 3.1.2 (the > default Hadoop 3 dependency version) while it uses version 3.2.1 of the other > hadoop jars. The classpath conflict (the storage policy satisfier is a new > feature in Hadoop 3.2) failed the test. > This is reproducible on any Hadoop version 3.2 and above. It looks to me like the > versions of hadoop-distcp and hadoop-hdfs-client should be specified in the > top-level pom (they are currently specified in hbase-server/pom.xml). -- This message was sent by Atlassian Jira (v8.3.4#803005)
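The fix direction described above, moving the two artifact versions out of hbase-server/pom.xml, could look roughly like the fragment below in the top-level pom (a sketch only: the property name `hadoop-three.version` is assumed from the build commands quoted in the issue, and the actual committed change may differ):

```xml
<!-- Sketch (top-level pom.xml): manage the HDFS client/distcp versions in one
     place so hbase-server cannot silently pin an older default while the rest
     of the build uses -Dhadoop-three.version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs-client</artifactId>
      <version>${hadoop-three.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-distcp</artifactId>
      <version>${hadoop-three.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the versions managed centrally, child modules declare the dependencies without a version element and automatically follow whatever Hadoop 3 version the profile selects.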
[jira] [Assigned] (HBASE-23998) Update license for jetty-client
[ https://issues.apache.org/jira/browse/HBASE-23998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-23998: --- Assignee: Wei-Chiu Chuang > Update license for jetty-client > --- > > Key: HBASE-23998 > URL: https://issues.apache.org/jira/browse/HBASE-23998 > Project: HBase > Issue Type: Bug > Components: build, dependencies >Affects Versions: 3.0.0 > Environment: HBase master branch on Apache Hadoop 3.3.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > After HBASE-22103, compiling on Hadoop 3.3.0 fails with the following error: > {{mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT > -DskipTests -Dmaven.javadoc.skip=true}} > {noformat} > This product includes Jetty :: Asynchronous HTTP Client licensed under the > Apache Software License - Version 2.0. > ERROR: Please check this License for acceptability here: > https://www.apache.org/legal/resolved > If it is okay, then update the list named 'non_aggregate_fine' in the > LICENSE.vm file. > If it isn't okay, then revert the change that added the dependency. > More info on the dependency: > org.eclipse.jetty > jetty-client > 9.4.20.v20190813 > {noformat} > This is caused by YARN-8778, which added a dependency on > org.eclipse.jetty.websocket:websocket-client, and the Jetty 9.4 update in > HADOOP-16152. 
> {noformat} > [INFO] +- > org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.3.0-SNAPSHOT:compile > [INFO] | +- org.apache.hadoop:hadoop-yarn-client:jar:3.3.0-SNAPSHOT:compile > [INFO] | | +- > org.eclipse.jetty.websocket:websocket-client:jar:9.4.20.v20190813:compile > [INFO] | | | +- org.eclipse.jetty:jetty-client:jar:9.4.20.v20190813:compile > [INFO] | | | \- > org.eclipse.jetty.websocket:websocket-common:jar:9.4.20.v20190813:compile > [INFO] | | | \- > org.eclipse.jetty.websocket:websocket-api:jar:9.4.20.v20190813:compile > {noformat} > Propose: update > hbase-resource-bundle/src/main/resources/supplemental-models.xml to update > the license text for jetty-client. -- This message was sent by Atlassian Jira (v8.3.4#803005)
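The proposed supplemental-models.xml update would add an entry along these lines for jetty-client (a sketch; the exact element layout and license URL should follow the file's existing entries in hbase-resource-bundle):

```xml
<!-- Sketch of a supplemental model declaring jetty-client's license so the
     LICENSE check recognizes it as Apache-licensed. -->
<supplement>
  <project>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-client</artifactId>
    <licenses>
      <license>
        <name>Apache License, Version 2.0</name>
        <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
      </license>
    </licenses>
  </project>
</supplement>
```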
[jira] [Created] (HBASE-23998) Update license for jetty-client
Wei-Chiu Chuang created HBASE-23998: --- Summary: Update license for jetty-client Key: HBASE-23998 URL: https://issues.apache.org/jira/browse/HBASE-23998 Project: HBase Issue Type: Bug Components: build, dependencies Affects Versions: 3.0.0 Environment: HBase master branch on Apache Hadoop 3.3.0 Reporter: Wei-Chiu Chuang After HBASE-22103, compiling on Hadoop 3.3.0 fails with the following error: {{mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT -DskipTests -Dmaven.javadoc.skip=true}} {noformat} This product includes Jetty :: Asynchronous HTTP Client licensed under the Apache Software License - Version 2.0. ERROR: Please check this License for acceptability here: https://www.apache.org/legal/resolved If it is okay, then update the list named 'non_aggregate_fine' in the LICENSE.vm file. If it isn't okay, then revert the change that added the dependency. More info on the dependency: org.eclipse.jetty jetty-client 9.4.20.v20190813 {noformat} This is caused by YARN-8778, which added a dependency on org.eclipse.jetty.websocket:websocket-client, and the Jetty 9.4 update in HADOOP-16152. {noformat} [INFO] +- org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.3.0-SNAPSHOT:compile [INFO] | +- org.apache.hadoop:hadoop-yarn-client:jar:3.3.0-SNAPSHOT:compile [INFO] | | +- org.eclipse.jetty.websocket:websocket-client:jar:9.4.20.v20190813:compile [INFO] | | | +- org.eclipse.jetty:jetty-client:jar:9.4.20.v20190813:compile [INFO] | | | \- org.eclipse.jetty.websocket:websocket-common:jar:9.4.20.v20190813:compile [INFO] | | | \- org.eclipse.jetty.websocket:websocket-api:jar:9.4.20.v20190813:compile {noformat} Propose: update hbase-resource-bundle/src/main/resources/supplemental-models.xml to update the license text for jetty-client. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HBASE-23998) Update license for jetty-client
[ https://issues.apache.org/jira/browse/HBASE-23998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-23998 started by Wei-Chiu Chuang. --- > Update license for jetty-client > --- > > Key: HBASE-23998 > URL: https://issues.apache.org/jira/browse/HBASE-23998 > Project: HBase > Issue Type: Bug > Components: build, dependencies >Affects Versions: 3.0.0 > Environment: HBase master branch on Apache Hadoop 3.3.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > After HBASE-22103, compiling on Hadoop 3.3.0 fails with the following error: > {{mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT > -DskipTests -Dmaven.javadoc.skip=true}} > {noformat} > This product includes Jetty :: Asynchronous HTTP Client licensed under the > Apache Software License - Version 2.0. > ERROR: Please check this License for acceptability here: > https://www.apache.org/legal/resolved > If it is okay, then update the list named 'non_aggregate_fine' in the > LICENSE.vm file. > If it isn't okay, then revert the change that added the dependency. > More info on the dependency: > org.eclipse.jetty > jetty-client > 9.4.20.v20190813 > {noformat} > This is caused by YARN-8778, which added a dependency on > org.eclipse.jetty.websocket:websocket-client, and the Jetty 9.4 update in > HADOOP-16152. 
> {noformat} > [INFO] +- > org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.3.0-SNAPSHOT:compile > [INFO] | +- org.apache.hadoop:hadoop-yarn-client:jar:3.3.0-SNAPSHOT:compile > [INFO] | | +- > org.eclipse.jetty.websocket:websocket-client:jar:9.4.20.v20190813:compile > [INFO] | | | +- org.eclipse.jetty:jetty-client:jar:9.4.20.v20190813:compile > [INFO] | | | \- > org.eclipse.jetty.websocket:websocket-common:jar:9.4.20.v20190813:compile > [INFO] | | | \- > org.eclipse.jetty.websocket:websocket-api:jar:9.4.20.v20190813:compile > {noformat} > Propose: update > hbase-resource-bundle/src/main/resources/supplemental-models.xml to update > the license text for jetty-client. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060548#comment-17060548 ] Wei-Chiu Chuang commented on HBASE-23833: - Not sure what changed, but this is no longer reproducible, at least on the master branch. > The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Priority: Major > > Hadoop trunk (3.3.0) shaded protobuf and moved it to hadoop-thirdparty. As > a result, HBase asyncwal fails to compile because asyncwal uses > Hadoop's protobuf objects. > The following command > {code} > mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT > {code} > fails with the following error: > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile > (default-compile) on project hbase-server: Compilation failure: Compilation > failure: > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[361,44] > cannot access org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[362,14] > cannot access org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[366,16] > cannot access org.apache.hadoop.thirdparty.protobuf.ByteString > [ERROR] class file for 
org.apache.hadoop.thirdparty.protobuf.ByteString not > found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[375,12] > cannot find symbol > [ERROR] symbol: method > writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[702,81] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[314,66] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[330,81] > cannot access org.apache.hadoop.thirdparty.protobuf.ProtocolMessageEnum > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.ProtocolMessageEnum not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[380,10] > cannot find symbol > [ERROR] symbol: method > writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[422,77] > 
cannot access org.apache.hadoop.thirdparty.protobuf.Descriptors > [ERROR] class file for org.apache.hadoop.thirdparty.protobuf.Descriptors > not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java:[323,64] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java:[209,68] > invalid method reference > [ERROR] non-static method get() cannot be referenced from a static context > {noformat} -- This message was sent by Atlassian Jir
[jira] [Commented] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060554#comment-17060554 ] Wei-Chiu Chuang commented on HBASE-23833: - Correction: still reproducible. Use {{mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.0-SNAPSHOT}}. > The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Priority: Major > > Hadoop trunk (3.3.0) shaded protobuf and moved it to hadoop-thirdparty. As > the result, hbase asyncwal fails to compile because asyncwal uses the > Hadoop's protobuf objects. > The following command > {code} > mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT > {code} > fails with the following error: > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile > (default-compile) on project hbase-server: Compilation failure: Compilation > failure: > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[361,44] > cannot access org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[362,14] > cannot access org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[366,16] > cannot access org.apache.hadoop.thirdparty.protobuf.ByteString > [ERROR] 
class file for org.apache.hadoop.thirdparty.protobuf.ByteString not > found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[375,12] > cannot find symbol > [ERROR] symbol: method > writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[702,81] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[314,66] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[330,81] > cannot access org.apache.hadoop.thirdparty.protobuf.ProtocolMessageEnum > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.ProtocolMessageEnum not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[380,10] > cannot find symbol > [ERROR] symbol: method > writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto > [ERROR] > 
/Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[422,77] > cannot access org.apache.hadoop.thirdparty.protobuf.Descriptors > [ERROR] class file for org.apache.hadoop.thirdparty.protobuf.Descriptors > not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java:[323,64] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/SyncReplicationWALProvider.java:[209,68] > invalid method reference > [ERROR] non-static method get() cannot be referenced from a static context > {noformat} -- This
[jira] [Comment Edited] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060554#comment-17060554 ] Wei-Chiu Chuang edited comment on HBASE-23833 at 3/17/20, 1:05 AM: --- Correction: still reproducible. Use {{mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.0-SNAPSHOT}}. {code:java} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project hbase-server: Compilation failure: Compilation failure: [ERROR] /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[366,47] incompatible types: com.google.protobuf.ByteString cannot be converted to org.apache.hadoop.thirdparty.protobuf.ByteString [ERROR] /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[702,81] incompatible types: org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto cannot be converted to com.google.protobuf.MessageLite [ERROR] /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[332,66] incompatible types: org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto cannot be converted to com.google.protobuf.MessageLite [ERROR] /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java:[323,64] incompatible types: org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.PipelineAckProto cannot be converted to com.google.protobuf.MessageLite {code} was (Author: jojochuang): Correction: still reproducible. Use {{mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.0-SNAPSHOT}}. 
> The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Priority: Major > > Hadoop trunk (3.3.0) shaded protobuf and moved it to hadoop-thirdparty. As > the result, hbase asyncwal fails to compile because asyncwal uses the > Hadoop's protobuf objects. > The following command > {code} > mvn clean install -Dhadoop.profile=3.0 -Dhadoop.version=3.3.0-SNAPSHOT > {code} > fails with the following error: > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile > (default-compile) on project hbase-server: Compilation failure: Compilation > failure: > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[361,44] > cannot access org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.MessageOrBuilder not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[362,14] > cannot access org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 > [ERROR] class file for > org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3 not found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[366,16] > cannot access org.apache.hadoop.thirdparty.protobuf.ByteString > [ERROR] class file for org.apache.hadoop.thirdparty.protobuf.ByteString not > found > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[375,12] > cannot find symbol > [ERROR] symbol: method > 
writeDelimitedTo(org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream) > [ERROR] location: variable proto of type > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java:[702,81] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferEncryptorMessageProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/weichiu/sandbox/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputHelper.java:[314,66] > incompatible types: > org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto > cannot be converted to com.google.protobuf.MessageLite > [ERROR] > /Users/w
[jira] [Assigned] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-23833: --- Assignee: Wei-Chiu Chuang > The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
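The "incompatible types" and "cannot access" errors quoted above all share one root cause: a relocated (shaded) class is a completely separate type from the original, even when its source is identical. A minimal, hypothetical sketch (the nested classes below are stand-ins for com.google.protobuf.ByteString and its relocated copy under org.apache.hadoop.thirdparty.protobuf; this is not HBase or Hadoop code):

```java
// RelocationDemo.java -- hypothetical stand-ins, NOT HBase/Hadoop code.
// Two classes with identical members but different packages (simulated here
// with two different nested classes) have no type relationship in Java,
// which is why code compiled against com.google.protobuf.* cannot accept
// the relocated org.apache.hadoop.thirdparty.protobuf.* objects.
public class RelocationDemo {
  static class OriginalByteString { byte[] bytes; }   // stands in for com.google.protobuf.ByteString
  static class RelocatedByteString { byte[] bytes; }  // stands in for the relocated copy

  public static void main(String[] args) {
    // A cast between the two is a compile-time "incompatible types" error;
    // the runtime subtype check agrees:
    System.out.println(
        OriginalByteString.class.isAssignableFrom(RelocatedByteString.class)); // prints false
  }
}
```

This is why the breakage has to be fixed at the source level (or through reflection, as asyncwal already does elsewhere), not by swapping jars on the classpath.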
[jira] [Updated] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-23833: Status: Patch Available (was: Open) > The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HBASE-23834) HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jetty version mismatch
[ https://issues.apache.org/jira/browse/HBASE-23834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-23834 started by Wei-Chiu Chuang. --- > HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jetty version mismatch > > > Key: HBASE-23834 > URL: https://issues.apache.org/jira/browse/HBASE-23834 > Project: HBase > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > The HBase master branch is currently on Jetty 9.3, and the latest Hadoop 3 > (unreleased branches trunk, branch-3.2 and branch-3.1) bumped Jetty to 9.4 to > address the vulnerability CVE-2017-9735. > (1) Jetty 9.3 and 9.4 are quite different (there are incompatible API > changes) and HBase won't start on the latest Hadoop 3. > (2) In any case, HBase should update its Jetty dependency to address the > vulnerability. > Fortunately for HBase, updating to Jetty 9.4 requires no code change other > than the maven version string. > More tests are needed to verify whether HBase can run on older Hadoop versions if > its Jetty is updated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23861) Reconcile Hadoop version
[ https://issues.apache.org/jira/browse/HBASE-23861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HBASE-23861. - Resolution: Fixed Resolve this and leave the fix in master and branch-2.3. I'll file a Jira to update the user guide when applicable. Thanks! > Reconcile Hadoop version > > > Key: HBASE-23861 > URL: https://issues.apache.org/jira/browse/HBASE-23861 > Project: HBase > Issue Type: Bug > Components: dependencies >Affects Versions: 3.0.0 > Environment: Apache Maven 3.6.1 > Hadoop 3.2.0 and above. >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > I followed the HBase book > http://hbase.apache.org/book.html#maven.build.hadoop and wanted to build > HBase (master) on top of Hadoop 3.2/3.3 but tests failed right away. > Build: > {code} > mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.2.1 > -DskipTests > {code} > Test: > {code} > mvn test -Dtest=TestHelloHBase -Dhadoop.profile=3.0 > -Dhadoop-three.version=3.2.1 > {code} > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.296 > s <<< FAILURE! - in > org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase > [ERROR] org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase > Time elapsed: 1.284 s <<< ERROR! 
> java.lang.NoClassDefFoundError: > org/apache/hadoop/hdfs/protocol/HdfsConstants$StoragePolicySatisfierMode > at > org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase.beforeClass(TestHelloHBase.java:54) > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hdfs.protocol.HdfsConstants$StoragePolicySatisfierMode > at > org.apache.hbase.archetypes.exemplars.shaded_client.TestHelloHBase.beforeClass(TestHelloHBase.java:54) > {noformat} > Adding the mvn -X parameter, I was able to tell that it was because the > hbase-server module includes hadoop-distcp and hadoop-hdfs-client 3.1.2 (the > default Hadoop 3 dependency version) while it uses version 3.2.1 of the other > hadoop jars. The classpath conflict (the storage policy satisfier is a new > feature in Hadoop 3.2) failed the test. > This is reproducible on any Hadoop version 3.2 and above. It looks to me like the > versions of hadoop-distcp and hadoop-hdfs-client should be specified in the > top-level pom (they are specified in hbase-server/pom.xml). -- This message was sent by Atlassian Jira (v8.3.4#803005)
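The fix suggested above can be sketched as a dependencyManagement entry in the top-level pom. This is a hypothetical fragment, not the actual HBase pom; it assumes the hadoop-three.version property used by the build commands quoted in this issue:

```xml
<!-- Top-level pom.xml (sketch): pin hadoop-distcp and hadoop-hdfs-client to
     the same version as the rest of the Hadoop jars, so hbase-server cannot
     fall back to the default 3.1.2 while everything else resolves to the
     requested version (e.g. 3.2.1). -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-distcp</artifactId>
      <version>${hadoop-three.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs-client</artifactId>
      <version>${hadoop-three.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Managing the versions once at the top level means modules inherit a consistent Hadoop version instead of each declaring their own.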
[jira] [Commented] (HBASE-23833) The relocated hadoop-thirdparty protobuf breaks HBase asyncwal
[ https://issues.apache.org/jira/browse/HBASE-23833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062943#comment-17062943 ] Wei-Chiu Chuang commented on HBASE-23833: - Thanks [~stack] for your time! Hadoop 3.3.0 migrates to Protobuf 3.7.1. Initially I thought another approach (and perhaps an easier solution) would be to ship another version of hbase-shaded-netty. The trick is to relocate the references to com.google.protobuf.* classes in hbase-shaded-netty to point to org.apache.hadoop.thirdparty.protobuf.*. The problem is that netty is used not only for HBase-to-HDFS communication, but also for HBase internal communication. What you suggested makes more sense. And it's not entirely crazy (no crazier than the reflection hack in the whole FanOut stuff :) ) I'll study it further. > The relocated hadoop-thirdparty protobuf breaks HBase asyncwal > -- > > Key: HBASE-23833 > URL: https://issues.apache.org/jira/browse/HBASE-23833 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24027) Spotbugs: Return value of putIfAbsent is ignored
Wei-Chiu Chuang created HBASE-24027: --- Summary: Spotbugs: Return value of putIfAbsent is ignored Key: HBASE-24027 URL: https://issues.apache.org/jira/browse/HBASE-24027 Project: HBase Issue Type: Bug Components: master Reporter: Wei-Chiu Chuang Looks like a regression from HBASE-23561. [https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1301/2/artifact/yetus-general-check/output/branch-spotbugs-hbase-server-warnings.html] {quote} Return value of putIfAbsent is ignored, but node is reused in org.apache.hadoop.hbase.master.assignment.RegionStates.createRegionStateNode(RegionInfo) Bug type RV_RETURN_VALUE_OF_PUTIFABSENT_IGNORED (click for details) In class org.apache.hadoop.hbase.master.assignment.RegionStates In method org.apache.hadoop.hbase.master.assignment.RegionStates.createRegionStateNode(RegionInfo) Called method java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(Object, Object) Type org.apache.hadoop.hbase.master.assignment.RegionStateNode Value loaded from node At RegionStates.java:[line 133] {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
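The flagged RV_RETURN_VALUE_OF_PUTIFABSENT_IGNORED pattern, and its usual fix, can be sketched outside of HBase. ConcurrentMap.putIfAbsent returns the previously mapped value when the key was already present, so ignoring the return value means a caller that loses the race keeps using a node that never made it into the map. This is a hypothetical stand-in (a StringBuilder replaces RegionStateNode), not the actual RegionStates code:

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class PutIfAbsentDemo {
  static final ConcurrentSkipListMap<String, StringBuilder> NODES =
      new ConcurrentSkipListMap<>();

  // Buggy shape: if another caller inserted first, 'node' is NOT the mapped
  // instance, so later mutations of 'node' are invisible to everyone else.
  static StringBuilder createBuggy(String key) {
    StringBuilder node = new StringBuilder();
    NODES.putIfAbsent(key, node); // return value ignored -> spotbugs warning
    return node;
  }

  // Fixed shape: prefer the winner of the race when one exists.
  static StringBuilder createFixed(String key) {
    StringBuilder node = new StringBuilder();
    StringBuilder existing = NODES.putIfAbsent(key, node);
    return existing != null ? existing : node;
  }

  public static void main(String[] args) {
    StringBuilder first = createFixed("r1");
    StringBuilder second = createFixed("r1"); // key already present
    System.out.println(first == second); // prints true: callers share the mapped node
  }
}
```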
[jira] [Commented] (HBASE-8868) add metric to report client shortcircuit reads
[ https://issues.apache.org/jira/browse/HBASE-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065186#comment-17065186 ] Wei-Chiu Chuang commented on HBASE-8868: Reviving this one ... Rebased the patch and posted it as a PR on GitHub. > add metric to report client shortcircuit reads > -- > > Key: HBASE-8868 > URL: https://issues.apache.org/jira/browse/HBASE-8868 > Project: HBase > Issue Type: Improvement > Components: metrics, regionserver >Affects Versions: 0.94.8, 0.95.1 >Reporter: Viral Bajaria >Assignee: Wei-Chiu Chuang >Priority: Minor > Attachments: HBASE-8868.master.001.patch > > > With the availability of shortcircuit reads, when the feature is enabled > there is no metric which exposes how many times the regionserver was able to > shortcircuit the read and not make an IPC to the datanode. > It will be great to add the metric and expose it via Ganglia. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (HBASE-23829) Get `-PrunSmallTests` passing on JDK11
[ https://issues.apache.org/jira/browse/HBASE-23829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reopened HBASE-23829: - Reopen this. I am getting consistent test failure as reported in HBASE-23976, and I traced it to this commit. I am on JDK 8. {noformat} 2020-03-23 20:57:04,934 ERROR [Time-limited test] bucket.BucketCache(312): Can't restore from file[/Users/weichiu/sandbox/hbase/hbase-server/target/test-data/c9d48c67-87ed-70cb-e19c-4dc6c14c29c6/bucket.persistence] because of java.io.IOException: Mismatch of checksum! The persistent checksum is `9"�0����X!ɍ=, but the calculate checksum is 1��h&�B���D(� at org.apache.hadoop.hbase.io.hfile.bucket.PersistentIOEngine.verifyFileIntegrity(PersistentIOEngine.java:55) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.parsePB(BucketCache.java:1158) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.retrieveFromFile(BucketCache.java:1106) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:310) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:258) at org.apache.hadoop.hbase.io.hfile.bucket.TestVerifyBucketCacheFile.testRetrieveFromFile(TestVerifyBucketCacheFile.java:116) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runners.Suite.runChild(Suite.java:128) at org.junit.runners.Suite.runChild(Suite.java:27) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282) at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) at java.util.concurrent.FutureTask.run(FutureTask.java) at java.lang.Thread.run(Thread.java:748) {noformat} > Get `-PrunSmallTests` passing on JDK11 > -- > > Key: HBASE-23829 > URL: https://issues.apache.org/jira/browse/HBASE-23829 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > Start with the small tests, shaking out issues identified by the harness. 
So > far it seems like {{-Dhadoop.profile=3.0}} and > {{-Dhadoop-three.version=3.3.0-SNAPSHOT}} may be required. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23976) [flakey test] TestVerifyBucketCacheFile
[ https://issues.apache.org/jira/browse/HBASE-23976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065299#comment-17065299 ] Wei-Chiu Chuang commented on HBASE-23976: - I am seeing the exact same test failure after HBASE-23829. (master branch 059c1894516d1ebd211b3490615365631b159849) The only difference is it is reproducible for me reliably 100% of the time. > [flakey test] TestVerifyBucketCacheFile > --- > > Key: HBASE-23976 > URL: https://issues.apache.org/jira/browse/HBASE-23976 > Project: HBase > Issue Type: Test > Components: regionserver, test >Affects Versions: 3.0.0 >Reporter: Nick Dimiduk >Priority: Major > > I see sporadic failures in this test class. Sometimes a failure on > {{assertTrue(file.delete())}}, an inconsistent annoyance. However, this one > looks more sinister. > {noformat} > 2020-03-12 12:11:35,059 ERROR [Time-limited test] bucket.BucketCache(312): > Can't restore from > file[/Users/ndimiduk/repos/apache/hbase/hbase-server/target/test-data/5e5c5f5f-d5c2-94b2-8ce9-cf561f4f19f7/bucket.persistence] > because of > java.io.IOException: Mismatch of checksum! The persistent checksum is > ���Bk���2�Ӏk, but the calculate checksum is > �o���r��w��c��4 > at > org.apache.hadoop.hbase.io.hfile.bucket.PersistentIOEngine.verifyFileIntegrity(PersistentIOEngine.java:55) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.parsePB(BucketCache.java:1158) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.retrieveFromFile(BucketCache.java:1106) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:310) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:258) > at > org.apache.hadoop.hbase.io.hfile.bucket.TestVerifyBucketCacheFile.testRetrieveFromFile(TestVerifyBucketCacheFile.java:116) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23829) Get `-PrunSmallTests` passing on JDK11
[ https://issues.apache.org/jira/browse/HBASE-23829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17065960#comment-17065960 ] Wei-Chiu Chuang commented on HBASE-23829: - Yes, it does happen to me all the time. I wonder if it has anything to do with my local environment (Mac, SSD), but the same test failure is also seen in another GitHub PR: https://github.com/apache/hbase/pull/1334 > Get `-PrunSmallTests` passing on JDK11 > -- > > Key: HBASE-23829 > URL: https://issues.apache.org/jira/browse/HBASE-23829 > Project: HBase > Issue Type: Sub-task > Components: test >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > Start with the small tests, shaking out issues identified by the harness. So > far it seems like {{-Dhadoop.profile=3.0}} and > {{-Dhadoop-three.version=3.3.0-SNAPSHOT}} may be required. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23976) [flakey test] TestVerifyBucketCacheFile
[ https://issues.apache.org/jira/browse/HBASE-23976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066235#comment-17066235 ] Wei-Chiu Chuang commented on HBASE-23976: - I am on the master branch, reproducible on both hadoop 2 / hadoop 3 profile, JDK 1.8.0_221, Mac. The checksum mismatch does not fail the test after all. It was the expected behavior. However, an assertion to verify the bucket cache persistence file can be deleted, doesn't pass: {noformat} 2020-03-24 14:28:35,546 ERROR [Time-limited test] bucket.BucketCache(312): Can't restore from file[/Users/weichiu/sandbox/hbase/hbase-server/target/test-data/7eb1e4ba-52c2-f94d-3c7b-4fdff57c8ad4/bucket.persistence] because of java.io.IOException: Mismatch of checksum! The persistent checksum is N~*b��&��cȗJF, but the calculate checksum is n�.�ʼ�H���:�� at org.apache.hadoop.hbase.io.hfile.bucket.PersistentIOEngine.verifyFileIntegrity(PersistentIOEngine.java:55) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.parsePB(BucketCache.java:1158) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.retrieveFromFile(BucketCache.java:1106) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:310) at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.(BucketCache.java:258) at org.apache.hadoop.hbase.io.hfile.bucket.TestVerifyBucketCacheFile.testRetrieveFromFile(TestVerifyBucketCacheFile.java:116) ... 
2020-03-24 14:28:35,552 INFO [Time-limited test] bucket.BucketCache(329): Started bucket cache; ioengine=file:/Users/weichiu/sandbox/hbase/hbase-server/target/test-data/7eb1e4ba-52c2-f94d-3c7b-4fdff57c8ad4/bucket.cache, capacity=32 MB, blockSize=8 KB, writerThreadNum=3, writerQLen=64, persistencePath=/Users/weichiu/sandbox/hbase/hbase-server/target/test-data/7eb1e4ba-52c2-f94d-3c7b-4fdff57c8ad4/bucket.persistence, bucketAllocator=org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocator 2020-03-24 14:28:35,655 INFO [Time-limited test] bucket.BucketCache(1210): Shutdown bucket cache: IO persistent=true; path to write=/Users/weichiu/sandbox/hbase/hbase-server/target/test-data/7eb1e4ba-52c2-f94d-3c7b-4fdff57c8ad4/bucket.persistence 2020-03-24 14:28:35,655 INFO [Time-limited test-BucketCacheWriter-2] bucket.BucketCache$WriterThread(914): Time-limited test-BucketCacheWriter-2 exiting, cacheEnabled=false 2020-03-24 14:28:35,655 INFO [Time-limited test-BucketCacheWriter-0] bucket.BucketCache$WriterThread(914): Time-limited test-BucketCacheWriter-0 exiting, cacheEnabled=false 2020-03-24 14:28:35,655 INFO [Time-limited test-BucketCacheWriter-1] bucket.BucketCache$WriterThread(914): Time-limited test-BucketCacheWriter-1 exiting, cacheEnabled=false java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at org.junit.Assert.assertTrue(Assert.java:42) at org.junit.Assert.assertTrue(Assert.java:53) at org.apache.hadoop.hbase.io.hfile.bucket.TestVerifyBucketCacheFile.testRetrieveFromFile(TestVerifyBucketCacheFile.java:132) ... {noformat} Mysteriously, the persistence file is gone right after it's written. JDK bug? 
{code} private void persistToFile() throws IOException { assert !cacheEnabled; if (!ioEngine.isPersistent()) { throw new IOException("Attempt to persist non-persistent cache mappings!"); } try (FileOutputStream fos = new FileOutputStream(persistencePath, false)) { fos.write(ProtobufMagic.PB_MAGIC); BucketProtoUtils.toPB(this).writeDelimitedTo(fos); } --> persistence file is gone from my local dir at this point. } {code} > [flakey test] TestVerifyBucketCacheFile > --- > > Key: HBASE-23976 > URL: https://issues.apache.org/jira/browse/HBASE-23976 > Project: HBase > Issue Type: Test > Components: regionserver, test >Affects Versions: 3.0.0 >Reporter: Nick Dimiduk >Priority: Major > > I see sporadic failures in this test class. Sometimes a failure on > {{assertTrue(file.delete())}}, an inconsistent annoyance. However, this one > looks more sinister. > {noformat} > 2020-03-12 12:11:35,059 ERROR [Time-limited test] bucket.BucketCache(312): > Can't restore from > file[/Users/ndimiduk/repos/apache/hbase/hbase-server/target/test-data/5e5c5f5f-d5c2-94b2-8ce9-cf561f4f19f7/bucket.persistence] > because of > java.io.IOException: Mismatch of checksum! The persistent checksum is > ���Bk���2�Ӏk, but the calculate checksum is > �o���r��w��c��4 > at > org.apache.hadoop.hbase.io.hfile.bucket.PersistentIOEngine.verifyFileIntegrity(PersistentIOEngine.java:55) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.parsePB(BucketCache.java:1158) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.retrieveFromFile(BucketCache.java:1106)
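The persist-then-verify sequence the test exercises can be sketched with plain JDK I/O. This is an illustrative analogue only, not the actual BucketCache/PersistentIOEngine code; the file names and the digest algorithm are assumptions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Arrays;

public class PersistVerifySketch {
    // Compute a digest over the file contents, in the style of a
    // PersistentIOEngine integrity check (MD5 here is illustrative).
    static byte[] checksum(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        return md.digest(Files.readAllBytes(file));
    }

    public static void main(String[] args) throws Exception {
        Path persistence = Files.createTempFile("bucket", ".persistence");
        Files.write(persistence, "cache mappings".getBytes());
        byte[] expected = checksum(persistence);

        // On restore, re-compute and compare; a mismatch means the file
        // changed (or was truncated/rewritten) between persist and restore.
        byte[] actual = checksum(persistence);
        if (!Arrays.equals(expected, actual)) {
            throw new IOException("Mismatch of checksum!");
        }
        // The test's other failure mode: the file vanishing after the write.
        if (!Files.exists(persistence)) {
            throw new IOException("persistence file is gone");
        }
        Files.delete(persistence);
        System.out.println("checksum verified");
    }
}
```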
[jira] [Created] (HBASE-24087) Backport HBASE-8868 to branch-2.2
Wei-Chiu Chuang created HBASE-24087: --- Summary: Backport HBASE-8868 to branch-2.2 Key: HBASE-24087 URL: https://issues.apache.org/jira/browse/HBASE-24087 Project: HBase Issue Type: Task Reporter: Wei-Chiu Chuang There's just a trivial conflict in import. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-8868) add metric to report client shortcircuit reads
[ https://issues.apache.org/jira/browse/HBASE-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17071415#comment-17071415 ] Wei-Chiu Chuang commented on HBASE-8868: [~reidchan] i'll take a look. > add metric to report client shortcircuit reads > -- > > Key: HBASE-8868 > URL: https://issues.apache.org/jira/browse/HBASE-8868 > Project: HBase > Issue Type: Improvement > Components: metrics, regionserver >Affects Versions: 0.94.8, 0.95.1 >Reporter: Viral Bajaria >Assignee: Wei-Chiu Chuang >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > Attachments: HBASE-8868.master.001.patch > > > With the availability of shortcircuit reads, when the feature is enabled > there is no metric which exposes how many times the regionserver was able to > shortcircuit the read and not make a IPC to the datanode. > It will be great to add the metric and expose it via Ganglia. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24087) Backport HBASE-8868 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-24087: --- Assignee: Wei-Chiu Chuang > Backport HBASE-8868 to branch-2.2 > - > > Key: HBASE-24087 > URL: https://issues.apache.org/jira/browse/HBASE-24087 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > There's just a trivial conflict in import. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24087) Backport HBASE-8868 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24087: Status: Patch Available (was: Open) > Backport HBASE-8868 to branch-2.2 > - > > Key: HBASE-24087 > URL: https://issues.apache.org/jira/browse/HBASE-24087 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > There's just a trivial conflict in import. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24124) hbase-filesystem to use guava from hbase-thirdparty
Wei-Chiu Chuang created HBASE-24124: --- Summary: hbase-filesystem to use guava from hbase-thirdparty Key: HBASE-24124 URL: https://issues.apache.org/jira/browse/HBASE-24124 Project: HBase Issue Type: Task Components: Filesystem Integration Affects Versions: 1.0.0-alpha2 Reporter: Wei-Chiu Chuang hbase-filesystem repo is on guava23.0: {noformat} $ grep -r "guava" . ./pom.xml:23.0 ./hbase-oss/pom.xml: com.google.guava ./hbase-oss/pom.xml: guava ./hbase-oss/pom.xml: ${guava.version} ./hbase-oss/pom.xml:
[jira] [Updated] (HBASE-24124) hbase-filesystem to use guava from hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-24124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24124: Fix Version/s: 1.0.0-alpha2 > hbase-filesystem to use guava from hbase-thirdparty > --- > > Key: HBASE-24124 > URL: https://issues.apache.org/jira/browse/HBASE-24124 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Affects Versions: 1.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Priority: Major > Fix For: 1.0.0-alpha2 > > > hbase-filesystem repo is on guava23.0: > {noformat} > $ grep -r "guava" . > ./pom.xml:23.0 > ./hbase-oss/pom.xml: com.google.guava > ./hbase-oss/pom.xml: guava > ./hbase-oss/pom.xml: ${guava.version} > ./hbase-oss/pom.xml:
[jira] [Updated] (HBASE-24124) hbase-filesystem to use guava from hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-24124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24124: Affects Version/s: (was: 1.0.0-alpha2) 1.0.0-alpha1 > hbase-filesystem to use guava from hbase-thirdparty > --- > > Key: HBASE-24124 > URL: https://issues.apache.org/jira/browse/HBASE-24124 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Affects Versions: 1.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Priority: Major > Fix For: 1.0.0-alpha2 > > > hbase-filesystem repo is on guava23.0: > {noformat} > $ grep -r "guava" . > ./pom.xml:23.0 > ./hbase-oss/pom.xml: com.google.guava > ./hbase-oss/pom.xml: guava > ./hbase-oss/pom.xml: ${guava.version} > ./hbase-oss/pom.xml:
[jira] [Created] (HBASE-24125) hbase-filesystem to use hbase-thirdparty 3.2.0
Wei-Chiu Chuang created HBASE-24125: --- Summary: hbase-filesystem to use hbase-thirdparty 3.2.0 Key: HBASE-24125 URL: https://issues.apache.org/jira/browse/HBASE-24125 Project: HBase Issue Type: Task Affects Versions: 1.0.0-alpha1 Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang Fix For: 1.0.0-alpha2 hbase-filesystem is currently on hbase-thirdparty 2.2.1. Update it to 3.2.0 so we can use the latest guava. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24125) hbase-filesystem to use hbase-thirdparty 3.2.0
[ https://issues.apache.org/jira/browse/HBASE-24125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24125: Component/s: Filesystem Integration > hbase-filesystem to use hbase-thirdparty 3.2.0 > -- > > Key: HBASE-24125 > URL: https://issues.apache.org/jira/browse/HBASE-24125 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Affects Versions: 1.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 1.0.0-alpha2 > > > hbase-filesystem is currently on hbase-thirdparty 2.2.1. Update it to 3.2.0 > so we can use the latest guava. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24125) hbase-filesystem to use hbase-thirdparty 3.2.0
[ https://issues.apache.org/jira/browse/HBASE-24125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24125: Priority: Trivial (was: Major) > hbase-filesystem to use hbase-thirdparty 3.2.0 > -- > > Key: HBASE-24125 > URL: https://issues.apache.org/jira/browse/HBASE-24125 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Affects Versions: 1.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Fix For: 1.0.0-alpha2 > > > hbase-filesystem is currently on hbase-thirdparty 2.2.1. Update it to 3.2.0 > so we can use the latest guava. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24124) hbase-filesystem to use guava from hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-24124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HBASE-24124. - Resolution: Fixed Thanks for the reviews from [~tamaas] and [~busbey]! > hbase-filesystem to use guava from hbase-thirdparty > --- > > Key: HBASE-24124 > URL: https://issues.apache.org/jira/browse/HBASE-24124 > Project: HBase > Issue Type: Task > Components: Filesystem Integration >Affects Versions: 1.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 1.0.0-alpha2 > > > hbase-filesystem repo is on guava23.0: > {noformat} > $ grep -r "guava" . > ./pom.xml:23.0 > ./hbase-oss/pom.xml: com.google.guava > ./hbase-oss/pom.xml: guava > ./hbase-oss/pom.xml: ${guava.version} > ./hbase-oss/pom.xml:
[jira] [Updated] (HBASE-24087) Backport HBASE-8868 to branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-24087: Fix Version/s: 2.2.5 Resolution: Fixed Status: Resolved (was: Patch Available) The PR is merged. Resolve this jira. > Backport HBASE-8868 to branch-2.2 > - > > Key: HBASE-24087 > URL: https://issues.apache.org/jira/browse/HBASE-24087 > Project: HBase > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Fix For: 2.2.5 > > > There's just a trivial conflict in import. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-22953) Supporting Hadoop 3.3.0
[ https://issues.apache.org/jira/browse/HBASE-22953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HBASE-22953. - Fix Version/s: 2.3.0 3.0.0-alpha-1 Resolution: Fixed > Supporting Hadoop 3.3.0 > --- > > Key: HBASE-22953 > URL: https://issues.apache.org/jira/browse/HBASE-22953 > Project: HBase > Issue Type: Umbrella >Reporter: Wei-Chiu Chuang >Priority: Major > Fix For: 2.3.0, 3.0.0-alpha-1 > > > The Hadoop community has started to discuss a 3.3.0 release. > [http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201908.mbox/%3CCAD%2B%2BeCneLtC%2BkfxRRKferufnNxhaXXGa0YPaVp%3DEBbc-R5JfqA%40mail.gmail.com%3E] > While still early, it wouldn't hurt to start exploring what's coming in > Hadoop 3.3.0. In particular, there are a bunch of new features that brings in > all sorts of new dependencies. > > I will use this Jira to list things that are related to Hadoop 3.3.0. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-26691) Replacing log4j with reload4j for branch-2.x
[ https://issues.apache.org/jira/browse/HBASE-26691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479781#comment-17479781 ] Wei-Chiu Chuang commented on HBASE-26691: - +1 > Replacing log4j with reload4j for branch-2.x > > > Key: HBASE-26691 > URL: https://issues.apache.org/jira/browse/HBASE-26691 > Project: HBase > Issue Type: Task > Components: logging >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.5.0, 2.4.10 > > > There are several new CVEs for log4j1 now. > As it is not suitable to upgrade to log4j2 for 2.x releases, let's replace > the log4j1 dependencies with reload4j. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-26691) Replacing log4j with reload4j for branch-2.x
[ https://issues.apache.org/jira/browse/HBASE-26691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479836#comment-17479836 ] Wei-Chiu Chuang commented on HBASE-26691: - reload4j is a drop-in replacement for log4j1, although in reality the shading makes it not as trivial as it sounds... > Replacing log4j with reload4j for branch-2.x > > > Key: HBASE-26691 > URL: https://issues.apache.org/jira/browse/HBASE-26691 > Project: HBase > Issue Type: Task > Components: logging >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.5.0, 2.4.10 > > > There are several new CVEs for log4j1 now. > As it is not suitable to upgrade to log4j2 for 2.x releases, let's replace > the log4j1 dependencies with reload4j. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-26691) Replacing log4j with reload4j for branch-2.x
[ https://issues.apache.org/jira/browse/HBASE-26691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479838#comment-17479838 ] Wei-Chiu Chuang commented on HBASE-26691: - There's a DISCUSS thread in Hadoop's dev ML. We should start one in HBase's dev ML. > Replacing log4j with reload4j for branch-2.x > > > Key: HBASE-26691 > URL: https://issues.apache.org/jira/browse/HBASE-26691 > Project: HBase > Issue Type: Task > Components: logging >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Critical > Fix For: 2.5.0, 2.4.10 > > > There are several new CVEs for log4j1 now. > As it is not suitable to upgrade to log4j2 for 2.x releases, let's replace > the log4j1 dependencies with reload4j. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-26046) [JDK17] Add a JDK17 profile
[ https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17483558#comment-17483558 ] Wei-Chiu Chuang commented on HBASE-26046: - I had worked on it, but it's a shame: I thought the PR was up for review. You would need these for JDK17: https://github.com/jojochuang/hbase/commit/b909db7ca7c221308ad5aba1ea58317c77358b94 I'm tied up with the log4j stuff right now, so I won't be able to continue on it. Feel free to pick this up, [~ndimiduk] > [JDK17] Add a JDK17 profile > --- > > Key: HBASE-26046 > URL: https://issues.apache.org/jira/browse/HBASE-26046 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > While HBase builds fine with JDK17, tests fail because a number of Java SDK > modules are no longer exposed to unnamed modules by default. We need to open > them up. > Without which, the tests fail for errors like: > {noformat} > [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 > s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel > [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel > Time elapsed: 0.273 s <<< ERROR! > java.lang.ExceptionInInitializerError > at > org.apache.hadoop.hbase.rest.model.TestNamespacesModel.(TestNamespacesModel.java:43) > Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make > protected final java.lang.Class > java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws > java.lang.ClassFormatError accessible: module java.base does not "opens > java.lang" to unnamed module @56ef9176 > at > org.apache.hadoop.hbase.rest.model.TestNamespacesModel.(TestNamespacesModel.java:43) > {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
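The quoted error points at the fix: the test JVM needs `--add-opens java.base/java.lang=ALL-UNNAMED` (and similar opens for the other failing modules), typically via the surefire argLine in a JDK17 profile. A small probe that checks, at runtime, whether that open is in effect; this is an illustration of the failure mode, not code from the linked commit:

```java
import java.lang.reflect.Method;

public class AddOpensProbe {
    // Returns true if protected ClassLoader.defineClass is reflectively
    // accessible. On JDK 16+ this requires
    // --add-opens java.base/java.lang=ALL-UNNAMED on the JVM command line;
    // on JDK 8 it succeeds unconditionally.
    public static boolean canOpenJavaLang() {
        try {
            Method m = ClassLoader.class.getDeclaredMethod(
                "defineClass", String.class, byte[].class, int.class, int.class);
            // Throws InaccessibleObjectException (a RuntimeException, JDK 9+)
            // when java.lang is not opened to the unnamed module.
            m.setAccessible(true);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        } catch (RuntimeException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("java.lang open to unnamed module: " + canOpenJavaLang());
    }
}
```

The result depends on the running JDK and its flags, which is exactly why tests pass on JDK 8/11 and fail on 17 without the extra arguments.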
[jira] [Assigned] (HBASE-26046) [JDK17] Add a JDK17 profile
[ https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-26046: --- Assignee: (was: Wei-Chiu Chuang) > [JDK17] Add a JDK17 profile > --- > > Key: HBASE-26046 > URL: https://issues.apache.org/jira/browse/HBASE-26046 > Project: HBase > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Priority: Major > > While HBase builds fine with JDK17, tests fail because a number of Java SDK > modules are no longer exposed to unnamed modules by default. We need to open > them up. > Without which, the tests fail for errors like: > {noformat} > [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 > s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel > [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel > Time elapsed: 0.273 s <<< ERROR! > java.lang.ExceptionInInitializerError > at > org.apache.hadoop.hbase.rest.model.TestNamespacesModel.(TestNamespacesModel.java:43) > Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make > protected final java.lang.Class > java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws > java.lang.ClassFormatError accessible: module java.base does not "opens > java.lang" to unnamed module @56ef9176 > at > org.apache.hadoop.hbase.rest.model.TestNamespacesModel.(TestNamespacesModel.java:43) > {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
[ https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17490051#comment-17490051 ] Wei-Chiu Chuang commented on HBASE-26734: - I wrote that part of the code so I'll take a closer look. slf4j-log4j2 doesn't seem related to this issue. Is it reproducible? Could you run it at debug log level? I'd just like to confirm it's not older Hadoop artifacts that slipped in accidentally. > FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1 > -- > > Key: HBASE-26734 > URL: https://issues.apache.org/jira/browse/HBASE-26734 > Project: HBase > Issue Type: Sub-task > Environment: JDK: jdk1.8.0_221 > Hadoop: hadoop-3.3.1 > Hbase: hbase-2.3.1 / hbase-2.3.7 >Reporter: chen qing >Priority: Major > Attachments: hbase-root-master-master.log, > hbase-root-regionserver-slave01.log > > > I just had the same problem when i started the hbase cluster. HRegionServers > were started and HMaster threw an exception. 
> This is HMaster's log: > {code:java} > 2022-02-05 18:07:51,323 WARN [RS-EventLoopGroup-1-1] > concurrent.DefaultPromise: An exception was thrown by > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete() > java.lang.IllegalArgumentException: object is not an instance of declaring > class > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:69) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) > at > 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477) > at > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104) > at > org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:615) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:653) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529) > at > org.apache.hba
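The `IllegalArgumentException: object is not an instance of declaring class` at `Method.invoke` typically means a `Method` resolved against one class was invoked on an object of an incompatible class; in this stack trace the suspect is ProtobufDecoder mixing protobuf classes from different artifacts (e.g. shaded vs unshaded). A toy reproduction of the exception mechanism only, not the actual classpath issue:

```java
import java.lang.reflect.Method;

public class WrongReceiverDemo {
    public static class A { public int id() { return 1; } }
    public static class B { public int id() { return 2; } }

    // Resolve a Method against A, then invoke it on a B instance. B is not a
    // subtype of A, so invoke() throws IllegalArgumentException with the same
    // "object is not an instance of declaring class" message as above.
    public static boolean invokeOnWrongReceiver() {
        try {
            Method aId = A.class.getMethod("id");
            aId.invoke(new B());
            return false;  // would mean the invoke unexpectedly succeeded
        } catch (IllegalArgumentException e) {
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeOnWrongReceiver());  // prints "true"
    }
}
```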
[jira] [Assigned] (HBASE-25646) Possible Resource Leak in CatalogJanitor
[ https://issues.apache.org/jira/browse/HBASE-25646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-25646: --- Assignee: Narges Shadab > Possible Resource Leak in CatalogJanitor > > > Key: HBASE-25646 > URL: https://issues.apache.org/jira/browse/HBASE-25646 > Project: HBase > Issue Type: Bug >Reporter: Narges Shadab >Assignee: Narges Shadab >Priority: Major > > We noticed a possible resource leak > [here|https://github.com/apache/hbase/blob/53128fe7c17e6220113884fbad69d75c59ed56b7/hbase-server/src/main/java/org/apache/hadoop/hbase/master/janitor/CatalogJanitor.java#L411]. > {{close()}} is never called on {{inStream}}. > I'll submit a pull request to fix it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
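The leak pattern described (a stream whose `close()` is never called, or is skipped when an earlier statement throws) and its standard fix can be sketched as follows; the method names are hypothetical, not the actual CatalogJanitor code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamLeakFix {
    // Leaky pattern: if read() (or any parsing in between) throws,
    // close() is never reached and the stream leaks.
    static int leaky(InputStream inStream) throws IOException {
        int first = inStream.read();
        inStream.close();  // skipped entirely on an earlier exception
        return first;
    }

    // Fix: try-with-resources closes the stream on all paths,
    // including exceptional ones.
    public static int safe(InputStream inStream) throws IOException {
        try (InputStream in = inStream) {
            return in.read();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(safe(new ByteArrayInputStream(new byte[]{42})));  // prints 42
    }
}
```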
[jira] [Assigned] (HBASE-25685) asyncprofiler2.0 no longer supports svg; wants html
[ https://issues.apache.org/jira/browse/HBASE-25685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-25685: --- Assignee: Wei-Chiu Chuang > asyncprofiler2.0 no longer supports svg; wants html > --- > > Key: HBASE-25685 > URL: https://issues.apache.org/jira/browse/HBASE-25685 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Assignee: Wei-Chiu Chuang >Priority: Major > > asyncprofiler2.0 is out. Its a nice tool. Unfortunately, it dropped the svg > formatting option that we use in our servlet. Now it wants you to pass html. > Lets fix. > Old -o on asyncprofiler1.x > -o fmtoutput format: summary|traces|flat|collapsed|svg|tree|jfr > New -o asyncprofiler 2.x > -o fmtoutput format: flat|traces|collapsed|flamegraph|tree|jfr > If you pass svg to 2.0, it does nothing ... If you run the command hbase is > running you see: > {code} > /tmp/prof-output$ sudo -u hbase /usr/lib/async-profiler/profiler.sh -e cpu -d > 10 -o svg -f /tmp/prof-output/async-prof-pid-8346-cpu-1x.svg 8346 > [ERROR] SVG format is obsolete, use .html for FlameGraph > {code} > At a minimum can make it so the OUTPUT param supports HTML. Here is current > enum state: > {code} > enum Output { > SUMMARY, > TRACES, > FLAT, > COLLAPSED, > SVG, > TREE, > JFR > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-25685) asyncprofiler2.0 no longer supports svg; wants html
[ https://issues.apache.org/jira/browse/HBASE-25685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-25685: --- Assignee: Michael Stack (was: Wei-Chiu Chuang) > asyncprofiler2.0 no longer supports svg; wants html > --- > > Key: HBASE-25685 > URL: https://issues.apache.org/jira/browse/HBASE-25685 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.3 > > > asyncprofiler2.0 is out. Its a nice tool. Unfortunately, it dropped the svg > formatting option that we use in our servlet. Now it wants you to pass html. > Lets fix. > Old -o on asyncprofiler1.x > -o fmtoutput format: summary|traces|flat|collapsed|svg|tree|jfr > New -o asyncprofiler 2.x > -o fmtoutput format: flat|traces|collapsed|flamegraph|tree|jfr > If you pass svg to 2.0, it does nothing ... If you run the command hbase is > running you see: > {code} > /tmp/prof-output$ sudo -u hbase /usr/lib/async-profiler/profiler.sh -e cpu -d > 10 -o svg -f /tmp/prof-output/async-prof-pid-8346-cpu-1x.svg 8346 > [ERROR] SVG format is obsolete, use .html for FlameGraph > {code} > At a minimum can make it so the OUTPUT param supports HTML. Here is current > enum state: > {code} > enum Output { > SUMMARY, > TRACES, > FLAT, > COLLAPSED, > SVG, > TREE, > JFR > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
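The minimal enum change the issue asks for can be sketched like this. The `FLAMEGRAPH` constant and the extension mapping are one possible shape of the fix, assumed for illustration, not the committed patch:

```java
public class ProfilerOutput {
    // The servlet's old enum accepted "svg"; async-profiler 2.x replaced it
    // with "flamegraph", which is written to an .html file.
    public enum Output {
        SUMMARY, TRACES, FLAT, COLLAPSED, FLAMEGRAPH, TREE, JFR;

        // File extension to hand to profiler.sh's -f option.
        public String fileExtension() {
            switch (this) {
                case FLAMEGRAPH: return "html";  // 2.x: flamegraphs are HTML
                case JFR: return "jfr";
                default: return "txt";           // text-based formats
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(Output.FLAMEGRAPH.fileExtension());  // prints "html"
    }
}
```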
[jira] [Created] (HBASE-25771) Recommend Hadoop 3.x
Wei-Chiu Chuang created HBASE-25771: --- Summary: Recommend Hadoop 3.x Key: HBASE-25771 URL: https://issues.apache.org/jira/browse/HBASE-25771 Project: HBase Issue Type: Task Reporter: Wei-Chiu Chuang We have this section in the hbase book: {quote} Hadoop 2.x is recommended. Hadoop 2.x is faster and includes features, such as short-circuit reads (see Leveraging local data), which will help improve your HBase random read profile. Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience. HBase does not support running with earlier versions of Hadoop. See the table below for requirements specific to different HBase versions. Hadoop 3.x is still in early access releases and has not yet been sufficiently tested by the HBase community for production use cases. {quote} The Hadoop 2.x development is winding down. 2.10.x will be the last dev branch. On the contrary, we've got years of production users on Hadoop 3.x already so I think it's time to change our stance. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25771) Recommend Hadoop 3.x
[ https://issues.apache.org/jira/browse/HBASE-25771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17320984#comment-17320984 ] Wei-Chiu Chuang commented on HBASE-25771: - Apart from a number of edge cases (which were subsequently fixed), the Hadoop 2.10 client should be able to talk to a Hadoop 3.x cluster without problems. > Recommend Hadoop 3.x > > > Key: HBASE-25771 > URL: https://issues.apache.org/jira/browse/HBASE-25771 > Project: HBase > Issue Type: Task > Components: documentation >Reporter: Wei-Chiu Chuang >Priority: Major > > We have this section in the hbase book: > {quote} > Hadoop 2.x is recommended. > Hadoop 2.x is faster and includes features, such as short-circuit reads (see > Leveraging local data), which will help improve your HBase random read > profile. Hadoop 2.x also includes important bug fixes that will improve your > overall HBase experience. HBase does not support running with earlier > versions of Hadoop. See the table below for requirements specific to > different HBase versions. > Hadoop 3.x is still in early access releases and has not yet been > sufficiently tested by the HBase community for production use cases. > {quote} > The Hadoop 2.x development is winding down. 2.10.x will be the last dev > branch. On the contrary, we've got years of production users on Hadoop 3.x > already so I think it's time to change our stance. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose
[ https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16766583#comment-16766583 ] Wei-Chiu Chuang commented on HBASE-21879: - Is this about HDFS-2834 that added DFSInputStream#read(ByteBuffer)? But that was resolved in 2.0.2-alpha > Read HFile's block to ByteBuffer directly instead of to byte for reducing > young gc purpose > -- > > Key: HBASE-21879 > URL: https://issues.apache.org/jira/browse/HBASE-21879 > Project: HBase > Issue Type: Improvement >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4 > > Attachments: QPS-latencies-before-HBASE-21879.png, > gc-data-before-HBASE-21879.png > > > In HFileBlock#readBlockDataInternal, we have the following: > {code} > @VisibleForTesting > protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset, > long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, > boolean updateMetrics) > throws IOException { > // . > // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with > BBPool (offheap). > byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize]; > int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize, > onDiskSizeWithHeader - preReadHeaderSize, true, offset + > preReadHeaderSize, pread); > if (headerBuf != null) { > // ... > } > // ... > } > {code} > In the read path, we still read the block from hfile to on-heap byte[], then > copy the on-heap byte[] to offheap bucket cache asynchronously, and in my > 100% get performance test, I also observed some frequent young gc, The > largest memory footprint in the young gen should be the on-heap block byte[]. > In fact, we can read HFile's block to ByteBuffer directly instead of to > byte[] for reducing young gc purpose. we did not implement this before, > because no ByteBuffer reading interface in the older HDFS client, but 2.7+ > has supported this now, so we can fix this now. 
I think. > Will provide an patch and some perf-comparison for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
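The ByteBuffer read interface referred to above (HDFS's `ByteBufferReadable`, added by HDFS-2834) is not needed to illustrate the allocation difference the description is after. A plain java.nio analogue, with a temp file standing in for the HFile, shows the idea of reading a block directly into an off-heap buffer instead of a fresh on-heap `byte[]`:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ByteBufferReadSketch {
    // Read a block into a direct (off-heap) ByteBuffer, avoiding the
    // intermediate on-heap byte[] the old read path allocated per block,
    // which is what fed the young-gen churn described above.
    public static ByteBuffer readBlock(Path file, long offset, int size) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(size);  // off-heap: no young-gen garbage
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (buf.hasRemaining()) {
                if (ch.read(buf, offset + buf.position()) < 0) {
                    break;  // EOF before the block was fully read
                }
            }
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("block", ".bin");
        Files.write(f, new byte[]{1, 2, 3, 4});
        ByteBuffer b = readBlock(f, 1, 2);
        System.out.println(b.get() + "," + b.get());  // prints "2,3"
        Files.delete(f);
    }
}
```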
[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose
[ https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767852#comment-16767852 ] Wei-Chiu Chuang commented on HBASE-21879: - Sure [~openinx] would you like to contribute a patch at HDFS-3246? > Read HFile's block to ByteBuffer directly instead of to byte for reducing > young gc purpose > -- > > Key: HBASE-21879 > URL: https://issues.apache.org/jira/browse/HBASE-21879 > Project: HBase > Issue Type: Improvement >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.4 > > Attachments: QPS-latencies-before-HBASE-21879.png, > gc-data-before-HBASE-21879.png > > > In HFileBlock#readBlockDataInternal, we have the following: > {code} > @VisibleForTesting > protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset, > long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, > boolean updateMetrics) > throws IOException { > // . > // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with > BBPool (offheap). > byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize]; > int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize, > onDiskSizeWithHeader - preReadHeaderSize, true, offset + > preReadHeaderSize, pread); > if (headerBuf != null) { > // ... > } > // ... > } > {code} > In the read path, we still read the block from hfile to on-heap byte[], then > copy the on-heap byte[] to offheap bucket cache asynchronously, and in my > 100% get performance test, I also observed some frequent young gc, The > largest memory footprint in the young gen should be the on-heap block byte[]. > In fact, we can read HFile's block to ByteBuffer directly instead of to > byte[] for reducing young gc purpose. we did not implement this before, > because no ByteBuffer reading interface in the older HDFS client, but 2.7+ > has supported this now, so we can fix this now. I think. 
> Will provide a patch and some perf-comparison for this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
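The idea in the quoted description — read each block straight into an off-heap buffer instead of allocating a throwaway on-heap `byte[]` — can be illustrated outside HDFS with plain `java.nio`. This is only an analogy sketch: the class name `DirectBlockRead`, the temp file, and the 256-byte "block" are made up for illustration, and the real fix goes through HDFS's ByteBuffer pread interface (HDFS-3246), not `FileChannel`.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBlockRead {

    // Read `len` bytes at `offset` into a direct (off-heap) ByteBuffer,
    // analogous to reading an HFile block without an intermediate on-heap
    // byte[] that would otherwise churn the young generation.
    static ByteBuffer preadDirect(FileChannel ch, long offset, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(len);
        while (buf.hasRemaining()) {
            // Positional read: does not move the channel's own position,
            // so concurrent readers of the same file are unaffected.
            int n = ch.read(buf, offset + buf.position());
            if (n < 0) {
                throw new IOException("EOF before reading " + len + " bytes");
            }
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("block", ".bin");
        byte[] data = new byte[256];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) i;
        }
        Files.write(tmp, data);
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            ByteBuffer block = preadDirect(ch, 64, 32);
            System.out.println(block.isDirect() + " " + (block.get(0) & 0xff)); // prints: true 64
        } finally {
            Files.delete(tmp);
        }
    }
}
```

The same shape applies on the write side of the comment thread: once the block lives in a direct buffer, handing it to an off-heap bucket cache needs no extra copy through the heap.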
[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose
[ https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767865#comment-16767865 ] Wei-Chiu Chuang commented on HBASE-21879: - It doesn't look like we'll ever have the next Hadoop 2.7.x release. LinkedIn is one of the big users of this release and they're looking to upgrade to 2.10 soon. I'm pretty sure we can get HDFS-3246 into Hadoop 2.x. It doesn't look like a big change. (Or you can patch Hadoop yourself) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose
[ https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767865#comment-16767865 ] Wei-Chiu Chuang edited comment on HBASE-21879 at 2/14/19 4:02 AM: -- It doesn't look like we'll ever have the next Hadoop 2.7.x release. LinkedIn is one of the big users of this release and they're looking to upgrade to 2.10 soon. I'm pretty sure we can get HDFS-3246 into Hadoop 2.x. It doesn't look like a big change. (Or you can patch Hadoop yourself) Regarding the upgrade plan, I can say Hadoop 2.8.x is quite stable, given that Yahoo adopted this release line, and I think they'll stay there for quite a while. You may try Hadoop 2.9 if there's new stuff you need, but other than that I am not hearing anyone adopting it. was (Author: jojochuang): It doesn't look like we'll ever have the next Hadoop 2.7.x release. LinkedIn is one of the big users in this release and they're looking to upgrade to 2.10 soon. I'm pretty sure we can get HDFS-3246 into Hadoop 2.x. It doesn't look like a big change. (Or you can patch Hadoop yourself) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose
[ https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767865#comment-16767865 ] Wei-Chiu Chuang edited comment on HBASE-21879 at 2/14/19 4:04 AM: -- It doesn't look like we'll ever have the next Hadoop 2.7.x release. LinkedIn is one of the big users in this release and they're looking to upgrade to 2.10 soon. As well as Microsoft. I heard there is a desire to consolidate onto the same release line between LinkedIn and Microsoft. I'm pretty sure we can get HDFS-3246 into Hadoop 2.x. It doesn't look like a big change. (Or you can patch Hadoop yourself) Regarding the upgrade plan, I can say Hadoop 2.8.x is quite stable, given that Yahoo adopted this release line, and I think they'll stay there for quite a while. You may try Hadoop 2.9 if there's new stuff you need, but other than that I am not hearing any one adopting it. was (Author: jojochuang): It doesn't look like we'll ever have the next Hadoop 2.7.x release. LinkedIn is one of the big users in this release and they're looking to upgrade to 2.10 soon. I'm pretty sure we can get HDFS-3246 into Hadoop 2.x. It doesn't look like a big change. (Or you can patch Hadoop yourself) Regarding the upgrade plan, I can say Hadoop 2.8.x is quite stable, given that Yahoo adopted this release line, and I think they'll stay there for quite a while. You may try Hadoop 2.9 if there's new stuff you need, but other than that I am not hearing any one adopting it. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22017) Failed to become active master due to lease 'XXX' does not exist
[ https://issues.apache.org/jira/browse/HBASE-22017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788235#comment-16788235 ] Wei-Chiu Chuang commented on HBASE-22017: - Looks like the Master crashed inside {{AssignmentManager#loadMeta()}} -- loading meta requires scanning the RS. And that's because the RS (hadoop15) was being shut down. It looks like the RS scan has retry mechanisms, but if the RS was shut down, it doesn't look like there's anything the Master can do but crash. > Failed to become active master due to lease 'XXX' does not exist > > > Key: HBASE-22017 > URL: https://issues.apache.org/jira/browse/HBASE-22017 > Project: HBase > Issue Type: Bug >Reporter: lujie >Priority: Critical > Attachments: logs.zip > > > {code:java} > 2019-03-06 01:36:17,040 ERROR [master/hadoop11:16000:becomeActiveMaster] > master.HMaster: * ABORTING master hadoop11,16000,1551807353275: Unhandled > exception. Starting shutdown. * > org.apache.hadoop.hbase.regionserver.LeaseException: > org.apache.hadoop.hbase.regionserver.LeaseException: lease > '3449673378019934209' does not exist > at org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:224) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3434) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > 
at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) > at > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:349) > at > org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:344) > at > org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:242) > at > org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58) > at > org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:387) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:361) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107) > at > org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
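The retry behavior discussed above (the RS scan retries, but the Master ultimately gives up and aborts if the RegionServer stays down) follows a standard retrying-caller shape. This is a hedged sketch with made-up names — it is not HBase's actual `RpcRetryingCallerImpl`, just the pattern visible in the stack trace:

```java
import java.util.concurrent.Callable;

public class RetryingCaller {

    // Retry an operation up to maxAttempts, pausing between failures, and
    // rethrow the last failure once retries are exhausted -- at which point
    // the caller (here, the Master becoming active) has no choice but to abort.
    static <T> T callWithRetries(Callable<T> op, int maxAttempts, long pauseMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e; // remember the most recent failure
                if (attempt < maxAttempts) {
                    Thread.sleep(pauseMillis); // back off before the next try
                }
            }
        }
        throw last; // all retries exhausted
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // simulate a scan that fails twice, then succeeds
        String result = callWithRetries(() -> {
            if (failures[0]-- > 0) {
                throw new RuntimeException("scanner lease lost");
            }
            return "meta loaded";
        }, 5, 10);
        System.out.println(result); // prints: meta loaded
    }
}
```

When the failing server never comes back, every attempt throws and the loop falls through to `throw last` — the analogue of the Master aborting in this report.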
[jira] [Commented] (HBASE-22017) Failed to become active master due to lease 'XXX' does not exist
[ https://issues.apache.org/jira/browse/HBASE-22017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788467#comment-16788467 ] Wei-Chiu Chuang commented on HBASE-22017: - The one that was shut down hosted the hbase:meta table, which is the critical table, and the Master can't start without it. > Failed to become active master due to lease 'XXX' does not exist > > > Key: HBASE-22017 > URL: https://issues.apache.org/jira/browse/HBASE-22017 > Project: HBase > Issue Type: Bug >Reporter: lujie >Priority: Critical > Attachments: logs.zip > > > Test cluster: hadoop11(master), hadoop14(slave), hadoop15(slave). > before the code executes at > org.apache.hadoop.hbase.regionserver.HStore#getScanner (line 2027), > hadoop15 shuts down, then master startup fails > {code:java} > 2019-03-06 01:36:17,040 ERROR [master/hadoop11:16000:becomeActiveMaster] > master.HMaster: * ABORTING master hadoop11,16000,1551807353275: Unhandled > exception. Starting shutdown. * > org.apache.hadoop.hbase.regionserver.LeaseException: > org.apache.hadoop.hbase.regionserver.LeaseException: lease > '3449673378019934209' does not exist > at org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:224) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3434) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100) > at > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:361) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:349) > at > org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:344) > at > org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:242) > at > org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58) > at > org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:387) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:361) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107) > at > org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HBASE-22021) A small refactoring for NettyServerCall.sendResponseIfReady
[ https://issues.apache.org/jira/browse/HBASE-22021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HBASE-22021: --- Assignee: Wei-Chiu Chuang > A small refactoring for NettyServerCall.sendResponseIfReady > --- > > Key: HBASE-22021 > URL: https://issues.apache.org/jira/browse/HBASE-22021 > Project: HBase > Issue Type: Improvement > Components: rpc >Reporter: Zheng Wang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: starter > > before: > connection.channel.writeAndFlush(this); > > after: > connection.doRespond(this); -- This message was sent by Atlassian JIRA (v7.6.3#76005)
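The refactoring proposed in HBASE-22021 is a standard encapsulation move: call sites stop reaching through `connection.channel` and instead go through a single `doRespond()` entry point, so the details of sending a response (flushing, ordering, future metrics) live in one place. A minimal sketch with made-up types — the real `NettyServerCall` and its connection class look different:

```java
public class RespondRefactor {

    // Stand-in for Netty's channel; only the one method the sketch needs.
    interface Channel {
        void writeAndFlush(Object msg);
    }

    static class Connection {
        final Channel channel;
        int responses; // counter exposed only so the demo/test can observe calls

        Connection(Channel channel) {
            this.channel = channel;
        }

        // The single place that knows *how* a response goes out. Callers no
        // longer touch connection.channel directly, so changing the transport
        // or adding bookkeeping later touches only this method.
        void doRespond(Object call) {
            channel.writeAndFlush(call);
            responses++;
        }
    }

    public static void main(String[] args) {
        Connection conn = new Connection(msg -> System.out.println("flushed: " + msg));
        // before: conn.channel.writeAndFlush(call);
        // after:
        conn.doRespond("call-1"); // prints: flushed: call-1
    }
}
```

The behavior is identical today; the value is that every future change to response handling has exactly one home.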
[jira] [Created] (HBASE-22087) Update LICENSE/shading for the latest Hadoop trunk
Wei-Chiu Chuang created HBASE-22087: --- Summary: Update LICENSE/shading for the latest Hadoop trunk Key: HBASE-22087 URL: https://issues.apache.org/jira/browse/HBASE-22087 Project: HBase Issue Type: Bug Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang The following dependencies were added in Hadoop trunk (3.3.0), and HBase no longer compiles successfully against it: YARN-8778 added jline 3.9.0 HADOOP-15775 added javax.activation HADOOP-15531 added org.apache.commons.text (commons-text) HADOOP-15764 added dnsjava (org.xbill) Some of these are needed to support JDK 9/10/11 in Hadoop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the latest Hadoop trunk
[ https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16799127#comment-16799127 ] Wei-Chiu Chuang commented on HBASE-22087: - Should these be shaded within Hadoop itself? I was under the impression that if these are leaked through Hadoop, Hadoop wouldn't build? Hmm, if that's the case, the only thing needed on the HBase side is to update the LICENSE for jline due to YARN-8778. > Update LICENSE/shading for the latest Hadoop trunk > -- > > Key: HBASE-22087 > URL: https://issues.apache.org/jira/browse/HBASE-22087 > Project: HBase > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22103) HDFS-13209 in Hadoop 3.3.0 breaks asyncwal
Wei-Chiu Chuang created HBASE-22103: --- Summary: HDFS-13209 in Hadoop 3.3.0 breaks asyncwal Key: HBASE-22103 URL: https://issues.apache.org/jira/browse/HBASE-22103 Project: HBase Issue Type: Bug Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} and broke asyncfs. {noformat} 2019-03-25 12:19:21,061 ERROR [Listener at localhost/54758] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(562): Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information. java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, org.apache.hadoop.io.EnumSetWritable, boolean, short, long, [Lorg.apache.hadoop.crypto.CryptoProtocolVersion;) at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator2(FanOutOneBlockAsyncDFSOutputHelper.java:513) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator(FanOutOneBlockAsyncDFSOutputHelper.java:530) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:557) at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51) at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:169) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:105) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:663) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669) at 
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:126) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:813) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:519) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:460) at org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL.newWAL(TestAsyncFSWAL.java:72) at org.apache.hadoop.hbase.regionserver.wal.AbstractTestFSWAL.testWALComparator(AbstractTestFSWAL.java:194) {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
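The `NoSuchMethodException` above comes from asyncfs probing `ClientProtocol.create` reflectively; when Hadoop adds a parameter, the probe for the old signature fails and the helper must fall back to another candidate. A self-contained sketch of that probe-candidates-in-order pattern, demonstrated on `String` methods rather than the actual Hadoop classes (the candidate lists below are illustrative, not the real HBase ones):

```java
import java.lang.reflect.Method;

public class ReflectiveLookup {

    // Try each candidate parameter list in order and return the first method
    // that exists -- the general pattern used to cope with an API (like
    // DistributedFileSystem/ClientProtocol.create) gaining parameters across
    // library releases.
    static Method findFirst(Class<?> cls, String name, Class<?>[][] candidates)
            throws NoSuchMethodException {
        for (Class<?>[] params : candidates) {
            try {
                return cls.getMethod(name, params);
            } catch (NoSuchMethodException e) {
                // This signature is absent in the running version; try the next.
            }
        }
        throw new NoSuchMethodException(name + " (no candidate signature matched)");
    }

    public static void main(String[] args) throws Exception {
        // indexOf(double) never exists, so the lookup falls through to the
        // two-argument indexOf(String, int), which does.
        Method m = findFirst(String.class, "indexOf",
            new Class<?>[][] {
                { double.class },              // not present: skipped
                { String.class, int.class }    // present: returned
            });
        System.out.println(m.getParameterCount()); // prints: 2
    }
}
```

The eventual HBASE-22103 fix adds the new Hadoop 3.3 signature to the list of candidates so the probe succeeds again instead of exhausting the list.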
[jira] [Updated] (HBASE-22103) HDFS-13209 in Hadoop 3.3.0 breaks asyncwal
[ https://issues.apache.org/jira/browse/HBASE-22103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-22103: Attachment: HBASE-22103.master.001.patch > HDFS-13209 in Hadoop 3.3.0 breaks asyncwal > -- > > Key: HBASE-22103 > URL: https://issues.apache.org/jira/browse/HBASE-22103 > Project: HBase > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-22103.master.001.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22103) HDFS-13209 in Hadoop 3.3.0 breaks asyncwal
[ https://issues.apache.org/jira/browse/HBASE-22103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-22103: Description: HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} and broke asyncfs. {noformat} 2019-03-25 12:19:21,061 ERROR [Listener at localhost/54758] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(562): Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information. java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, org.apache.hadoop.io.EnumSetWritable, boolean, short, long, [Lorg.apache.hadoop.crypto.CryptoProtocolVersion;) at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator2(FanOutOneBlockAsyncDFSOutputHelper.java:513) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator(FanOutOneBlockAsyncDFSOutputHelper.java:530) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:557) at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51) at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:169) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:105) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:663) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:126) at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:813) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:519) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:460) at org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL.newWAL(TestAsyncFSWAL.java:72) at org.apache.hadoop.hbase.regionserver.wal.AbstractTestFSWAL.testWALComparator(AbstractTestFSWAL.java:194) {noformat} Credit: this bug was found by [~gabor.bota] was: HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} and broke asyncfs. {noformat} 2019-03-25 12:19:21,061 ERROR [Listener at localhost/54758] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(562): Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information. java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, org.apache.hadoop.io.EnumSetWritable, boolean, short, long, [Lorg.apache.hadoop.crypto.CryptoProtocolVersion;) at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator2(FanOutOneBlockAsyncDFSOutputHelper.java:513) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator(FanOutOneBlockAsyncDFSOutputHelper.java:530) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:557) at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51) at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:169) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166) at 
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:105) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:663) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:126) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:813) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:519) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:460) at org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL.
[jira] [Updated] (HBASE-22103) HDFS-13209 in Hadoop 3.3.0 breaks asyncwal
[ https://issues.apache.org/jira/browse/HBASE-22103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-22103: Description: HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} and broke asyncwal. {noformat} 2019-03-25 12:19:21,061 ERROR [Listener at localhost/54758] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(562): Couldn't properly initialize access to HDFS internals. Please update your WAL Provider to not make use of the 'asyncfs' provider. See HBASE-16110 for more information. java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, org.apache.hadoop.io.EnumSetWritable, boolean, short, long, [Lorg.apache.hadoop.crypto.CryptoProtocolVersion;) at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator2(FanOutOneBlockAsyncDFSOutputHelper.java:513) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createFileCreator(FanOutOneBlockAsyncDFSOutputHelper.java:530) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:557) at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:51) at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:169) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:105) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createAsyncWriter(AsyncFSWAL.java:663) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:126) at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:813) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:519) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:460) at org.apache.hadoop.hbase.regionserver.wal.TestAsyncFSWAL.newWAL(TestAsyncFSWAL.java:72) at org.apache.hadoop.hbase.regionserver.wal.AbstractTestFSWAL.testWALComparator(AbstractTestFSWAL.java:194) {noformat} Credit: this bug was found by [~gabor.bota] was: HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} and broke asyncfs.
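The NoSuchMethodException above arises because FanOutOneBlockAsyncDFSOutputHelper resolves ClientProtocol.create reflectively at class-initialization time, and HDFS-13209 changed the method's parameter list. A minimal sketch of that version-tolerant lookup pattern, assuming a hypothetical stand-in class (Proto, FileCreatorLookup, and the overloads are illustrative, not real HDFS types):

```java
import java.lang.reflect.Method;

public class FileCreatorLookup {
    // Hypothetical stand-in for ClientProtocol: two create() overloads that
    // differ in arity, mirroring how HDFS-13209 added a parameter.
    static class Proto {
        public String create(String src, boolean overwrite, long blockSize) {
            return "old";
        }
        public String create(String src, boolean overwrite, long blockSize,
                             String storagePolicy) {
            return "new";
        }
    }

    /** Try the newer signature first, then fall back to the older one;
     *  fail only when no known signature matches. */
    static Method resolveCreate() {
        try {
            return Proto.class.getMethod("create",
                String.class, boolean.class, long.class, String.class);
        } catch (NoSuchMethodException e) {
            try {
                return Proto.class.getMethod("create",
                    String.class, boolean.class, long.class);
            } catch (NoSuchMethodException e2) {
                // This is the situation the log above reports.
                throw new Error(
                    "Couldn't properly initialize access to HDFS internals", e2);
            }
        }
    }

    public static void main(String[] args) {
        Method create = resolveCreate();
        System.out.println("resolved create/" + create.getParameterCount());
    }
}
```

The helper's createFileCreator follows the same shape across Hadoop versions, so fixing a break like this presumably means adding one more candidate signature to the lookup chain rather than changing callers.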
[jira] [Commented] (HBASE-22103) HDFS-13209 in Hadoop 3.3.0 breaks asyncwal
[ https://issues.apache.org/jira/browse/HBASE-22103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800697#comment-16800697 ] Wei-Chiu Chuang commented on HBASE-22103: - [~Apache9] certainly. Supporting Hadoop 3.3 is not my priority now. Gabor is updating Guava in Hadoop and found this issue. > HDFS-13209 in Hadoop 3.3.0 breaks asyncwal > -- > > Key: HBASE-22103 > URL: https://issues.apache.org/jira/browse/HBASE-22103 > Project: HBase > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-22103.master.001.patch > > > HDFS-13209 added an additional parameter to {{DistributedFileSystem.create}} > and broke asyncwal. > {noformat} > 2019-03-25 12:19:21,061 ERROR [Listener at localhost/54758] > asyncfs.FanOutOneBlockAsyncDFSOutputHelper(562): Couldn't properly initialize > access to HDFS internals. Please update your WAL Provider to not make use of > the 'asyncfs' provider. See HBASE-16110 for more information. 
> java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, org.apache.hadoop.io.EnumSetWritable, boolean, short, long, [Lorg.apache.hadoop.crypto.CryptoProtocolVersion;)
> {noformat}
> Credit: this bug was found by [~gabor.bota]
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the latest Hadoop trunk
[ https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801629#comment-16801629 ] Wei-Chiu Chuang commented on HBASE-22087: - [~busbey] Please help me understand this. All of them are shaded in the hadoop-client-runtime jar, but it seems that HBase uses non-client Hadoop artifacts, so they get pulled in without shading.
* org.jline: hbase-http uses hadoop-mapreduce-client-core, which uses hadoop-yarn-client, which uses org.jline. hadoop-client-runtime-3.3.0-SNAPSHOT.jar has the shaded classes, though.
* javax.activation: hbase-common uses hadoop-common, which uses javax.activation. hadoop-client-runtime-3.3.0-SNAPSHOT.jar has the shaded classes, though.
* org.apache.commons.text (commons-text): hbase-common uses hadoop-common, which uses commons-text. hadoop-client-runtime-3.3.0-SNAPSHOT.jar has the shaded classes, though.
* org.xbill (dnsjava): hbase-common uses hadoop-common, which uses dnsjava. hadoop-client-runtime-3.3.0-SNAPSHOT.jar has the shaded classes, though.
According to the [Apache Hadoop Downstream Developer’s Guide|https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/DownstreamDev.html#Build_Artifacts], hadoop-yarn-client/hadoop-mapreduce-client-core are client artifacts as well. Does that mean jline should be shaded within hadoop-yarn-client/hadoop-mapreduce-client-core?
> Update LICENSE/shading for the latest Hadoop trunk
> --
>
> Key: HBASE-22087
> URL: https://issues.apache.org/jira/browse/HBASE-22087
> Project: HBase
> Issue Type: Bug
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Major
>
> The following list of dependencies were added in Hadoop trunk (3.3.0) and
> HBase does not compile successfully:
> YARN-8778 added jline 3.9.0
> HADOOP-15775 added javax.activation
> HADOOP-15531 added org.apache.common.text (commons-text)
> HADOOP-15764 added dnsjava (org.xbill)
> Some of these are needed to support JDK9/10/11 in Hadoop. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
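One way to handle an unshaded transitive dependency like org.jline on the HBase side is a maven-shade-plugin relocation. A hypothetical fragment, assuming a shaded-client module (the relocation pattern and coordinates are illustrative, not HBase's actual shaded poms):

```xml
<!-- Hypothetical maven-shade-plugin fragment: relocate org.jline so a
     shaded client jar does not leak Hadoop's transitive jline classes. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>org.jline</pattern>
        <shadedPattern>org.apache.hadoop.hbase.shaded.org.jline</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

Relocation rewrites both the packaged classes and the bytecode references to them, which is why the LICENSE/NOTICE bookkeeping in this issue has to track each newly shaded artifact.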
[jira] [Updated] (HBASE-22087) Update LICENSE/shading for the latest Hadoop trunk
[ https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-22087: Attachment: depcheck_hadoop33.log -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the latest Hadoop trunk
[ https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801651#comment-16801651 ] Wei-Chiu Chuang commented on HBASE-22087: - Attached the dependency:tree output [^depcheck_hadoop33.log] {noformat} mvn dependency:tree -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.0-SNAPSHOT -Dmaven.javadoc.skip=true -DskipTests {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk
[ https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HBASE-22087: Summary: Update LICENSE/shading for the dependencies from the latest Hadoop trunk (was: Update LICENSE/shading for the latest Hadoop trunk) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-22087) Update LICENSE/shading for the dependencies from the latest Hadoop trunk
[ https://issues.apache.org/jira/browse/HBASE-22087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802966#comment-16802966 ] Wei-Chiu Chuang commented on HBASE-22087: - Submitted a patch for the precommit check. bq. This is concerning, as relocating and including javax.activation is probably not correct behavior. This was explicitly included in Hadoop because of JDK9 support. I excluded javax.activation from HBase; this should be consistent with the handling of javax.annotation. Updated the org.jline LICENSE. Shaded the rest of the dependencies (org.jline, commons-text, dnsjava). Excluded some unused classes/files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
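Excluding javax.activation from the Hadoop dependencies, as the comment above describes, can be sketched as a Maven dependency exclusion (a hypothetical fragment; the exact artifact coordinates and the poms the real patch touches are assumptions, not taken from the patch):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.3.0-SNAPSHOT</version>
  <exclusions>
    <!-- Illustrative coordinates: drop the javax.activation artifact pulled
         in via HADOOP-15775, mirroring how javax.annotation is handled. -->
    <exclusion>
      <groupId>javax.activation</groupId>
      <artifactId>javax.activation-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

An exclusion keeps the classes out of HBase's dependency tree entirely, so no relocation or LICENSE entry is needed for them, at the cost of relying on the JDK (or another provider) to supply the API when it is actually used.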