[jira] [Resolved] (HBASE-28152) Replace scala.util.parsing.json with org.json4s.jackson which is used in Spark too
[ https://issues.apache.org/jira/browse/HBASE-28152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-28152. - Fix Version/s: hbase-connectors-1.0.1 Resolution: Fixed > Replace scala.util.parsing.json with org.json4s.jackson which is used in Spark > too > --- > > Key: HBASE-28152 > URL: https://issues.apache.org/jira/browse/HBASE-28152 > Project: HBase > Issue Type: New Feature > Components: spark >Affects Versions: connector-1.0.0 >Reporter: Attila Zsolt Piros >Assignee: Attila Zsolt Piros >Priority: Major > Fix For: hbase-connectors-1.0.1 > > > In https://issues.apache.org/jira/browse/HBASE-28137, to support Spark 3.4, > scala-parser-combinators was added as a direct dependency of the HBase Spark > Connector. > This was needed because Spark 3.4 does not use scala-parser-combinators, so it is > not inherited as a transitive dependency. > But this solution has a disadvantage. Since the HBase Spark Connector assembly > jar does not include any 3rd party libraries, scala-parser-combinators > must be added to the Spark classpath for the HBase Spark Connector to work. > A much better solution is to replace scala.util.parsing.json with > org.json4s.jackson, which is used by Spark core, see > https://github.com/apache/spark/blob/branch-3.4/core/pom.xml#L279-L280. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (HBASE-28247) Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags
[ https://issues.apache.org/jira/browse/HBASE-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros reopened HBASE-28247: - > Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM > test flags > > > Key: HBASE-28247 > URL: https://issues.apache.org/jira/browse/HBASE-28247 > Project: HBase > Issue Type: Bug > Components: java >Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Fix For: 2.6.0, 3.0.0-beta-1, 2.5.7 > > > While testing with JDK17 we have found that we need to add > {noformat} > --add-exports java.base/sun.net.dns=ALL-UNNAMED > --add-exports java.base/sun.net.util=ALL-UNNAMED > {noformat} > on top of what is already defined in _hbase-surefire.jdk11.flags_, otherwise > RS and Master startup fails in the Hadoop security code. > While this does not affect the test suite (at least not the commonly run > tests), I consider hbase-surefire.jdk11.flags to be an unofficial resource for > getting HBase to run on newer JDK versions.
[jira] [Resolved] (HBASE-28247) Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM test flags
[ https://issues.apache.org/jira/browse/HBASE-28247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-28247. - Resolution: Fixed > Add java.base/sun.net.dns and java.base/sun.net.util export to jdk11 JVM > test flags > > > Key: HBASE-28247 > URL: https://issues.apache.org/jira/browse/HBASE-28247 > Project: HBase > Issue Type: Bug > Components: java >Affects Versions: 2.6.0, 3.0.0-alpha-4, 2.4.17, 2.5.6 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.7 > > > While testing with JDK17 we have found that we need to add > {noformat} > --add-exports java.base/sun.net.dns=ALL-UNNAMED > --add-exports java.base/sun.net.util=ALL-UNNAMED > {noformat} > on top of what is already defined in _hbase-surefire.jdk11.flags_, otherwise > RS and Master startup fails in the Hadoop security code. > While this does not affect the test suite (at least not the commonly run > tests), I consider hbase-surefire.jdk11.flags to be an unofficial resource for > getting HBase to run on newer JDK versions.
[jira] [Resolved] (HBASE-20562) [UMBRELLA] Use parameterized logging
[ https://issues.apache.org/jira/browse/HBASE-20562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-20562. - Resolution: Won't Fix > [UMBRELLA] Use parameterized logging > > > Key: HBASE-20562 > URL: https://issues.apache.org/jira/browse/HBASE-20562 > Project: HBase > Issue Type: Umbrella >Affects Versions: 3.0.0-alpha-1 >Reporter: Balazs Meszaros >Priority: Major > > We should use the parameterized log message feature of slf4j/logback. (Use {} > instead of string concatenation.)
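As a quick illustration of the idea behind the umbrella issue: with parameterized logging the call site passes the message template and the arguments separately, and the string is only assembled when the record is actually handled. The sketch below uses java.util.logging's MessageFormat-style {0}/{1} placeholders (stdlib only, so it is runnable as-is); slf4j uses {} placeholders instead, but the principle is the same. The class and logger names here are made up for the demo.

```java
import java.util.logging.Formatter;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class ParamLogDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false); // keep the console handler out of the way
        StringBuilder captured = new StringBuilder();
        log.addHandler(new Handler() {
            private final Formatter fmt = new SimpleFormatter();
            @Override public void publish(LogRecord r) {
                // formatMessage substitutes the {0}/{1} placeholders lazily,
                // only when the record reaches a handler
                captured.append(fmt.formatMessage(r));
            }
            @Override public void flush() {}
            @Override public void close() {}
        });
        // Parameterized call site: no string concatenation here
        log.log(Level.INFO, "Region {0} closed in {1} ms", new Object[] {"r1", 42});
        System.out.println(captured);
    }
}
```

With slf4j the equivalent call would be `log.info("Region {} closed in {} ms", "r1", 42)`, which also avoids building the string when the log level is disabled.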
[jira] [Resolved] (HBASE-28254) Flaky test: TestTableShell
[ https://issues.apache.org/jira/browse/HBASE-28254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-28254. - Fix Version/s: 2.6.0 2.4.18 3.0.0-beta-1 2.5.8 Resolution: Fixed > Flaky test: TestTableShell > -- > > Key: HBASE-28254 > URL: https://issues.apache.org/jira/browse/HBASE-28254 > Project: HBase > Issue Type: Test > Components: flakies, integration tests >Reporter: Andor Molnar >Assignee: Andor Molnar >Priority: Major > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.8 > > > The test runs the following Ruby commands: > {noformat} > # Insert test data > @test_table.put(1, "x:a", 1) > @test_table.put(2, "x:raw1", 11) > @test_table.put(2, "x:raw1", 11) > @test_table.put(2, "x:raw1", 11) > @test_table.put(2, "x:raw1", 11) > {noformat} > And validates the versions with: > {noformat} > args = { VERSIONS => 10, RAW => true } # Since 4 versions of the row with rowkey > 2 have been added, we can use any number >= 4 for VERSIONS to scan all 4 > versions. > num_rows = 0 > @test_table._scan_internal(args) do # Raw Scan > num_rows += 1 > end > # 5 expected: 1 from row key '1' and the other 4 from row key '2' > assert_equal(num_rows, 5, > 'Num rows scanned without RAW/VERSIONS are not 5') > {noformat} > This sometimes (almost always on fast machines) fails, because it only finds > 3 versions out of 4. I believe this is because the commands run too fast and > insert data with the same timestamp, so HBase cannot distinguish the versions. I'd > like to add some sleep between the puts to fix it.
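The failure mode described above can be sketched without an HBase cluster: HBase keys the versions of a cell by timestamp, so two puts landing in the same millisecond collapse into a single version. Below is a minimal in-memory model of one cell's version map (an illustration of the mechanism, not HBase code; the class name and timestamp value are made up):

```java
import java.util.TreeMap;

public class VersionCollisionDemo {
    public static void main(String[] args) {
        // HBase stores versions of a cell keyed by timestamp; a put that reuses
        // an existing (row, column, timestamp) overwrites the old value.
        TreeMap<Long, String> cellVersions = new TreeMap<>();
        long ts = 1_700_000_000_000L;       // some fixed millisecond timestamp
        cellVersions.put(ts, "v1");
        cellVersions.put(ts, "v2");         // same millisecond: v1 is silently lost
        cellVersions.put(ts + 1, "v3");     // later millisecond: a distinct version
        System.out.println(cellVersions.size());  // only 2 versions survive, not 3
    }
}
```

This is why a short sleep between the puts (or an explicit, distinct timestamp per put) makes the raw scan see all four versions.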
[jira] [Resolved] (HBASE-28274) Flaky test: TestFanOutOneBlockAsyncDFSOutput (Part 2)
[ https://issues.apache.org/jira/browse/HBASE-28274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-28274. - Fix Version/s: 2.6.0 2.4.18 3.0.0-beta-1 2.5.8 Resolution: Fixed > Flaky test: TestFanOutOneBlockAsyncDFSOutput (Part 2) > - > > Key: HBASE-28274 > URL: https://issues.apache.org/jira/browse/HBASE-28274 > Project: HBase > Issue Type: Test > Components: flakies, integration tests, test >Reporter: Andor Molnar >Assignee: Andor Molnar >Priority: Major > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-1, 2.5.8 > > > The following test sometimes fails for me when running locally with Maven: > TestFanOutOneBlockAsyncDFSOutput.testRecover() > I can't really figure out the reason, but it's probably a side effect of the > preceding test: testConnectToDatanodeFailed(). This test also restarts one of > the datanodes in the MiniDFS cluster just like testRecover() and it somehow > causes the failure. > {noformat} > java.lang.AssertionError: flush should fail > at org.junit.Assert.fail(Assert.java:89) > at > org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.testRecover(TestFanOutOneBlockAsyncDFSOutput.java:154) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method){noformat} > The flush() call in testRecover() should fail, because we restart one of the > DNs in the DFS cluster, which is expected to break the connection. It succeeds > though if the preceding test already restarted a DN. No matter which DN we > restart, even if they're different, the error occurs. > I also tried to add CLUSTER.waitDatanodeFullyStarted() at the end of > testConnectToDatanodeFailed(); it looks like it made the tests slightly more > stable, but it didn't help fully.
[jira] [Resolved] (HBASE-28345) Close HBase connection on exit from HBase Shell
[ https://issues.apache.org/jira/browse/HBASE-28345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-28345. - Resolution: Fixed [~bbeaudreault] Thanks, fixed. > Close HBase connection on exit from HBase Shell > --- > > Key: HBASE-28345 > URL: https://issues.apache.org/jira/browse/HBASE-28345 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 2.4.17 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 2.4.18, 3.0.0, 2.5.8, 3.0.0-beta-2 > > > When using Netty for the ZK client, hbase shell hangs on exit. > This is caused by the non-daemon Netty threads that ZK creates. > Whether ZK should create daemon threads for Netty or not is debatable, but > explicitly closing the connection in hbase shell on exit fixes the issue.
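The hang above is the standard JVM rule that any live non-daemon thread keeps the process alive. A minimal sketch of that rule in plain Java (no ZooKeeper or Netty involved; the thread and class names are invented for the demo) shows why marking such threads as daemons, or shutting them down explicitly as the fix does by closing the connection, lets the shell exit:

```java
public class DaemonThreadDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000);   // stands in for a long-lived Netty I/O thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // Without this line the JVM would wait up to ~60s for the thread before
        // exiting -- the same way hbase shell "hangs" on ZK's non-daemon threads.
        worker.setDaemon(true);
        worker.start();
        System.out.println("main finished; JVM exits immediately");
    }
}
```

Comment out the `setDaemon(true)` line to reproduce the hang: `main` returns, but the process lingers until the sleeping thread finishes.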
[jira] [Created] (HBASE-17908) Upgrade guava
Balazs Meszaros created HBASE-17908: --- Summary: Upgrade guava Key: HBASE-17908 URL: https://issues.apache.org/jira/browse/HBASE-17908 Project: HBase Issue Type: Sub-task Reporter: Balazs Meszaros Currently we are using guava 12.0.1, but the latest version is 21.0. Upgrading guava is always a hassle because it is not always backward compatible with itself. Currently I think there are two approaches: 1. Upgrade guava to the newest version (21.0) and shade it. 2. Upgrade guava to a version which does not break our builds (15.0).
[jira] [Created] (HBASE-18096) Limit HFileUtil visibility and add missing annotations
Balazs Meszaros created HBASE-18096: --- Summary: Limit HFileUtil visibility and add missing annotations Key: HBASE-18096 URL: https://issues.apache.org/jira/browse/HBASE-18096 Project: HBase Issue Type: Task Reporter: Balazs Meszaros Assignee: Balazs Meszaros HFileUtil should be package private and should have the @InterfaceAudience.Private annotation. This class was introduced in HBASE-17501.
[jira] [Created] (HBASE-18185) IntegrationTestTimeBoundedRequestsWithRegionReplicas unbalanced test fails with AssertionError
Balazs Meszaros created HBASE-18185: --- Summary: IntegrationTestTimeBoundedRequestsWithRegionReplicas unbalanced test fails with AssertionError Key: HBASE-18185 URL: https://issues.apache.org/jira/browse/HBASE-18185 Project: HBase Issue Type: Bug Components: integration tests Affects Versions: 2.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Priority: Minor We got the following error: Exception in thread "main" java.lang.AssertionError: Verification failed with error code 1 at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hbase.test.IntegrationTestTimeBoundedRequestsWithRegionReplicas.runIngestTest(IntegrationTestTimeBoundedRequestsWithRegionReplicas.java:217) at org.apache.hadoop.hbase.IntegrationTestIngest.internalRunIngestTest(IntegrationTestIngest.java:123) at org.apache.hadoop.hbase.IntegrationTestIngest.runTestFromCommandLine(IntegrationTestIngest.java:106) at org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:123) at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.test.IntegrationTestTimeBoundedRequestsWithRegionReplicas.main(IntegrationTestTimeBoundedRequestsWithRegionReplicas.java:362) The reason we got it is that another assertion fails in UnbalanceKillAndRebalanceAction: Exception in thread "Thread-57" java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hadoop.hbase.chaos.actions.UnbalanceKillAndRebalanceAction.perform(UnbalanceKillAndRebalanceAction.java:60)
[jira] [Created] (HBASE-18224) Upgrade jetty and thrift
Balazs Meszaros created HBASE-18224: --- Summary: Upgrade jetty and thrift Key: HBASE-18224 URL: https://issues.apache.org/jira/browse/HBASE-18224 Project: HBase Issue Type: Sub-task Reporter: Balazs Meszaros Jetty can be updated to 9.4.6 and thrift can be updated to 0.10.0. I tried to update them in HBASE-17898 but some unit tests failed, so I created a sub-task for them.
[jira] [Created] (HBASE-18367) Reduce ProcedureInfo usage
Balazs Meszaros created HBASE-18367: --- Summary: Reduce ProcedureInfo usage Key: HBASE-18367 URL: https://issues.apache.org/jira/browse/HBASE-18367 Project: HBase Issue Type: Sub-task Reporter: Balazs Meszaros Assignee: Balazs Meszaros If we want to replace ProcedureInfo objects with JSON (HBASE-18106), we have to reduce ProcedureInfo usage. Currently it is used in several places in the code where it could be replaced with Procedure (e.g. ProcedureExecutor). We should use ProcedureInfo only for communication before removing it.
[jira] [Created] (HBASE-18705) bin/hbase does not find cached_classpath.txt
Balazs Meszaros created HBASE-18705: --- Summary: bin/hbase does not find cached_classpath.txt Key: HBASE-18705 URL: https://issues.apache.org/jira/browse/HBASE-18705 Project: HBase Issue Type: Task Affects Versions: 3.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Starting hbase with {{bin/start-hbase.sh}} reports the following error: {code:none} As this is a development environment, we need /.../hbase/bin/../target/cached_classpath.txt to be generated from maven (command: mvn install -DskipTests) {code} The {{cached_classpath.txt}} file is generated by hbase-assembly, but it saves that file in {{$project.parent.basedir/...}}. Since hbase-build-configuration is the parent of hbase-assembly, the script looks for the file in the wrong place.
[jira] [Created] (HBASE-18706) Remove / fix not working pages from backup master web UI
Balazs Meszaros created HBASE-18706: --- Summary: Remove / fix not working pages from backup master web UI Key: HBASE-18706 URL: https://issues.apache.org/jira/browse/HBASE-18706 Project: HBase Issue Type: Bug Reporter: Balazs Meszaros When I connect to the web interface of a locally started backup master, I see the following issues: 1. {{/tablesDetailed.jsp}} times out; we should remove the links from the menu, or we should connect to the active master. 2. {{/procedures.jsp}} throws an exception. The link to it is missing from {{/master-status}}, but it is visible under Process Metrics.
[jira] [Created] (HBASE-18805) Unify Admin and AsyncAdmin
Balazs Meszaros created HBASE-18805: --- Summary: Unify Admin and AsyncAdmin Key: HBASE-18805 URL: https://issues.apache.org/jira/browse/HBASE-18805 Project: HBase Issue Type: Sub-task Reporter: Balazs Meszaros Admin and AsyncAdmin differ in some places: - some methods are missing from AsyncAdmin (e.g. methods with String regex), - some methods have different names (listTables vs listTableDescriptors), - some method parameters are different (e.g. AsyncAdmin has Optional<> parameters), - AsyncAdmin returns Lists instead of arrays (e.g. listTableNames), - unify Javadoc comments, - ...
[jira] [Created] (HBASE-19328) Remove asked if splittable log messages
Balazs Meszaros created HBASE-19328: --- Summary: Remove asked if splittable log messages Key: HBASE-19328 URL: https://issues.apache.org/jira/browse/HBASE-19328 Project: HBase Issue Type: Task Components: proc-v2 Affects Versions: 3.0.0 Reporter: Balazs Meszaros Priority: Minor I have found this log message in the HBase log: {code} 2017-11-22 11:16:54,133 INFO [RpcServer.priority.FPBQ.Fifo.handler=5,queue=0,port=52586] regionserver.HRegion(1309): ASKED IF SPLITTABLE true 0a66d6e20801eec2c6cd1204fedde592 java.lang.Throwable: LOGGING: REMOVE at org.apache.hadoop.hbase.regionserver.HRegion.isSplittable(HRegion.java:1310) at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1665) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:28159) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:325) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:305) {code} Do we still need this? It was introduced in commit {{dc1065a85}} by [~stack] and [~mbertozzi].
[jira] [Created] (HBASE-19400) Add missing security hooks for MasterService RPCs
Balazs Meszaros created HBASE-19400: --- Summary: Add missing security hooks for MasterService RPCs Key: HBASE-19400 URL: https://issues.apache.org/jira/browse/HBASE-19400 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0-beta-1 Reporter: Balazs Meszaros The following RPC methods do not call the observers, therefore they are not guarded by AccessController: - normalize - setNormalizerRunning - runCatalogScan - enableCatalogJanitor - runCleanerChore - setCleanerChoreRunning - execMasterService - execProcedure - execProcedureWithRet
[jira] [Created] (HBASE-19401) Add missing security hooks for ClientService RPCs
Balazs Meszaros created HBASE-19401: --- Summary: Add missing security hooks for ClientService RPCs Key: HBASE-19401 URL: https://issues.apache.org/jira/browse/HBASE-19401 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0-beta-1 Reporter: Balazs Meszaros The following RPC method does not call the observers, therefore it is not guarded by AccessController: - execRegionServerService
[jira] [Created] (HBASE-19402) Add missing security hooks for RegionServerStatusService RPCs
Balazs Meszaros created HBASE-19402: --- Summary: Add missing security hooks for RegionServerStatusService RPCs Key: HBASE-19402 URL: https://issues.apache.org/jira/browse/HBASE-19402 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0-beta-1 Reporter: Balazs Meszaros The following RPC methods do not call the observers, therefore they are not guarded by AccessController: - regionServerStartup - regionServerReport - reportRSFatalError - reportRegionStateTransition - reportRegionSpaceUse
[jira] [Created] (HBASE-19403) Add missing security hooks for AdminService RPCs
Balazs Meszaros created HBASE-19403: --- Summary: Add missing security hooks for AdminService RPCs Key: HBASE-19403 URL: https://issues.apache.org/jira/browse/HBASE-19403 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0-beta-1 Reporter: Balazs Meszaros The following RPC methods do not call the observers, therefore they are not guarded by AccessController: - updateConfiguration - replay - warmupRegion - updateFavoredNodes - clearRegionBlockCache
[jira] [Created] (HBASE-19598) Fix TestAssignmentManagerMetrics flaky test
Balazs Meszaros created HBASE-19598: --- Summary: Fix TestAssignmentManagerMetrics flaky test Key: HBASE-19598 URL: https://issues.apache.org/jira/browse/HBASE-19598 Project: HBase Issue Type: Bug Affects Versions: 2.0.0-beta-1 Reporter: Balazs Meszaros Assignee: Balazs Meszaros TestAssignmentManagerMetrics fails constantly. After bisecting, it seems that commit 010012cbcb broke it (HBASE-18946). The test method runs successfully, but it cannot shut the minicluster down, and hangs forever.
[jira] [Created] (HBASE-19886) Display maintenance mode in shell, web UI, JMX
Balazs Meszaros created HBASE-19886: --- Summary: Display maintenance mode in shell, web UI, JMX Key: HBASE-19886 URL: https://issues.apache.org/jira/browse/HBASE-19886 Project: HBase Issue Type: New Feature Affects Versions: 2.0.0, 3.0.0, 1.4.2 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Maintenance mode was introduced in HBASE-16008. This mode is controlled by hbck. Splitting and balancing are disabled in this mode. It would be useful to present this information to users through the shell, web UI, and JMX.
[jira] [Resolved] (HBASE-19854) Add a link to Ref Guide PDF
[ https://issues.apache.org/jira/browse/HBASE-19854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-19854. - Resolution: Won't Fix It is also possible to find it through the search box if I search for "reference guide". I think we can close this. > Add a link to Ref Guide PDF > --- > > Key: HBASE-19854 > URL: https://issues.apache.org/jira/browse/HBASE-19854 > Project: HBase > Issue Type: Bug > Components: documentation >Reporter: Laxmi Narsimha Rao Oruganti >Priority: Minor > > Many times, users want to have an offline copy of the Ref Guide. Some people > prefer to save the HTML, and some people prefer it in PDF format. Hence, the Apache > HBase team generates a PDF version of the document periodically and keeps it > available at: [https://hbase.apache.org/apache_hbase_reference_guide.pdf] > It would be good if a link to this URL were available in the online guide so > that users become aware that there is a PDF version. Right now, unless > someone explicitly looks for it using Google/Bing search, they would not > know. > > As the PDF URL is fixed for the latest documentation, it can be a static href. > However, I don't have any clue how to get a > "version-relevant" PDF link for archived ref guides. >
[jira] [Created] (HBASE-20200) list_procedures fails in shell
Balazs Meszaros created HBASE-20200: --- Summary: list_procedures fails in shell Key: HBASE-20200 URL: https://issues.apache.org/jira/browse/HBASE-20200 Project: HBase Issue Type: Bug Components: shell Affects Versions: 2.0.0-beta-2 Reporter: Balazs Meszaros Assignee: Balazs Meszaros {code} hbase(main):002:0> list_procedures Id Name State Submitted_Time Last_Update Parameters ERROR: undefined method `slice' for # {code}
[jira] [Created] (HBASE-20245) HTrace commands do not work
Balazs Meszaros created HBASE-20245: --- Summary: HTrace commands do not work Key: HBASE-20245 URL: https://issues.apache.org/jira/browse/HBASE-20245 Project: HBase Issue Type: Sub-task Reporter: Balazs Meszaros When running shell-2.0 against server-2.0 we get the following error: {code} hbase(main):034:0> trace 'start' ERROR: undefined method `isTracing' for Java::OrgApacheHtraceCore::Tracer:Class {code} It is possible to manipulate tracing from shell-1.2.
[jira] [Created] (HBASE-20258) Shell hangs when scanning a disabled table
Balazs Meszaros created HBASE-20258: --- Summary: Shell hangs when scanning a disabled table Key: HBASE-20258 URL: https://issues.apache.org/jira/browse/HBASE-20258 Project: HBase Issue Type: Sub-task Reporter: Balazs Meszaros I executed the following commands against a 2.0 server: {code} disable 't' scan 't' {code} From client-1.2: it throws an error because the table is disabled -> OK. From client-2.0: the shell hangs -> NOT OK.
[jira] [Created] (HBASE-20343) [DOC] fix log directory paths
Balazs Meszaros created HBASE-20343: --- Summary: [DOC] fix log directory paths Key: HBASE-20343 URL: https://issues.apache.org/jira/browse/HBASE-20343 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 3.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros The documentation refers to the log directories as {{.logs}}, {{splitlog}}, etc. These references should be changed to {{WALs}}, {{splitWAL}}, etc.
[jira] [Created] (HBASE-20358) Fix bin/hbase thrift usage text
Balazs Meszaros created HBASE-20358: --- Summary: Fix bin/hbase thrift usage text Key: HBASE-20358 URL: https://issues.apache.org/jira/browse/HBASE-20358 Project: HBase Issue Type: Bug Reporter: Balazs Meszaros Assignee: Balazs Meszaros 1. The Thrift server usage text does not say anything about requiring a {{start}} or {{stop}} argument. 2. It ignores the {{stop}} argument and acts the same as for {{start}}. According to this comment: {code:java} // This is so complicated to please both bin/hbase and bin/hbase-daemon. // hbase-daemon provides "start" and "stop" arguments // hbase should print the help if no argument is provided {code} {{start}} and {{stop}} are only supported because of {{bin/hbase-daemon}}, but hbase-daemon kills the process instead of calling it with a {{stop}} argument.
[jira] [Created] (HBASE-20386) [DOC] Align WALPlayer help text and refguide
Balazs Meszaros created HBASE-20386: --- Summary: [DOC] Align WALPlayer help text and refguide Key: HBASE-20386 URL: https://issues.apache.org/jira/browse/HBASE-20386 Project: HBase Issue Type: Task Components: documentation Affects Versions: 2.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros
[jira] [Created] (HBASE-20398) Redirect doesn't work on web UI
Balazs Meszaros created HBASE-20398: --- Summary: Redirect doesn't work on web UI Key: HBASE-20398 URL: https://issues.apache.org/jira/browse/HBASE-20398 Project: HBase Issue Type: Bug Components: UI Affects Versions: 3.0.0, 2.0.0 Reporter: Balazs Meszaros {{table.jsp}} contains the _"Go Back, or wait for the redirect."_ string after we invoke compaction, split, etc. Previously it redirected to the previous page after 5 seconds. The string is still there, but nothing happens. After digging into the code, I found the following: - {{ecce7c2}} (HBASE-3948) it was introduced, - {{3bd9191}} (HBASE-9850) it was refactored, - {{3b2b22b}} (HBASE-19291) it was removed. This string appears on {{rsgroup.jsp}}, {{snapshot.jsp}} and {{table.jsp}}. We should fix them by removing the string, or by adding the redirection back.
[jira] [Created] (HBASE-20399) Fix merge layout
Balazs Meszaros created HBASE-20399: --- Summary: Fix merge layout Key: HBASE-20399 URL: https://issues.apache.org/jira/browse/HBASE-20399 Project: HBase Issue Type: Bug Components: UI Attachments: merge.png Affects Versions: 3.0.0, 2.0.0 Reporter: Balazs Meszaros Merging regions on {{table.jsp}} has a wrong layout (see the attached screenshot).
[jira] [Created] (HBASE-20427) thrift.jsp displays "Framed transport" incorrectly
Balazs Meszaros created HBASE-20427: --- Summary: thrift.jsp displays "Framed transport" incorrectly Key: HBASE-20427 URL: https://issues.apache.org/jira/browse/HBASE-20427 Project: HBase Issue Type: Bug Components: Thrift Affects Versions: 2.0.0 Reporter: Balazs Meszaros Fix For: 3.0.0, 2.0.0 According to thrift usage text: {code} -nonblocking Use the TNonblockingServer This implies the framed transport. {code} But the web page at port 9095 indicates {{framed = false}} when I start it with {{-nonblocking}}.
[jira] [Created] (HBASE-20465) Fix TestEnableRSGroup flaky
Balazs Meszaros created HBASE-20465: --- Summary: Fix TestEnableRSGroup flaky Key: HBASE-20465 URL: https://issues.apache.org/jira/browse/HBASE-20465 Project: HBase Issue Type: Bug Components: rsgroup Affects Versions: 2.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Fix For: 2.0.0 Most recent {{TestEnableRSGroup}} tests failed on branch-2.0 according to our flaky dashboard. {code} java.lang.AssertionError at org.apache.hadoop.hbase.rsgroup.TestEnableRSGroup.testEnableRSGroup(TestEnableRSGroup.java:80) {code}
[jira] [Created] (HBASE-20487) Sorting table regions by region name does not work on web UI
Balazs Meszaros created HBASE-20487: --- Summary: Sorting table regions by region name does not work on web UI Key: HBASE-20487 URL: https://issues.apache.org/jira/browse/HBASE-20487 Project: HBase Issue Type: Bug Components: UI Affects Versions: 3.0.0 Reporter: Balazs Meszaros Table regions on {{table.jsp}} cannot be sorted by the Name column.
[jira] [Resolved] (HBASE-15262) TestZooKeeperMainServer#testCommandLineWorks fails with CancelledKeyException
[ https://issues.apache.org/jira/browse/HBASE-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-15262. - Resolution: Won't Fix Release Note: This unit test does not exist anymore. > TestZooKeeperMainServer#testCommandLineWorks fails with CancelledKeyException > - > > Key: HBASE-15262 > URL: https://issues.apache.org/jira/browse/HBASE-15262 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Priority: Major > > {code} > 2016-02-11 11:34:50,418 ERROR [SyncThread:0] server.NIOServerCnxn(178): > Unexpected Exception: > java.nio.channels.CancelledKeyException > at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) > at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:77) > at > org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:151) > at > org.apache.zookeeper.server.ZooKeeperServer.finishSessionInit(ZooKeeperServer.java:607) > at > org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:181) > at > org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:200) > at > org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:131) > {code} > The test is expecting zk to do a System.exit but an existing zk or a previous > test is getting us this CancelledKeyException...
[jira] [Resolved] (HBASE-15307) TestFailedAppendAndSync.testLockupAroundBadAssignSync is flakey
[ https://issues.apache.org/jira/browse/HBASE-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-15307. - Resolution: Won't Fix This unit test does not exist anymore. > TestFailedAppendAndSync.testLockupAroundBadAssignSync is flakey > --- > > Key: HBASE-15307 > URL: https://issues.apache.org/jira/browse/HBASE-15307 > Project: HBase > Issue Type: Sub-task > Components: flakey, test >Reporter: stack >Assignee: stack >Priority: Critical > > {code} > Error Message > test timed out after 30 milliseconds > Stacktrace > org.junit.runners.model.TestTimedOutException: test timed out after 30 > milliseconds > at java.lang.Thread.sleep(Native Method) > at org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:148) > at > org.apache.hadoop.hbase.regionserver.TestFailedAppendAndSync.testLockupAroundBadAssignSync(TestFailedAppendAndSync.java:242) > Standard Output > 2016-02-22 18:14:09,154 INFO [main] hbase.ResourceChecker(148): before: > regionserver.TestFailedAppendAndSync#testLockupAroundBadAssignSync Thread=4, > OpenFileDescriptor=206, MaxFileDescriptor=6, SystemLoadAverage=1043, > ProcessCount=300, AvailableMemoryMB=1709 > 2016-02-22 18:14:09,809 DEBUG [main] hbase.HBaseTestingUtility(338): Setting > hbase.rootdir to > /home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533 > 2016-02-22 18:14:10,167 WARN [Time-limited test] util.NativeCodeLoader(62): > Unable to load native-hadoop library for your platform... 
using builtin-java > classes where applicable > 2016-02-22 18:14:10,452 INFO [Time-limited test] wal.FSHLog(527): WAL > configuration: blocksize=32 MB, rollsize=30.40 MB, prefix=wal, suffix=, > logDir=/home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533/TestHRegiontestLockupAroundBadAssignSync/testLockupAroundBadAssignSync, > > archiveDir=/home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533/TestHRegiontestLockupAroundBadAssignSync/oldWALs > 2016-02-22 18:14:10,534 INFO [Time-limited test] wal.FSHLog(874): New WAL > /home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533/TestHRegiontestLockupAroundBadAssignSync/testLockupAroundBadAssignSync/wal.1456164850475 > 2016-02-22 18:14:10,673 INFO [Time-limited test] regionserver.HRegion(6118): > creating HRegion testLockupAroundBadAssignSync HTD == > 'testLockupAroundBadAssignSync', {TABLE_ATTRIBUTES => {DURABILITY => > 'SYNC_WAL', READONLY => 'false'}, {NAME => 'MyCF', BLOOMFILTER => 'ROW', > VERSIONS => '2147483647', IN_MEMORY => 'false', KEEP_DELETED_CELLS => > 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => > 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', > REPLICATION_SCOPE => '0'} RootDir = > /home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533 > Table name == testLockupAroundBadAssignSync > 2016-02-22 18:14:10,950 DEBUG [Time-limited test] regionserver.HRegion(718): > Instantiated > testLockupAroundBadAssignSync,,1456164850613.7eee51a40e497870a49a2965af62a054. 
> 2016-02-22 18:14:11,253 INFO > [StoreOpener-7eee51a40e497870a49a2965af62a054-1] hfile.CacheConfig(292): > CacheConfig:disabled > 2016-02-22 18:14:11,265 INFO > [StoreOpener-7eee51a40e497870a49a2965af62a054-1] > compactions.CompactionConfiguration(107): size [134217728, > 9223372036854775807, 9223372036854775807); files [3, 10); ratio 1.20; > off-peak ratio 5.00; throttle point 2684354560; major period 60480, > major jitter 0.50, min locality to compact 0.00 > 2016-02-22 18:14:11,269 DEBUG > [StoreOpener-7eee51a40e497870a49a2965af62a054-1] > regionserver.HRegionFileSystem(202): No StoreFiles for: > /home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533/data/default/testLockupAroundBadAssignSync/7eee51a40e497870a49a2965af62a054/MyCF > 2016-02-22 18:14:11,303 DEBUG [Time-limited test] regionserver.HRegion(3838): > Found 0 recovered edits file(s) under > /home/jenkins/jenkins-slave/workspace/HBase-Trunk_matrix/jdk/latest1.8/label/yahoo-not-h2/hbase-server/target/test-data/892bb23a-6932-4631-b4e0-a6384423d533/data/default/testLockupAroundBadAssignSync/7eee51a40e497870a49a2965af62a054 > 2016-02-22 18:14:1
[jira] [Resolved] (HBASE-15024) TestChoreService is flakey
[ https://issues.apache.org/jira/browse/HBASE-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-15024. - Resolution: Cannot Reproduce TestChoreService is not flaky currently. > TestChoreService is flakey > -- > > Key: HBASE-15024 > URL: https://issues.apache.org/jira/browse/HBASE-15024 > Project: HBase > Issue Type: Sub-task > Components: flakey, test >Reporter: stack >Priority: Critical > > https://builds.apache.org/job/HBase-Trunk_matrix/jdk=latest1.8,label=Hadoop/lastCompletedBuild/testReport/org.apache.hadoop.hbase/TestChoreService/testShutdownRejectsNewSchedules/history/ > Fails a bunch lately -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-15002) TestRSKilledWhenInitializing is flakey
[ https://issues.apache.org/jira/browse/HBASE-15002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-15002. - Resolution: Duplicate TestRSKilledWhenInitializing is currently ignored, see HBASE-19515. > TestRSKilledWhenInitializing is flakey > -- > > Key: HBASE-15002 > URL: https://issues.apache.org/jira/browse/HBASE-15002 > Project: HBase > Issue Type: Sub-task > Components: flakey, test >Reporter: stack >Priority: Major > > https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.2/449/jdk=latest1.8,label=Hadoop/testReport/junit/org.apache.hadoop.hbase.regionserver/TestRSKilledWhenInitializing/testRSTermnationAfterRegisteringToMasterBeforeCreatingEphemeralNod/history/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20562) [UMBRELLA] Use parameterized logging
Balazs Meszaros created HBASE-20562: --- Summary: [UMBRELLA] Use parameterized logging Key: HBASE-20562 URL: https://issues.apache.org/jira/browse/HBASE-20562 Project: HBase Issue Type: Umbrella Affects Versions: 3.0.0 Reporter: Balazs Meszaros We should use the parameterized log message feature of slf4j/logback. (Use {} placeholders instead of string concatenation.) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
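The benefit behind this umbrella issue can be shown with plain Java. SLF4J's real `Logger.debug(String, Object...)` does this substitution internally; the tiny formatter below only mimics the `{}` replacement so the sketch runs without SLF4J on the classpath, and is not HBase code:

```java
// Sketch of why parameterized logging beats string concatenation.
// SLF4J provides Logger.debug(String, Object...); this minimal formatter
// only mimics its "{}" substitution so the example is self-contained.
public class ParamLog {
    static boolean debugEnabled = false;

    // Substitute each "{}" with the next argument, left to right.
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = pattern.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(pattern, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(pattern.substring(from)).toString();
    }

    // With concatenation the message string is built even when the level is off:
    //   LOG.debug("Region " + region + " flushed in " + millis + " ms");
    // With parameters the formatting cost is paid only when the level is enabled:
    static void debug(String pattern, Object... args) {
        if (debugEnabled) {
            System.out.println(format(pattern, args));
        }
    }

    public static void main(String[] args) {
        debug("Region {} flushed in {} ms", "r1", 42); // no-op: debug is off, nothing formatted
        System.out.println(format("Region {} flushed in {} ms", "r1", 42));
    }
}
```

The first call in `main` illustrates the point of the issue: when the log level is disabled, the arguments are never turned into a string at all.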
[jira] [Created] (HBASE-20563) Use parameterized logging in hbase-common
Balazs Meszaros created HBASE-20563: --- Summary: Use parameterized logging in hbase-common Key: HBASE-20563 URL: https://issues.apache.org/jira/browse/HBASE-20563 Project: HBase Issue Type: Sub-task Affects Versions: 3.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20571) JMXJsonServlet generates invalid JSON if it has NaN in metrics
Balazs Meszaros created HBASE-20571: --- Summary: JMXJsonServlet generates invalid JSON if it has NaN in metrics Key: HBASE-20571 URL: https://issues.apache.org/jira/browse/HBASE-20571 Project: HBase Issue Type: Bug Components: UI Reporter: Balazs Meszaros Assignee: Balazs Meszaros The {{/jmx}} servlet responds with invalid JSON if some metrics are NaN: {code} "l1CacheHitCount" : 0, "l1CacheMissCount" : 0, "l1CacheHitRatio" : NaN, "l1CacheMissRatio" : NaN, "l2CacheHitCount" : 0, "l2CacheMissCount" : 0, "l2CacheHitRatio" : 0.0, "l2CacheMissRatio" : 0.0, {code} NaN is not a valid token in JSON. We should not return NaN in metrics. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
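The fix amounts to mapping non-finite doubles onto something JSON can carry; emitting `null` is one common choice. A minimal sketch of that idea, not the servlet's actual serialization code:

```java
// Minimal sketch: serialize metric values so the output stays valid JSON.
// The JSON grammar has no NaN/Infinity literals, so non-finite doubles
// are emitted as null here (one possible policy among several).
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonMetrics {
    static String toJsonValue(double v) {
        // NaN and +/-Infinity are not legal JSON tokens.
        return (Double.isNaN(v) || Double.isInfinite(v)) ? "null" : String.valueOf(v);
    }

    static String toJson(Map<String, Double> metrics) {
        StringBuilder sb = new StringBuilder("{");
        String sep = "";
        for (Map.Entry<String, Double> e : metrics.entrySet()) {
            sb.append(sep).append('"').append(e.getKey()).append("\":")
              .append(toJsonValue(e.getValue()));
            sep = ",";
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, Double> m = new LinkedHashMap<>();
        m.put("l1CacheHitCount", 0.0);
        m.put("l1CacheHitRatio", Double.NaN);
        // -> {"l1CacheHitCount":0.0,"l1CacheHitRatio":null}
        System.out.println(toJson(m));
    }
}
```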
[jira] [Created] (HBASE-20656) Validate pre-2.0 coprocessors against HBase 2.0+
Balazs Meszaros created HBASE-20656: --- Summary: Validate pre-2.0 coprocessors against HBase 2.0+ Key: HBASE-20656 URL: https://issues.apache.org/jira/browse/HBASE-20656 Project: HBase Issue Type: New Feature Components: tooling Affects Versions: 3.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros We have had co-processors for a while, but the API has changed recently. We should provide tooling for our users to determine whether they can keep using their previous co-processors safely. The tool should: - try to load the co-processors on the current classpath to ensure their class references resolve, - check for previously removed co-processor methods. In this version we check only method signatures. Code references should be checked in future versions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20833) Modify pre-upgrade coprocessor validator to support table level coprocessors
Balazs Meszaros created HBASE-20833: --- Summary: Modify pre-upgrade coprocessor validator to support table level coprocessors Key: HBASE-20833 URL: https://issues.apache.org/jira/browse/HBASE-20833 Project: HBase Issue Type: New Feature Reporter: Balazs Meszaros Assignee: Balazs Meszaros -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21231) Add documentation for MajorCompactor
Balazs Meszaros created HBASE-21231: --- Summary: Add documentation for MajorCompactor Key: HBASE-21231 URL: https://issues.apache.org/jira/browse/HBASE-21231 Project: HBase Issue Type: Task Components: documentation Affects Versions: 3.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros HBASE-19528 added a new MajorCompactor tool, but it lacks documentation. Let's document it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21841) Allow inserting null values through the DataSource API
Balazs Meszaros created HBASE-21841: --- Summary: Allow inserting null values through the DataSource API Key: HBASE-21841 URL: https://issues.apache.org/jira/browse/HBASE-21841 Project: HBase Issue Type: Improvement Components: spark Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros If I try to insert a DataFrame with null values, Spark throws an exception: {noformat} Caused by: java.lang.Exception: unsupported data type StringType at org.apache.hadoop.hbase.spark.datasources.Utils$.toBytes(Utils.scala:88) {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21842) Remove ${revision} from parent POMs
Balazs Meszaros created HBASE-21842: --- Summary: Remove ${revision} from parent POMs Key: HBASE-21842 URL: https://issues.apache.org/jira/browse/HBASE-21842 Project: HBase Issue Type: Improvement Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros We are referencing parent POMs with {noformat} org.apache.hbase.connectors hbase-connectors ${revision} ../ {noformat} This is the wrong approach because ${revision} is not defined in the child projects, so invoking Maven in a sub-directory will fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22073) /rits.jsp throws an exception if no procedure
Balazs Meszaros created HBASE-22073: --- Summary: /rits.jsp throws an exception if no procedure Key: HBASE-22073 URL: https://issues.apache.org/jira/browse/HBASE-22073 Project: HBase Issue Type: Bug Components: UI Affects Versions: 2.1.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros I got the following exception in our test environment: {noformat} java.lang.NullPointerException at org.apache.hadoop.hbase.generated.master.rits_jsp._jspService(rits_jsp.java:101) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) ... {noformat} Because {{regionStateNode.getProcedure()}} returns {{null}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22210) Fix hbase-connectors-assembly to include every jar
Balazs Meszaros created HBASE-22210: --- Summary: Fix hbase-connectors-assembly to include every jar Key: HBASE-22210 URL: https://issues.apache.org/jira/browse/HBASE-22210 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Fix For: connector-1.0.0 After compiling hbase-connectors, {{bin/hbase-connectors kafkaproxy}} throws the following exception: {noformat} Error: Could not find or load main class org.apache.hadoop.hbase.kafka.KafkaProxy {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22220) Release hbase-connectors-1.0.0
Balazs Meszaros created HBASE-22220: --- Summary: Release hbase-connectors-1.0.0 Key: HBASE-22220 URL: https://issues.apache.org/jira/browse/HBASE-22220 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22221) Extend kafka-proxy documentation with required hbase settings
Balazs Meszaros created HBASE-22221: --- Summary: Extend kafka-proxy documentation with required hbase settings Key: HBASE-22221 URL: https://issues.apache.org/jira/browse/HBASE-22221 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros kafka/README.md lacks the HBase server-side configuration. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22245) Please add my public key to committer keys
Balazs Meszaros created HBASE-22245: --- Summary: Please add my public key to committer keys Key: HBASE-22245 URL: https://issues.apache.org/jira/browse/HBASE-22245 Project: HBase Issue Type: Task Reporter: Balazs Meszaros Assignee: Sean Busbey Attachments: meszibalu.asc -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22257) Remove json4s and jackson dependency from hbase spark connector
Balazs Meszaros created HBASE-22257: --- Summary: Remove json4s and jackson dependency from hbase spark connector Key: HBASE-22257 URL: https://issues.apache.org/jira/browse/HBASE-22257 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Fix For: connector-1.0.0 {{spark/HBaseTableCatalog}} is the only place where we are parsing JSON. We depend on a lot of jars for a simple task that is already included in Scala. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22266) Add yetus personality to connectors to avoid scaladoc issues
Balazs Meszaros created HBASE-22266: --- Summary: Add yetus personality to connectors to avoid scaladoc issues Key: HBASE-22266 URL: https://issues.apache.org/jira/browse/HBASE-22266 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Fix For: connector-1.0.0 The yetus scaladoc plugin failed because it tries to run on every module: {noformat} [ERROR] No plugin found for prefix 'scala' in the current project and in the plugin groups [org.apache.maven.plugins, org.codehaus.mojo] available from the repositories [local (/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-CONNECTORS-Build/yetus-m2/hbase-connectors-master-patch-1), central (https://repo.maven.apache.org/maven2)] -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/NoPluginFoundForPrefixException {noformat} We should enable the scaladoc plugin only on the hbase-spark project. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-22257) Remove json4s and jackson dependency from hbase spark connector
[ https://issues.apache.org/jira/browse/HBASE-22257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22257. - Resolution: Fixed > Remove json4s and jackson dependency from hbase spark connector > --- > > Key: HBASE-22257 > URL: https://issues.apache.org/jira/browse/HBASE-22257 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: connector-1.0.0 > > > {{spark/HBaseTableCatalog}} is the only place where we are parsing JSON. We > depend on a lot of jars for a simple task that is already included in Scala. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-14789) Enhance the current spark-hbase connector
[ https://issues.apache.org/jira/browse/HBASE-14789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-14789. - Resolution: Fixed > Enhance the current spark-hbase connector > - > > Key: HBASE-14789 > URL: https://issues.apache.org/jira/browse/HBASE-14789 > Project: HBase > Issue Type: Improvement > Components: hbase-connectors, spark >Reporter: Zhan Zhang >Assignee: Zhan Zhang >Priority: Major > Fix For: 3.0.0, connector-1.0.0 > > Attachments: shc.pdf > > > This JIRA is to optimize the RDD construction in the current connector > implementation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-22220) Release hbase-connectors-1.0.0
[ https://issues.apache.org/jira/browse/HBASE-22220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22220. - Resolution: Fixed Fix Version/s: connector-1.0.0 > Release hbase-connectors-1.0.0 > -- > > Key: HBASE-22220 > URL: https://issues.apache.org/jira/browse/HBASE-22220 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: connector-1.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-22593) Add local Jenv file to gitignore
[ https://issues.apache.org/jira/browse/HBASE-22593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22593. - Resolution: Fixed Fix Version/s: 1.4.11 1.3.6 2.1.6 2.2.1 2.0.6 1.5.0 3.0.0 > Add local Jenv file to gitignore > > > Key: HBASE-22593 > URL: https://issues.apache.org/jira/browse/HBASE-22593 > Project: HBase > Issue Type: Improvement >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0, 1.5.0, 2.0.6, 2.2.1, 2.1.6, 1.3.6, 1.4.11 > > > When using Jenv to manage multiple Java versions with a local version, a > {{.java-version}} file gets created. This file should be on the list of files > Git ignores. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22599) Let hbase-connectors compile against HBase 2.2.0
Balazs Meszaros created HBASE-22599: --- Summary: Let hbase-connectors compile against HBase 2.2.0 Key: HBASE-22599 URL: https://issues.apache.org/jira/browse/HBASE-22599 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros {noformat} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project hbase-kafka-proxy: Compilation failure [ERROR] /Users/balazs.meszaros/workspaces/upstream/hbase/hbase-connectors/kafka/hbase-kafka-proxy/src/main/java/org/apache/hadoop/hbase/kafka/KafkaBridgeConnection.java:[53,8] org.apache.hadoop.hbase.kafka.KafkaBridgeConnection is not abstract and does not override abstract method clearRegionLocationCache() in org.apache.hadoop.hbase.client.Connection {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
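The compile failure above is ordinary Java interface evolution: HBase 2.2.0 added an abstract {{clearRegionLocationCache()}} to {{Connection}}, so implementors written against the older API no longer satisfy the interface. The toy interface below is a hypothetical stand-in, not an HBase class; it also shows the alternative of adding the method as a default, which keeps old implementors compiling:

```java
// Hypothetical stand-in for an evolving interface like
// org.apache.hadoop.hbase.client.Connection. Adding an *abstract* method
// breaks existing implementors at compile time; adding it as a *default*
// method (as done here) keeps them compiling.
public class InterfaceEvolution {
    interface ConnectionV2 {
        void close();
        // The "new in 2.2.0" method. Because it has a default body,
        // OldBridge below still compiles without overriding it.
        default void clearRegionLocationCache() {
            // no-op fallback for implementations that predate the method
        }
    }

    // Written against the "old" shape of the interface: only close() is overridden.
    static class OldBridge implements ConnectionV2 {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        OldBridge b = new OldBridge();
        b.clearRegionLocationCache(); // resolved via the default method
        b.close();
        System.out.println(b.closed); // true
    }
}
```

KafkaBridgeConnection had no such fallback available, hence the explicit override added by this issue.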
[jira] [Resolved] (HBASE-22636) hbase spark module README is in txt format.
[ https://issues.apache.org/jira/browse/HBASE-22636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22636. - Resolution: Fixed Fix Version/s: connector-1.0.1 > hbase spark module README is in txt format. > --- > > Key: HBASE-22636 > URL: https://issues.apache.org/jira/browse/HBASE-22636 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Affects Versions: 1.0.0 >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Trivial > Fix For: connector-1.0.1 > > > the hbase spark module README is in txt format. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22698) [hbase-connectors] Add license header to README.md
Balazs Meszaros created HBASE-22698: --- Summary: [hbase-connectors] Add license header to README.md Key: HBASE-22698 URL: https://issues.apache.org/jira/browse/HBASE-22698 Project: HBase Issue Type: Task Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros Build https://builds.apache.org/job/PreCommit-HBASE-CONNECTORS-Build/55/ failed because {{spark/hbase-spark/README.md}} does not have a license header. {noformat} Lines that start with ? in the ASF License report indicate files that do not have an Apache license header: !? /testptch/hbase-connectors/spark/hbase-spark/README.md {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22711) Spark connector doesn't use the given mapping when inserting data
Balazs Meszaros created HBASE-22711: --- Summary: Spark connector doesn't use the given mapping when inserting data Key: HBASE-22711 URL: https://issues.apache.org/jira/browse/HBASE-22711 Project: HBase Issue Type: Bug Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros In some cases a Spark DataFrame cannot be read back with the same mapping as it was written. For example:
{code:scala}
val sql = spark.sqlContext
val persons =
  """[
    |{"name": "alice", "age": 20, "height": 5, "email": "al...@alice.com"},
    |{"name": "bob", "age": 23, "height": 6, "email": "b...@bob.com"},
    |{"name": "carol", "age": 12, "email": "ca...@carol.com", "height": 4.11}
    |]
  """.stripMargin
val df = spark.read.json(Seq(persons).toDS)
df.write
  .format("org.apache.hadoop.hbase.spark")
  .option("hbase.columns.mapping", "name STRING :key, age SHORT p:age, email STRING c:email, height FLOAT p:height")
  .option("hbase.table", "person")
  .option("hbase.spark.use.hbasecontext", false)
  .save()
{code}
It cannot be read back with the same mapping:
{code:scala}
val df2 = sql.read
  .format("org.apache.hadoop.hbase.spark")
  .option("hbase.columns.mapping", "name STRING :key, age SHORT p:age, email STRING c:email, height FLOAT p:height")
  .option("hbase.table", "person")
  .option("hbase.spark.use.hbasecontext", false)
  .load()
df2.createOrReplaceTempView("tableView")
val results = sql.sql("SELECT * FROM tableView")
results.show()
{code}
The results:
{noformat}
+---+-----+---------+---------------+
|age| name|   height|          email|
+---+-----+---------+---------------+
|  0|alice|   2.3125|al...@alice.com|
|  0|  bob|    2.375|   b...@bob.com|
|  0|carol|2.2568748|ca...@carol.com|
+---+-----+---------+---------------+
{noformat}
Spark stores integer values in long and floating point values in double, so shorts become 8 bytes long and floats also become 8 bytes long in HBase:
{noformat}
shell> scan 'person'
 alice    column=p:age, timestamp=1563450714829, value=\x00\x00\x00\x00\x00\x00\x00\x14
 alice    column=p:height, timestamp=1563450714829, value=@\x14\x00\x00\x00\x00\x00\x00
{noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
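The corrupt values in the report can be reproduced with plain byte juggling, no HBase or Spark types needed: a short written as an 8-byte big-endian long decodes to 0 when only its first two bytes are read back as a short, and a float taken from the first four bytes of a double's encoding yields the same 2.3125 the table shows for a height of 5. A self-contained sketch of that mechanism:

```java
// Reproduces the HBASE-22711 symptom with java.nio only:
// values stored wide (long/double) but read back narrow (short/float).
import java.nio.ByteBuffer;

public class EncodingMismatch {
    public static void main(String[] args) {
        // age = 20, but stored as an 8-byte long: 00 00 00 00 00 00 00 14
        byte[] ageCell = ByteBuffer.allocate(8).putLong(20L).array();
        // Reading it back as the mapping's SHORT takes only the first two bytes.
        short age = ByteBuffer.wrap(ageCell).getShort();
        System.out.println(age); // 0

        // height = 5, but stored as an 8-byte double: 40 14 00 00 00 00 00 00
        byte[] heightCell = ByteBuffer.allocate(8).putDouble(5.0).array();
        // Reading it back as FLOAT reinterprets only the first four bytes.
        float height = ByteBuffer.wrap(heightCell).getFloat();
        System.out.println(height); // 2.3125
    }
}
```

Note that 2.3125 matches the "alice" row in the results above: the float bit pattern 0x40140000 is exactly the first half of the double encoding of 5.0.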
[jira] [Resolved] (HBASE-22599) Let hbase-connectors compile against HBase 2.2.0
[ https://issues.apache.org/jira/browse/HBASE-22599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22599. - Resolution: Fixed Fix Version/s: connector-1.0.1 > Let hbase-connectors compile against HBase 2.2.0 > > > Key: HBASE-22599 > URL: https://issues.apache.org/jira/browse/HBASE-22599 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: connector-1.0.1 > > > {noformat} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile > (default-compile) on project hbase-kafka-proxy: Compilation failure > [ERROR] > /Users/balazs.meszaros/workspaces/upstream/hbase/hbase-connectors/kafka/hbase-kafka-proxy/src/main/java/org/apache/hadoop/hbase/kafka/KafkaBridgeConnection.java:[53,8] > org.apache.hadoop.hbase.kafka.KafkaBridgeConnection is not abstract and does > not override abstract method clearRegionLocationCache() in > org.apache.hadoop.hbase.client.Connection > {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (HBASE-22711) Spark connector doesn't use the given mapping when inserting data
[ https://issues.apache.org/jira/browse/HBASE-22711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22711. - Resolution: Fixed Fix Version/s: connector-1.0.1 > Spark connector doesn't use the given mapping when inserting data > - > > Key: HBASE-22711 > URL: https://issues.apache.org/jira/browse/HBASE-22711 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: connector-1.0.1 > > > In some cases a Spark DataFrames cannot be read back with the same mapping as > they were written. For example: > {code:scala} > val sql = spark.sqlContext > val persons = > """[ > |{"name": "alice", "age": 20, "height": 5, "email": "al...@alice.com"}, > |{"name": "bob", "age": 23, "height": 6, "email": "b...@bob.com"}, > |{"name": "carol", "age": 12, "email": "ca...@carol.com", "height": > 4.11} > |] > """.stripMargin > val df = spark.read.json(Seq(persons).toDS) > df.write > .format("org.apache.hadoop.hbase.spark") > .option("hbase.columns.mapping", "name STRING :key, age SHORT p:age, email > STRING c:email, height FLOAT p:height") > .option("hbase.table", "person") > .option("hbase.spark.use.hbasecontext", false) > .save() > {code} > It cannot be read back with the same mapping: > {code:scala} > val df2 = sql.read > .format("org.apache.hadoop.hbase.spark") > .option("hbase.columns.mapping", "name STRING :key, age SHORT p:age, email > STRING c:email, height FLOAT p:height") > .option("hbase.table", "person") > .option("hbase.spark.use.hbasecontext", false) > .load() > df2.createOrReplaceTempView("tableView") > val results = sql.sql("SELECT * FROM tableView") > results.show() > {code} > The results: > {noformat} > +---+-+-+---+ > |age| name| height| email| > +---+-+-+---+ > | 0|alice| 2.3125|al...@alice.com| > | 0| bob|2.375|b...@bob.com| > | 0|carol|2.2568748|ca...@carol.com| > +---+-+-+---+ > {noformat} > 
Spark stores integer values in long, floating point values in double so > shorts become 8 bytes long, floats also become 8 bytes long in HBase: > {noformat} > shell> scan 'person' > alicecolumn=p:age, timestamp=1563450714829, > value=\x00\x00\x00\x00\x00\x00\x00\x14 > alicecolumn=p:height, timestamp=1563450714829, > value=@\x14\x00\x00\x00\x00\x00\x00 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22800) Add mapreduce dependencies to hbase-shaded-testing-util
Balazs Meszaros created HBASE-22800: --- Summary: Add mapreduce dependencies to hbase-shaded-testing-util Key: HBASE-22800 URL: https://issues.apache.org/jira/browse/HBASE-22800 Project: HBase Issue Type: Improvement Affects Versions: 2.1.1 Reporter: Balazs Meszaros Assignee: Balazs Meszaros {{MiniMRCluster}} is missing from the generated {{hbase-shaded-testing-util}} artifact. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22817) Use hbase-shaded dependencies in hbase-spark
Balazs Meszaros created HBASE-22817: --- Summary: Use hbase-shaded dependencies in hbase-spark Key: HBASE-22817 URL: https://issues.apache.org/jira/browse/HBASE-22817 Project: HBase Issue Type: Improvement Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros We changed the behavior of {{hbase mapredcp}}: it now returns the shaded mapreduce classpath. So if we run {code} spark-shell --driver-class-path `hbase mapredcp`:hbase-spark.jar {code} it should work fine. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-23007) UnsatisfiedLinkError when using hbase-shaded packages under linux
Balazs Meszaros created HBASE-23007: --- Summary: UnsatisfiedLinkError when using hbase-shaded packages under linux Key: HBASE-23007 URL: https://issues.apache.org/jira/browse/HBASE-23007 Project: HBase Issue Type: Bug Components: shading Affects Versions: 3.0.0 Reporter: Balazs Meszaros Assignee: Balazs Meszaros If we use hbase-shaded-* packages under linux, we get the following exception: {noformat} 2019-08-26 16:36:10,413 ERROR [Time-limited test] regionserver.HRegionServer (HRegionServer.java:(662)) - Failed construction RegionServer java.lang.UnsatisfiedLinkError: failed to load the required native library at org.apache.hbase.thirdparty.io.netty.channel.epoll.Epoll.ensureAvailability(Epoll.java:79) … Caused by: java.lang.UnsatisfiedLinkError: could not load a native library: org_apache_hbase_thirdparty_org.apache.hadoop.hbase.shaded.netty_transport_native_epoll_x86_64 at org.apache.hbase.thirdparty.io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:224) {noformat} {{liborg_apache_hbase_thirdparty_netty_transport_native_epoll_x86_64.so}} is in the shaded jar files, but {{org_apache_hbase_thirdparty_org.apache.hadoop.hbase.shaded.netty_transport_native_epoll_x86_64.so}} is not. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (HBASE-23059) Run mvn install for root in precommit
[ https://issues.apache.org/jira/browse/HBASE-23059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23059. - Resolution: Fixed Thanks [~psomogyi] ! > Run mvn install for root in precommit > - > > Key: HBASE-23059 > URL: https://issues.apache.org/jira/browse/HBASE-23059 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: connector-1.0.1 > > > When mvn install is required in hbase-connectors the build fails to find the > parent. Need to run mvn install in root in this case. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23032) Upgrade to Curator 4.2.0
[ https://issues.apache.org/jira/browse/HBASE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23032. - Resolution: Fixed > Upgrade to Curator 4.2.0 > > > Key: HBASE-23032 > URL: https://issues.apache.org/jira/browse/HBASE-23032 > Project: HBase > Issue Type: Improvement >Reporter: Tamas Penzes >Assignee: Balazs Meszaros >Priority: Major > Fix For: 3.0.0, 2.3.0, connector-1.0.1, > hbase-filesystem-1.0.0-alpha2 > > > Curator 4.0 is quite old; it's time to jump to 4.2.0. > We should do it in hbase-connectors and hbase-filesystem too. > [http://curator.apache.org/zk-compatibility.html] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-22982) Send SIGSTOP to hang or SIGCONT to resume rs and add graceful rolling restart
[ https://issues.apache.org/jira/browse/HBASE-22982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22982. - Resolution: Fixed Thanks for the contribution. > Send SIGSTOP to hang or SIGCONT to resume rs and add graceful rolling restart > - > > Key: HBASE-22982 > URL: https://issues.apache.org/jira/browse/HBASE-22982 > Project: HBase > Issue Type: Sub-task > Components: integration tests >Affects Versions: 3.0.0 >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > > * Add a Chaos Monkey action that uses SIGSTOP and SIGCONT to hang and resume > a ratio of region servers. > * Add a Chaos Monkey action to simulate a rolling restart including > graceful_stop like functionality that unloads the regions from the server > before a restart and then places it under load again afterwards. > * Add these actions to the relevant monkeys -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23085) Network and Data related Actions
[ https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23085. - Resolution: Fixed > Network and Data related Actions > > > Key: HBASE-23085 > URL: https://issues.apache.org/jira/browse/HBASE-23085 > Project: HBase > Issue Type: Sub-task > Components: integration tests >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Minor > Fix For: 3.0.0 > > > Add additional actions to: > * manipulate network packets with tc (reorder, lose, ...) > * add CPU load > * fill the disk > * corrupt or delete regionserver data files > Create new monkey factories for the new actions. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23085) Network and Data related Actions
[ https://issues.apache.org/jira/browse/HBASE-23085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23085. - Resolution: Fixed Backported to branch-2 and branch-2.2. > Network and Data related Actions > > > Key: HBASE-23085 > URL: https://issues.apache.org/jira/browse/HBASE-23085 > Project: HBase > Issue Type: Sub-task > Components: integration tests >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Minor > Fix For: 3.0.0, 2.3.0, 2.2.3 > > > Add additional actions to: > * manipulate network packets with tc (reorder, lose, ...) > * add CPU load > * fill the disk > * corrupt or delete regionserver data files > Create new monkey factories for the new actions. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23348) Spark's createTable method throws an exception while the table is being split
Balazs Meszaros created HBASE-23348: --- Summary: Spark's createTable method throws an exception while the table is being split Key: HBASE-23348 URL: https://issues.apache.org/jira/browse/HBASE-23348 Project: HBase Issue Type: Bug Components: hbase-connectors Affects Versions: connector-1.0.0 Reporter: Balazs Meszaros {{HBaseRelation.createTable}} checks table existence with {{HBaseAdmin.isTableAvailable}} method [here|https://github.com/apache/hbase-connectors/blob/master/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/DefaultSource.scala#L167]. This is unfortunate, because it can return {{false}} while splitting, so {{createTable}} will fail. It should use {{tableExists}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
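The failure mode can be shown with a small self-contained sketch. The {{Admin}} interface below is a hypothetical stub, not the real {{org.apache.hadoop.hbase.client.Admin}} surface: during a split, a table can exist while not being available, so a guard based on availability wrongly decides to re-create an existing table.

```java
// Minimal sketch of the guard logic described above. The Admin interface
// here is a hypothetical stand-in; the two method names mirror the checks
// mentioned in the report.
public class CreateTableGuardSketch {
  interface Admin {
    boolean tableExists(String name);
    boolean isTableAvailable(String name); // may be false mid-split
  }

  // Buggy guard: attempts creation whenever the table is not *available*,
  // which can be the case for an existing table that is being split.
  static boolean buggyShouldCreate(Admin admin, String name) {
    return !admin.isTableAvailable(name);
  }

  // Suggested guard: attempts creation only when the table does not *exist*.
  static boolean shouldCreate(Admin admin, String name) {
    return !admin.tableExists(name);
  }
}
```

With a stub that reports "exists but not available" (a table mid-split), the buggy guard still tries to create the table, while the existence-based guard does not.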
[jira] [Resolved] (HBASE-23348) Spark's createTable method throws an exception while the table is being split
[ https://issues.apache.org/jira/browse/HBASE-23348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23348. - Fix Version/s: connector-1.0.1 Resolution: Fixed Thanks [~rabikumar.kc]! > Spark's createTable method throws an exception while the table is being split > - > > Key: HBASE-23348 > URL: https://issues.apache.org/jira/browse/HBASE-23348 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Affects Versions: connector-1.0.0 >Reporter: Balazs Meszaros >Assignee: Rabi Kumar K C >Priority: Major > Labels: beginner > Fix For: connector-1.0.1 > > > {{HBaseRelation.createTable}} checks table existence with > {{HBaseAdmin.isTableAvailable}} method > [here|https://github.com/apache/hbase-connectors/blob/master/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/DefaultSource.scala#L167]. > This is unfortunate, because it can return {{false}} while splitting, so > {{createTable}} will fail. It should use {{tableExists}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23351) hbase-connectors tests fail when starting HBase mini cluster
[ https://issues.apache.org/jira/browse/HBASE-23351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23351. - Fix Version/s: connector-1.0.1 Resolution: Fixed > hbase-connectors tests fail when starting HBase mini cluster > -- > > Key: HBASE-23351 > URL: https://issues.apache.org/jira/browse/HBASE-23351 > Project: HBase > Issue Type: Bug >Affects Versions: connector-1.0.0 >Reporter: István Adamcsik >Assignee: István Adamcsik >Priority: Major > Fix For: connector-1.0.1 > > > Due to a shading issue in hbase-shaded-testing-util, whereby jackson is > omitted from the fat jar, the hbase-connectors tests fail to start mini > clusters, which fail with the error below. > {code:java} > *** RUN ABORTED *** java.lang.NoClassDefFoundError: Could not initialize > class org.apache.hadoop.hdfs.web.WebHdfsFileSystem at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.initWebHdfs(NameNodeHttpServer.java:78) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:166) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:842) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:693) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:906) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:885) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1162) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1037) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:830{code} > This issue is fixed in hbase 2.2.2 with HBASE-23007. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23790) Bump netty version to 4.1.45.Final in hbase-thirdparty
[ https://issues.apache.org/jira/browse/HBASE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23790. - Resolution: Fixed > Bump netty version to 4.1.45.Final in hbase-thirdparty > -- > > Key: HBASE-23790 > URL: https://issues.apache.org/jira/browse/HBASE-23790 > Project: HBase > Issue Type: Improvement > Components: hbase-thirdparty >Affects Versions: thirdparty-3.2.0 >Reporter: Tamas Penzes >Assignee: Tamas Penzes >Priority: Major > Fix For: thirdparty-3.3.0 > > > We do have a new netty version 4.1.45.Final which we could update to. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24088) Solve the ambiguous reference for scala 2.12
[ https://issues.apache.org/jira/browse/HBASE-24088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-24088. - Fix Version/s: connector-1.0.1 Resolution: Fixed > Solve the ambiguous reference for scala 2.12 > > > Key: HBASE-24088 > URL: https://issues.apache.org/jira/browse/HBASE-24088 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Reporter: Liebing Yu >Priority: Minor > Fix For: connector-1.0.1 > > > When the {{hbase-spark}} module is compiled under the scala 2.12 environment, > the following error appears in line 216 of the > {{org.apache.hadoop.hbase.spark.datasources.HBaseTableScanRDD}} class: > {code:java} > [ERROR] [Error] > /path/hbase-connectors/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/datasources/HBaseTableScanRDD.scala:216: > ambiguous reference to overloaded definition, both method > addTaskCompletionListener in class TaskContext of type [U](f: > org.apache.spark.TaskContext => U)org.apache.spark.TaskContext and method > addTaskCompletionListener in class TaskContext of type (listener: > org.apache.spark.util.TaskCompletionListener)org.apache.spark.TaskContext > match argument types (org.apache.spark.TaskContext => Unit){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24276) hbase spark connector doesn't support writing to table not in default namespace
[ https://issues.apache.org/jira/browse/HBASE-24276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-24276. - Fix Version/s: connector-1.0.1 Resolution: Fixed > hbase spark connector doesn't support writing to table not in default > namespace > --- > > Key: HBASE-24276 > URL: https://issues.apache.org/jira/browse/HBASE-24276 > Project: HBase > Issue Type: Bug > Components: hbase-connectors, spark >Affects Versions: connector-1.0.0 > Environment: - HBase 2.2.4 > - Hadoop 2.10.0 > - Spark 2.4.5 >Reporter: Naitree Zhu >Priority: Major > Fix For: connector-1.0.1 > > > Defining the following table catalog: > {code:java} > val catalog = """{ > |"table": {"namespace": "ns1", "name": "test1"}, > |"rowkey": "id", > |"columns": { > |"id": {"cf": "rowkey", "col": "id", "type": "string"}, > |"x": {"cf": "d", "col": "xxx", "type": "int"} > |} > |}""".stripMargin > {code} > Try to write some test data to {{ns1:test1}} table using spark: > {code:java} > val df = Seq(("abc", 1), ("def", 2)).toDF("id", "x") > df.write.options(Map(HBaseTableCatalog.tableCatalog -> catalog, > "hbase.spark.use.hbasecontext" -> "false", HBaseTableCatalog.newTable-> > "5")).format("org.apache.hadoop.hbase.spark").save() > {code} > After executing the code above, I found out that the test data was written to > {{default:test1}}, rather than {{ns1:test1}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
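The expected behavior can be sketched with a hypothetical helper (not connector code): HBase addresses tables as {{namespace:qualifier}}, so the namespace from the catalog must be carried into the target table name, otherwise writes land in the default namespace as described above.

```java
// Hypothetical helper illustrating the naming rule the report expects.
public class TableNameSketch {
  static String qualifiedName(String namespace, String name) {
    // Tables in the default namespace are addressed by the bare name.
    if (namespace == null || namespace.isEmpty() || "default".equals(namespace)) {
      return name;
    }
    return namespace + ":" + name; // HBase's "ns:table" form
  }
}
```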
[jira] [Resolved] (HBASE-25146) Add extra logging at info level to HFileCorruptionChecker in order to report progress
[ https://issues.apache.org/jira/browse/HBASE-25146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25146. - Fix Version/s: 2.2.7 2.4.0 2.3.3 3.0.0-alpha-1 Resolution: Fixed > Add extra logging at info level to HFileCorruptionChecker in order to report > progress > - > > Key: HBASE-25146 > URL: https://issues.apache.org/jira/browse/HBASE-25146 > Project: HBase > Issue Type: Improvement > Components: hbck, hbck2 >Reporter: Andor Molnar >Assignee: Andor Molnar >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7 > > > Currently there's no progress reporting in HFileCorruptionChecker: neither in > the logs nor in stdout. It only creates a report about the entire operation > at the end of the process and emits some warning messages if corruption is > found. > Adding some logging about the progress would be beneficial for long-running > checks, indicating that the process is healthy. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25267) Add SSL keystore type and truststore related configs for HBase RESTServer
[ https://issues.apache.org/jira/browse/HBASE-25267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25267. - Fix Version/s: 2.3.4 2.2.7 2.4.0 3.0.0-alpha-1 Resolution: Fixed > Add SSL keystore type and truststore related configs for HBase RESTServer > - > > Key: HBASE-25267 > URL: https://issues.apache.org/jira/browse/HBASE-25267 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Mate Szalay-Beko >Assignee: Mate Szalay-Beko >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.7, 2.3.4 > > > The RESTServer currently relies on the following parameters to configure SSL > on the REST API: > * {{hbase.rest.ssl.enabled}} > * {{hbase.rest.ssl.keystore.store}} > * {{hbase.rest.ssl.keystore.password}} > * {{hbase.rest.ssl.keystore.keypassword}} > * {{hbase.rest.ssl.exclude.cipher.suites}} > * {{hbase.rest.ssl.include.cipher.suites}} > * {{hbase.rest.ssl.exclude.protocols}} > * {{hbase.rest.ssl.include.protocols}} > In this patch I want to introduce the following new parameters: > * {{hbase.rest.ssl.keystore.type}} > * {{hbase.rest.ssl.truststore.store}} > * {{hbase.rest.ssl.truststore.password}} > * {{hbase.rest.ssl.truststore.type}} > If any of the new parameters is not provided, then we fall back to > the current behaviour (e.g. assuming JKS keystore/truststore types, or no > passwords, or no custom trust store file). -- This message was sent by Atlassian Jira (v8.3.4#803005)
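The new parameters would be supplied in hbase-site.xml alongside the existing keystore ones; the fragment below is illustrative only, and the paths and passwords are placeholders:

```xml
<!-- Illustrative hbase-site.xml fragment; values are placeholders. -->
<property>
  <name>hbase.rest.ssl.keystore.type</name>
  <value>jks</value>
</property>
<property>
  <name>hbase.rest.ssl.truststore.store</name>
  <value>/etc/security/rest-truststore.jks</value>
</property>
<property>
  <name>hbase.rest.ssl.truststore.password</name>
  <value>changeit</value>
</property>
<property>
  <name>hbase.rest.ssl.truststore.type</name>
  <value>jks</value>
</property>
```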
[jira] [Created] (HBASE-25307) ThreadLocal pooling leads to NullPointerException
Balazs Meszaros created HBASE-25307: --- Summary: ThreadLocal pooling leads to NullPointerException Key: HBASE-25307 URL: https://issues.apache.org/jira/browse/HBASE-25307 Project: HBase Issue Type: Bug Components: Client Affects Versions: 3.0.0-alpha-1 Reporter: Balazs Meszaros Assignee: Balazs Meszaros We got an NPE after setting {{hbase.client.ipc.pool.type}} to {{thread-local}}: {noformat} 20/11/18 01:53:04 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.NullPointerException java.lang.NullPointerException at org.apache.hadoop.hbase.ipc.AbstractRpcClient.close(AbstractRpcClient.java:496) at org.apache.hadoop.hbase.client.ConnectionImplementation.close(ConnectionImplementation.java:1944) at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.close(TableInputFormatBase.java:660) {noformat} The root cause of the issue is probably at {{PoolMap.ThreadLocalPool.values()}}: {code:java} public Collection values() { List values = new ArrayList<>(); values.add(get()); return values; } {code} It adds {{null}} into the collection if the current thread does not have any resources, which leads to an NPE later. I traced the usages of values() and it should return every resource, not just the one attached to the caller thread. -- This message was sent by Atlassian Jira (v8.3.4#803005)
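The null-element problem is easy to reproduce outside HBase. Below is a minimal stand-in (not the real {{PoolMap}}) contrasting the quoted values() with the behavior the report asks for: every thread's resource, never null.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

// Minimal stand-in for a thread-local resource pool; not the real PoolMap.
public class ThreadLocalPoolSketch {
  private final ThreadLocal<String> current = new ThreadLocal<>();
  // Shared registry so values() can report every thread's resource.
  private final List<String> all = Collections.synchronizedList(new ArrayList<>());

  void put(String resource) {
    current.set(resource);
    all.add(resource);
  }

  // Mirrors the quoted implementation: adds the current thread's entry
  // even when it is null, which is what later triggers the NPE.
  Collection<String> buggyValues() {
    List<String> values = new ArrayList<>();
    values.add(current.get());
    return values;
  }

  // Sketch of the requested behavior: all resources, no nulls.
  Collection<String> values() {
    return new ArrayList<>(all);
  }
}
```

A fresh pool with no per-thread resource yields a collection containing null from the buggy method, while the registry-backed variant never does.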
[jira] [Resolved] (HBASE-25236) [hbase-connectors] Run package phase on spark modules
[ https://issues.apache.org/jira/browse/HBASE-25236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25236. - Fix Version/s: connector-1.0.1 Resolution: Fixed > [hbase-connectors] Run package phase on spark modules > - > > Key: HBASE-25236 > URL: https://issues.apache.org/jira/browse/HBASE-25236 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Affects Versions: connector-1.0.1 >Reporter: Peter Somogyi >Assignee: Peter Somogyi >Priority: Major > Fix For: connector-1.0.1 > > > The precommit job fails if a change is made in the spark module because the > protobuf generator plugin runs in package phase. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23592) Refactor tests in hbase-kafka-proxy in hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-23592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-23592. - Fix Version/s: connector-1.0.1 Resolution: Fixed > Refactor tests in hbase-kafka-proxy in hbase-connectors > --- > > Key: HBASE-23592 > URL: https://issues.apache.org/jira/browse/HBASE-23592 > Project: HBase > Issue Type: Improvement >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: connector-1.0.1 > > > The tests in {{hbase-kafka-proxy}} within {{hbase-connectors}} should be > refactored to > * move the usage of the character set to {{StandardCharsets}} > * remove printing the stacktrace > * simplification of the asserts -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24230) Support user-defined version timestamp when bulk load data
[ https://issues.apache.org/jira/browse/HBASE-24230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-24230. - Fix Version/s: connector-1.0.1 Resolution: Fixed > Support user-defined version timestamp when bulk load data > -- > > Key: HBASE-24230 > URL: https://issues.apache.org/jira/browse/HBASE-24230 > Project: HBase > Issue Type: Improvement > Components: hbase-connectors >Affects Versions: 1.0.0 >Reporter: Xiao Zhang >Assignee: Xiao Zhang >Priority: Minor > Fix For: connector-1.0.1 > > > In hbase-connectors-1.0.0, when bulk loading data, only the current system > time can be used as the KeyValue version timestamp. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25551) Hbase connector with apache spark
[ https://issues.apache.org/jira/browse/HBASE-25551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25551. - Resolution: Invalid Timed out, closing it. > Hbase connector with apache spark > - > > Key: HBASE-25551 > URL: https://issues.apache.org/jira/browse/HBASE-25551 > Project: HBase > Issue Type: Wish >Reporter: Shubham swaraj >Priority: Major > > Hbase connector with apache -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25586) Fix HBASE-22492 on branch-2
Balazs Meszaros created HBASE-25586: --- Summary: Fix HBASE-22492 on branch-2 Key: HBASE-25586 URL: https://issues.apache.org/jira/browse/HBASE-25586 Project: HBase Issue Type: Bug Components: rpc Affects Versions: 2.2.6 Reporter: Balazs Meszaros Assignee: Balazs Meszaros The issue also exists on branch-2. {noformat} 17:27:41.556 [pool-1-thread-8] WARN org.apache.hadoop.hbase.client.ScannerCallable - Ignore, probably already closed. Current scan: {"startRow":"1999","stopRow":"","batch":20,"cacheBlocks":true,"totalColumns":0,"maxResultSize":"2097152","families":{},"caching":2147483647,"maxVersions":1,"timeRange":["0","9223372036854775807"]} on table: cluster_test javax.security.sasl.SaslException: Call to hbase-secure6-1.hbase-secure6.root.hwx.site/172.27.162.2:22101 failed on local exception: javax.security.sasl.SaslException: Gap token at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:224) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:383) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:89) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:414) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.cleanupCalls(NettyRpcDuplexHandler.java:203) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.exceptionCaught(NettyRpcDuplexHandler.java:220) at
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273) at org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:143) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:381) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
[jira] [Resolved] (HBASE-25584) FREE ROBUX GENERATOR - ROBLOX FREE ROBUX GENERATOR NO HUMAN VERIFICATION 2021
[ https://issues.apache.org/jira/browse/HBASE-25584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25584. - Resolution: Invalid -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25585) FREE V BUCKS GENERATOR - FORTNITE FREE V BUCKS GENERATOR NO HUMAN VERIFICATION 2021
[ https://issues.apache.org/jira/browse/HBASE-25585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25585. - Resolution: Invalid -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25586) Fix HBASE-22492 on branch-2 (SASL GapToken)
[ https://issues.apache.org/jira/browse/HBASE-25586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-25586. - Fix Version/s: 3.0.0-alpha-1 Resolution: Fixed > Fix HBASE-22492 on branch-2 (SASL GapToken) > --- > > Key: HBASE-25586 > URL: https://issues.apache.org/jira/browse/HBASE-25586 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 2.2.6 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: 3.0.0-alpha-1 > > Attachments: SaslTest.java > > > The issue also exists on branch-2. > {noformat} > 17:27:41.556 [pool-1-thread-8] WARN > org.apache.hadoop.hbase.client.ScannerCallable - Ignore, probably already > closed. Current scan: > {"startRow":"1999","stopRow":"","batch":20,"cacheBlocks":true,"totalColumns":0,"maxResultSize":"2097152","families":{},"caching":2147483647,"maxVersions":1,"timeRange":["0","9223372036854775807"]} > on table: cluster_test > javax.security.sasl.SaslException: Call to XXX/172.27.162.2:22101 failed on > local exception: javax.security.sasl.SaslException: Gap token > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:224) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:383) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:89) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:414) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) > at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117) > at
org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132) > at > org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.cleanupCalls(NettyRpcDuplexHandler.java:203) > at > org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.exceptionCaught(NettyRpcDuplexHandler.java:220) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:281) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:273) > at > org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:143) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:302) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:381) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) > at > org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) > at > org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) > at > org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) > at > org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) >
[jira] [Resolved] (HBASE-26211) [hbase-connectors] Pushdown filters in Spark do not work correctly with long types
[ https://issues.apache.org/jira/browse/HBASE-26211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-26211. - Resolution: Fixed [~hiliev] thanks for the contribution! I merged your fix. > [hbase-connectors] Pushdown filters in Spark do not work correctly with long > types > -- > > Key: HBASE-26211 > URL: https://issues.apache.org/jira/browse/HBASE-26211 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Affects Versions: 1.0.0 >Reporter: Hristo Iliev >Assignee: Hristo Iliev >Priority: Major > Fix For: hbase-connectors-1.1.0 > > > Reading from an HBase table and filtering on a LONG column does not seem to > work correctly. > {{Dataset df = spark.read() > .format("org.apache.hadoop.hbase.spark") > .option("hbase.columns.mapping", "id STRING :key, v LONG cf:v") > ... > .load(); > df.filter("v > 100").show();}} > Expected behaviour is to show rows where cf:v > 100, but instead an empty > dataset is shown. > Moreover, replacing {{"v > 100"}} with {{"v >= 100"}} results in a dataset > where some rows have values of v less than 100. > The problem appears to be that long values are decoded incorrectly as > integers in {{NaiveEncoder.filter}}: > {{case LongEnc | TimestampEnc => > val in = Bytes.toInt(input, offset1) > val value = Bytes.toInt(filterBytes, offset2 + 1) > compare(in.compareTo(value), ops)}} > It looks like that error hasn’t been caught because > {{DynamicLogicExpressionSuite}} lacks test cases with long values. > The erroneous code is also present in the master branch. We have extended the > test suite and implemented a quick fix and will PR on GitHub. -- This message was sent by Atlassian Jira (v8.3.4#803005)
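The decoding mistake is reproducible with plain byte handling (a sketch using java.nio rather than the actual Bytes/NaiveEncoder code): a LONG is stored as 8 big-endian bytes, and a 4-byte int read sees only the high half, which is 0 for small positive values, so "v > 100" compares 0 > 0 and drops every row.

```java
import java.nio.ByteBuffer;

// Sketch of the bug described above, using java.nio instead of HBase's
// Bytes utility: decoding an 8-byte big-endian long with a 4-byte int
// read yields only the high half of the value.
public class LongFilterSketch {
  static byte[] encodeLong(long v) {
    return ByteBuffer.allocate(8).putLong(v).array();
  }

  // Buggy decode: first 4 bytes as an int (the long's high half).
  static int buggyDecode(byte[] bytes) {
    return ByteBuffer.wrap(bytes, 0, 4).getInt();
  }

  // Correct decode: all 8 bytes as a long.
  static long decode(byte[] bytes) {
    return ByteBuffer.wrap(bytes).getLong();
  }
}
```

For a stored value of 200 and a filter bound of 100, the buggy decode reduces both to 0, so the row is silently filtered out; the 8-byte decode compares them correctly.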
[jira] [Resolved] (HBASE-26789) Automatically add default security headers to http/rest if SSL enabled
[ https://issues.apache.org/jira/browse/HBASE-26789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-26789. - Resolution: Fixed > Automatically add default security headers to http/rest if SSL enabled > -- > > Key: HBASE-26789 > URL: https://issues.apache.org/jira/browse/HBASE-26789 > Project: HBase > Issue Type: Improvement > Components: REST, UI >Affects Versions: 2.0.6, 2.1.10, 2.2.7, 3.0.0-alpha-2 >Reporter: Andor Molnar >Assignee: Andor Molnar >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.11 > > > In the previous ticket https://issues.apache.org/jira/browse/HBASE-23303 we > implemented these security headers as optional, and they had to be explicitly > enabled in the config. > With this change the headers will automatically be added with meaningful > default values if SSL is enabled. -- This message was sent by Atlassian Jira (v8.20.1#820001)
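As a sketch of what such defaults could look like: the header names below are the standard ones made configurable in HBASE-23303, but the exact default values are an assumption here, not a quote of the patch. The SSL condition matters because HSTS is only meaningful over TLS:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DefaultSecurityHeaders {
    // Hypothetical defaults; the values HBase actually applies may differ.
    static Map<String, String> defaults(boolean sslEnabled) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("X-Content-Type-Options", "nosniff");
        headers.put("X-XSS-Protection", "1; mode=block");
        if (sslEnabled) {
            // Strict-Transport-Security is only added on TLS-enabled endpoints.
            headers.put("Strict-Transport-Security",
                "max-age=63072000;includeSubDomains;preload");
        }
        return headers;
    }

    public static void main(String[] args) {
        System.out.println(defaults(true).keySet());
    }
}
```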
[jira] [Resolved] (HBASE-27272) Enable code coverage reporting to SonarQube in hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-27272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-27272. - Fix Version/s: hbase-connectors-1.0.1 Resolution: Fixed > Enable code coverage reporting to SonarQube in hbase-connectors > --- > > Key: HBASE-27272 > URL: https://issues.apache.org/jira/browse/HBASE-27272 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Reporter: Dóra Horváth >Assignee: Dóra Horváth >Priority: Minor > Fix For: hbase-connectors-1.0.1 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-22939) SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned on.
[ https://issues.apache.org/jira/browse/HBASE-22939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-22939. - Resolution: Fixed > SpaceQuotas- Bulkload from different hdfs failed when space quotas are turned > on. > - > > Key: HBASE-22939 > URL: https://issues.apache.org/jira/browse/HBASE-22939 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0-alpha-1 >Reporter: Yiran Wu >Assignee: Yiran Wu >Priority: Major > Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4, 2.4.15 > > Attachments: HBASE-22939-v0.patch, HBASE-22939-v1.patch, > HBASE-22939-v2.patch, HBASE-22939_branch-2.patch, HBASE-22939_branch-2.x.patch > > > {code:java} > Caused by: > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): > java.io.IOException: Wrong FS: > hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d, > expected: hdfs://snake > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > Caused by: java.lang.IllegalArgumentException: Wrong FS: > hdfs://172.16.159.148:8020/tmp/bkldOutPut/fm1/327d2de5db4d4f0da667bfdf77105d4d, > expected: hdfs://snake > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:665) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214) > at > org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440) > at > org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437) > at > 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > at > org.apache.hadoop.hbase.quotas.policies.AbstractViolationPolicyEnforcement.getFileSize(AbstractViolationPolicyEnforcement.java:95) > at > org.apache.hadoop.hbase.quotas.policies.MissingSnapshotViolationPolicyEnforcement.computeBulkLoadSize(MissingSnapshotViolationPolicyEnforcement.java:53) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2407) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:374) > ... 3 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
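The "Wrong FS" failure above comes from Hadoop's path-ownership check: a FileSystem instance rejects any path whose scheme or authority differs from its own URI. The sketch below is plain Java with no Hadoop dependency; `belongsTo` is a simplified, hypothetical stand-in for `FileSystem.checkPath`. It shows why a bulkload path addressed by raw namenode IP fails against a cluster whose default FS is the HA nameservice `hdfs://snake`:

```java
import java.net.URI;

public class WrongFsDemo {
    // Simplified ownership check: scheme and authority must both match.
    static boolean belongsTo(URI fsUri, URI path) {
        return fsUri.getScheme().equalsIgnoreCase(path.getScheme())
            && fsUri.getAuthority().equalsIgnoreCase(path.getAuthority());
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://snake"); // HA nameservice
        URI bulkload = URI.create("hdfs://172.16.159.148:8020/tmp/bkldOutPut");
        // Same scheme, different authority -> the real checkPath throws
        // IllegalArgumentException("Wrong FS: ..., expected: hdfs://snake").
        System.out.println(belongsTo(defaultFs, bulkload)); // false
    }
}
```

The general remedy is to resolve the FileSystem from the path itself rather than using the default one; the exact change made here is in the attached patches.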
[jira] [Resolved] (HBASE-27395) Adding description to Prometheus metrics
[ https://issues.apache.org/jira/browse/HBASE-27395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-27395. - Fix Version/s: 2.6.0 3.0.0-alpha-4 Resolution: Fixed > Adding description to Prometheus metrics > > > Key: HBASE-27395 > URL: https://issues.apache.org/jira/browse/HBASE-27395 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 2.6.0, 3.0.0-alpha-3 >Reporter: Luca Kovacs >Assignee: Luca Kovacs >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4 > > > As the /jmx endpoint supports enabling descriptions for metrics via the > 'description=true' URL parameter, I would like to implement this feature for > the /prometheus endpoint as well. -- This message was sent by Atlassian Jira (v8.20.10#820010)
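In the Prometheus exposition format, descriptions are carried by `# HELP` comment lines (and the metric kind by `# TYPE`). A minimal sketch of the proposed toggle; this is a hypothetical renderer, not HBase's actual servlet code, and the metric name used is made up:

```java
public class PrometheusHelpDemo {
    // Render one gauge in Prometheus text exposition format, emitting the
    // HELP/TYPE comment lines only when description=true is requested.
    static String render(String name, String help, double value, boolean description) {
        StringBuilder sb = new StringBuilder();
        if (description) {
            sb.append("# HELP ").append(name).append(' ').append(help).append('\n');
            sb.append("# TYPE ").append(name).append(" gauge\n");
        }
        sb.append(name).append(' ').append(value).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(render("region_server_store_file_count",
            "Number of store files", 42, true));
    }
}
```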
[jira] [Resolved] (HBASE-27406) Make "/prometheus" endpoint accessible from HBase UI
[ https://issues.apache.org/jira/browse/HBASE-27406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-27406. - Resolution: Fixed > Make "/prometheus" endpoint accessible from HBase UI > > > Key: HBASE-27406 > URL: https://issues.apache.org/jira/browse/HBASE-27406 > Project: HBase > Issue Type: Improvement > Components: UI >Affects Versions: 2.6.0, 3.0.0-alpha-3 >Reporter: Luca Kovacs >Assignee: Luca Kovacs >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4 > > > The Prometheus metrics feature was added in HBASE-20904, but it is not in the > UI yet, so the only way to use it is via its URL. > I would like to change the web UI so the endpoint can be accessed via a > dropdown menu there, and I would like to include the description option, so > metrics can be viewed with or without descriptions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27435) Make Prometheus metrics queryable
[ https://issues.apache.org/jira/browse/HBASE-27435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-27435. - Fix Version/s: 2.6.0 3.0.0-alpha-4 Resolution: Fixed > Make Prometheus metrics queryable > - > > Key: HBASE-27435 > URL: https://issues.apache.org/jira/browse/HBASE-27435 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Luca Kovacs >Assignee: Luca Kovacs >Priority: Minor > Fix For: 2.6.0, 3.0.0-alpha-4 > > > To provide the same features that JMX has, I would like to implement querying > in Prometheus metrics. > This includes the *_qry_* URL parameter which can be used with the Prometheus > name of the metric. -- This message was sent by Atlassian Jira (v8.20.10#820010)
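A sketch of what the qry filter described above could look like. This is a hypothetical helper, and whether the real implementation matches by substring, prefix, or exact Prometheus name is an assumption here; the metric names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrometheusQueryDemo {
    // Keep only metrics whose Prometheus name contains the query string,
    // mirroring the qry-style filtering the /jmx endpoint offers.
    static Map<String, Double> filter(Map<String, Double> metrics, String qry) {
        Map<String, Double> out = new LinkedHashMap<>();
        metrics.forEach((name, value) -> {
            if (qry == null || name.contains(qry)) {
                out.put(name, value);
            }
        });
        return out;
    }

    public static void main(String[] args) {
        Map<String, Double> m = new LinkedHashMap<>();
        m.put("master_num_open_connections", 3.0);
        m.put("region_server_requests", 120.0);
        System.out.println(filter(m, "region_server").keySet()); // [region_server_requests]
    }
}
```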
[jira] [Created] (HBASE-27673) Fix mTLS client authentication
Balazs Meszaros created HBASE-27673: --- Summary: Fix mTLS client authentication Key: HBASE-27673 URL: https://issues.apache.org/jira/browse/HBASE-27673 Project: HBase Issue Type: Bug Components: rpc Affects Versions: 3.0.0-alpha-3 Reporter: Balazs Meszaros Assignee: Balazs Meszaros The exception that I get: {noformat} 23/02/22 15:18:06 ERROR tls.HBaseTrustManager: Failed to verify host address: 127.0.0.1 javax.net.ssl.SSLPeerUnverifiedException: Certificate for <127.0.0.1> doesn't match any of the subject alternative names: [***] at org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.matchIPAddress(HBaseHostnameVerifier.java:144) at org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.verify(HBaseHostnameVerifier.java:117) at org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.performHostVerification(HBaseTrustManager.java:143) at org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.checkClientTrusted(HBaseTrustManager.java:97) ... 23/02/22 15:18:06 ERROR tls.HBaseTrustManager: Failed to verify hostname: localhost javax.net.ssl.SSLPeerUnverifiedException: Certificate for <localhost> doesn't match any of the subject alternative names: [***] at org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.matchDNSName(HBaseHostnameVerifier.java:159) at org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.verify(HBaseHostnameVerifier.java:119) at org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.performHostVerification(HBaseTrustManager.java:171) at org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.checkClientTrusted(HBaseTrustManager.java:97) ... 23/02/22 15:18:06 WARN ipc.NettyRpcServer: Connection /100.100.124.2:47109; caught unexpected downstream exception. 
org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Failed to verify both host address and host name at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: javax.net.ssl.SSLHandshakeException: Failed to verify both host address and host name at sun.security.ssl.Alert.createSSLException(Alert.java:131) at sun.security.ssl.TransportContext.fatal(TransportContext.java:324) at sun.security.ssl.TransportContext.fatal(TransportContext.java:267) at sun.security.ssl.TransportContext.fatal(TransportContext.java:262) at sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkClientCerts(CertificateMessage.java:700) at sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(CertificateMessage.java:411) at sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(CertificateMessage.java:375) at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:377) at sun.security.ssl
[jira] [Resolved] (HBASE-27673) Fix mTLS client hostname verification
[ https://issues.apache.org/jira/browse/HBASE-27673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-27673. - Fix Version/s: 2.6.0 3.0.0-alpha-4 Resolution: Fixed > Fix mTLS client hostname verification > - > > Key: HBASE-27673 > URL: https://issues.apache.org/jira/browse/HBASE-27673 > Project: HBase > Issue Type: Bug > Components: rpc >Affects Versions: 3.0.0-alpha-3 >Reporter: Balazs Meszaros >Assignee: Balazs Meszaros >Priority: Major > Fix For: 2.6.0, 3.0.0-alpha-4 > > > The exception that I get: > {noformat} > 23/02/22 15:18:06 ERROR tls.HBaseTrustManager: Failed to verify host address: > 127.0.0.1 > javax.net.ssl.SSLPeerUnverifiedException: Certificate for <127.0.0.1> doesn't > match any of the subject alternative names: [***] > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.matchIPAddress(HBaseHostnameVerifier.java:144) > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.verify(HBaseHostnameVerifier.java:117) > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.performHostVerification(HBaseTrustManager.java:143) > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.checkClientTrusted(HBaseTrustManager.java:97) > ... > 23/02/22 15:18:06 ERROR tls.HBaseTrustManager: Failed to verify hostname: > localhost > javax.net.ssl.SSLPeerUnverifiedException: Certificate for <localhost> doesn't > match any of the subject alternative names: [***] > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.matchDNSName(HBaseHostnameVerifier.java:159) > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseHostnameVerifier.verify(HBaseHostnameVerifier.java:119) > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.performHostVerification(HBaseTrustManager.java:171) > at > org.apache.hadoop.hbase.io.crypto.tls.HBaseTrustManager.checkClientTrusted(HBaseTrustManager.java:97) > ... 
> 23/02/22 15:18:06 WARN ipc.NettyRpcServer: Connection /100.100.124.2:47109; > caught unexpected downstream exception. > org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: > javax.net.ssl.SSLHandshakeException: Failed to verify both host address and > host name > at > org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:499) > at > org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) > at > org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) > at > org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > at > org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499) > at > org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) > at > 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) > at > org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.lang.Thread.run(Thread.java:750) > Caused by: javax.net.ssl.SSLHandshakeException: Failed to verify both host > address and host name > at sun.security.ssl.Alert.createSSLException(Alert.java:131) > at sun.security.ssl.TransportContext.fatal(TransportContext.java:324) > at sun.security.ssl.TransportContext.fatal(TransportContext.java:267) > at sun.security.ssl.TransportContext.fatal(TransportContext.java:262) > at > sun.security.ssl.Cert
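The two verification failures above boil down to SAN matching: the connecting peer's address must match either an IP SAN or a DNS SAN from its certificate. The sketch below is a simplified, hypothetical stand-in for `HBaseHostnameVerifier` (the real class is considerably more thorough), covering only exact matches and one-label wildcards:

```java
import java.util.List;

public class SanCheckDemo {
    // Does the peer's address match any subject-alternative-name entry?
    // Wildcards like "*.example.com" match exactly one extra leading label.
    static boolean matchesSan(String peer, List<String> sans) {
        for (String san : sans) {
            if (san.equalsIgnoreCase(peer)) {
                return true;
            }
            if (san.startsWith("*.")) {
                int dot = peer.indexOf('.');
                if (dot > 0 && peer.substring(dot + 1).equalsIgnoreCase(san.substring(2))) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> sans = List.of("*.example.com", "10.0.0.5");
        System.out.println(matchesSan("rs1.example.com", sans)); // true
        // A loopback peer with no matching SAN: both the IP check and the
        // DNS check fail, which is the "Failed to verify both host address
        // and host name" handshake failure seen in the log.
        System.out.println(matchesSan("127.0.0.1", sans));       // false
    }
}
```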
[jira] [Resolved] (HBASE-27680) Bump hbase, hbase-thirdparty, hadoop and spark for hbase-connectors
[ https://issues.apache.org/jira/browse/HBASE-27680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros resolved HBASE-27680. - Fix Version/s: hbase-connectors-1.0.1 Resolution: Fixed > Bump hbase, hbase-thirdparty, hadoop and spark for hbase-connectors > --- > > Key: HBASE-27680 > URL: https://issues.apache.org/jira/browse/HBASE-27680 > Project: HBase > Issue Type: Task > Components: hbase-connectors >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: hbase-connectors-1.0.1 > > > Will update the following as part of this JIRA > * HBase: 2.4.16 > * HBase thirdparty: 4.1.4 (See > [https://github.com/apache/hbase/blob/d1714710877653691e2125bd94b68a5b484a3a06/pom.xml#L634]) > * Hadoop3: 3.2.4 (We are bumping to latest stable 3.2.x) > * Hadoop2: 2.10.0 (See > [https://github.com/apache/hbase/blob/d1714710877653691e2125bd94b68a5b484a3a06/pom.xml#L543]) > * Spark: 3.2.3 > * Scala binary: 2.12 (See > [https://github.com/apache/spark/blob/b53c341e0fefbb33d115ab630369a18765b7763d/pom.xml#L164]) > * Scala: 2.12.15 (See > [https://github.com/apache/spark/blob/b53c341e0fefbb33d115ab630369a18765b7763d/pom.xml#L163]) -- This message was sent by Atlassian Jira (v8.20.10#820010)