[jira] [Created] (HBASE-28434) Update assembly to create a tarball with hadoop and without hadoop
Nihal Jain created HBASE-28434: -- Summary: Update assembly to create a tarball with hadoop and without hadoop Key: HBASE-28434 URL: https://issues.apache.org/jira/browse/HBASE-28434 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain The goal of this task is to update the HBase assembly by providing two distinct variants - one that includes Hadoop and one that does not. Currently, our assembly includes a substantial amount of the Hadoop distribution. This task involves modifying our build and assembly process to create two separate distributions of HBase: * A variant that includes Hadoop, serving as a complete package for users who do not have a pre-existing Hadoop installation. * A leaner variant without Hadoop, suitable for environments where Hadoop is already installed and configured. This change aims to reduce the distribution size, speed up startup times, and decrease the chance of conflicts with the Hadoop jars. It also aims to reduce the number of CVE-prone JARs in the binary assemblies. The task includes ensuring that both variants function correctly in their respective scenarios and that existing functionality is not negatively impacted. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28433) Modify the assembly to not include test jars and their transitive dependencies
Nihal Jain created HBASE-28433: -- Summary: Modify the assembly to not include test jars and their transitive dependencies Key: HBASE-28433 URL: https://issues.apache.org/jira/browse/HBASE-28433 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28432) Move tools which are under test packaging to hbase-tools
Nihal Jain created HBASE-28432: -- Summary: Move tools which are under test packaging to hbase-tools Key: HBASE-28432 URL: https://issues.apache.org/jira/browse/HBASE-28432 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Initially, will prepare a list of tools having HBaseInterfaceAudience.TOOLS under test, like: * https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java * https://github.com/apache/hbase/blob/936d267d1094e37222b9b836ab068689ccce3574/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java * https://github.com/apache/hbase/blob/936d267d1094e37222b9b836ab068689ccce3574/hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java * https://github.com/apache/hbase/blob/936d267d1094e37222b9b836ab068689ccce3574/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java * https://github.com/apache/hbase/blob/936d267d1094e37222b9b836ab068689ccce3574/hbase-balancer/src/test/java/org/apache/hadoop/hbase/master/balancer/LoadBalancerPerformanceEvaluation.java The above is a list from a first analysis. Will check more. CC: [~stoty], [~zhangduo], [~ndimiduk], [~bbeaudreault] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28418) [JDK17] Jenkins build support for hbase-operator-tools
Nihal Jain created HBASE-28418: -- Summary: [JDK17] Jenkins build support for hbase-operator-tools Key: HBASE-28418 URL: https://issues.apache.org/jira/browse/HBASE-28418 Project: HBase Issue Type: Improvement Components: hbase-operator-tools, java Reporter: Nihal Jain Assignee: Nihal Jain Fix For: hbase-operator-tools-1.3.0 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27814) Add support for dump and process metrics servlet in REST InfoServer
[ https://issues.apache.org/jira/browse/HBASE-27814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-27814. Resolution: Fixed > Add support for dump and process metrics servlet in REST InfoServer > --- > > Key: HBASE-27814 > URL: https://issues.apache.org/jira/browse/HBASE-27814 > Project: HBase > Issue Type: Sub-task > Components: REST >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.6.0, 4.0.0-alpha-1, 3.0.0-beta-2 > > > Unlike other HBase Master/RS Info Servers, HBase REST Server does not provide > a way to: > * Get debug dump for quick access to stacks, logs etc. > * Get process metrics like threads, gc collectors etc. > This task is to add the above in HBase REST InfoServer. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (HBASE-27814) Add support for dump and process metrics servlet in REST InfoServer
[ https://issues.apache.org/jira/browse/HBASE-27814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain reopened HBASE-27814: > Add support for dump and process metrics servlet in REST InfoServer > --- > > Key: HBASE-27814 > URL: https://issues.apache.org/jira/browse/HBASE-27814 > Project: HBase > Issue Type: Sub-task > Components: REST >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > Labels: pull-request-available > Fix For: 2.7.0, 3.0.0-beta-2 > > > Unlike other HBase Master/RS Info Servers, HBase REST Server does not provide > a way to: > * Get debug dump for quick access to stacks, logs etc. > * Get process metrics like threads, gc collectors etc. > This task is to add the above in HBase REST InfoServer. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28408) Confusing logging during backup restore
[ https://issues.apache.org/jira/browse/HBASE-28408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-28408. Fix Version/s: 2.6.0 4.0.0-alpha-1 3.0.0-beta-2 Hadoop Flags: Reviewed Resolution: Fixed > Confusing logging during backup restore > --- > > Key: HBASE-28408 > URL: https://issues.apache.org/jira/browse/HBASE-28408 > Project: HBase > Issue Type: Bug > Components: backuprestore >Affects Versions: 2.6.0 >Reporter: Dieter De Paepe >Assignee: Dieter De Paepe >Priority: Minor > Labels: pull-request-available > Fix For: 2.6.0, 4.0.0-alpha-1, 3.0.0-beta-2 > > > Encountered this while experimenting with the backup/restore functionality. > My setup was as follows: > * Took several backups (Full1, inc2, inc3) > * Changed an entry in the "lily_tenant_acme:LILY_SETTINGS" table > * Attempt a restore (to test if my changed entry is reverted): > {code:java} > $ hbase restore -conf backup-conf.xml s3a://backuprestore-experiments/hbase > backup_1709123740345 -t "lily_tenant_acme:LILY_SETTINGS" -m > "lily_tenant_acme:LILY_SETTINGS-restored1" -o > 24/02/28 16:15:41 WARN org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil: > The addDependencyJars(Configuration, Class...) method has been deprecated > since it is easy to use incorrectly. Most users should rely on > addDependencyJars(Job) instead. See HBASE-8386 for more details. > 24/02/28 16:15:58 WARN org.apache.hadoop.hbase.tool.LoadIncrementalHFiles: > Skipping non-directory > hdfs://hdfsns/user/lily/hbase-staging/bulk_output-lily_tenant_acme-LILY_SETTINGS-restored1-1709136941410/_SUCCESS > 24/02/28 16:15:59 WARN > org.apache.hadoop.hbase.backup.impl.RestoreTablesClient: Nothing has changed, > so there is no need to restore 'lily_tenant_acme:LILY_SETTINGS' > {code} > Based on the final logging line, I presumed my restore operation had failed. > After some investigation however, I found that this was not the case: my > change was reverted as expected. 
> Some code investigation taught me this log message is shown because I was > restoring backup `inc3`, and there were no changes between `full1` and `inc3`. > I suggest rephrasing this log message, and changing it to an INFO level. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28393) Update Apache Parent POM to version 31
Nihal Jain created HBASE-28393: -- Summary: Update Apache Parent POM to version 31 Key: HBASE-28393 URL: https://issues.apache.org/jira/browse/HBASE-28393 Project: HBase Issue Type: Task Components: build, dependencies Reporter: Nihal Jain Assignee: Nihal Jain Bump to https://github.com/apache/maven-apache-parent/releases/tag/apache-31 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28392) Bump jackson version to 2.16.1
Nihal Jain created HBASE-28392: -- Summary: Bump jackson version to 2.16.1 Key: HBASE-28392 URL: https://issues.apache.org/jira/browse/HBASE-28392 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28388) Field sorting is broken in HBase Web UI
Nihal Jain created HBASE-28388: -- Summary: Field sorting is broken in HBase Web UI Key: HBASE-28388 URL: https://issues.apache.org/jira/browse/HBASE-28388 Project: HBase Issue Type: Bug Affects Versions: 2.6.0 Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28383) Update hbase-env.sh with alternates to JVM flags which are no longer supported with JDK17
Nihal Jain created HBASE-28383: -- Summary: Update hbase-env.sh with alternates to JVM flags which are no longer supported with JDK17 Key: HBASE-28383 URL: https://issues.apache.org/jira/browse/HBASE-28383 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain Some JVM flags like {{-XX:+PrintGCDetails}}, {{-XX:+PrintGCDateStamps}} etc. are no longer supported with JDK 17, and HBase would fail to start if these are passed. We should do an audit and update [https://github.com/apache/hbase/blob/master/conf/hbase-env.sh] to capture alternates/fixes. Will refer to the following for a fix/replacement: * [https://stackoverflow.com/questions/54144713/is-there-a-replacement-for-the-garbage-collection-jvm-args-in-java-11] * [https://docs.oracle.com/javase/9/tools/java.htm#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5__CONVERTGCLOGGINGFLAGSTOXLOG-A5046BD1] -- This message was sent by Atlassian Jira (v8.20.10#820010)
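As an illustration of the kind of mapping such an audit would produce, here is a sketch following the JDK unified-logging (JEP 271) conversion guidance; the exact replacements chosen for hbase-env.sh are still to be decided, and the class below is purely illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative mapping of legacy GC logging flags (removed in JDK 9+)
// to their approximate -Xlog unified-logging equivalents.
public class GcFlagMigration {
    public static Map<String, String> legacyToUnified() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("-verbose:gc", "-Xlog:gc");
        m.put("-XX:+PrintGCDetails", "-Xlog:gc*");
        // Timestamp/datestamp flags become -Xlog decorators.
        m.put("-XX:+PrintGCTimeStamps", "-Xlog:gc*::uptime");
        m.put("-XX:+PrintGCDateStamps", "-Xlog:gc*::time");
        m.put("-Xloggc:<file>", "-Xlog:gc:<file>");
        return m;
    }

    public static void main(String[] args) {
        legacyToUnified().forEach((old, repl) -> System.out.println(old + " -> " + repl));
    }
}
```

Running the main method prints each legacy flag next to its suggested replacement.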
[jira] [Created] (HBASE-28382) Build hbase-connectors with JDK17
Nihal Jain created HBASE-28382: -- Summary: Build hbase-connectors with JDK17 Key: HBASE-28382 URL: https://issues.apache.org/jira/browse/HBASE-28382 Project: HBase Issue Type: Improvement Components: java, thirdparty Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28381) Build hbase-operator-tools with JDK17
Nihal Jain created HBASE-28381: -- Summary: Build hbase-operator-tools with JDK17 Key: HBASE-28381 URL: https://issues.apache.org/jira/browse/HBASE-28381 Project: HBase Issue Type: Improvement Components: java, thirdparty Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28380) Build hbase-thirdparty with JDK17
Nihal Jain created HBASE-28380: -- Summary: Build hbase-thirdparty with JDK17 Key: HBASE-28380 URL: https://issues.apache.org/jira/browse/HBASE-28380 Project: HBase Issue Type: Task Components: java, thirdparty Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28142) Region Server Logs getting spammed with warning when storefile has no reader
[ https://issues.apache.org/jira/browse/HBASE-28142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-28142. Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.6+. Thanks for the PR [~anchalk1]. Thanks for the review [~chrajeshbab...@gmail.com]. Thanks for reporting [~nikitapande]! > Region Server Logs getting spammed with warning when storefile has no reader > > > Key: HBASE-28142 > URL: https://issues.apache.org/jira/browse/HBASE-28142 > Project: HBase > Issue Type: Improvement >Reporter: Nikita Pande >Assignee: Anchal Kejriwal >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 2.7.0, 3.0.0-beta-2 > > > For HBase tables which have IS_MOB set as TRUE and table metrics enabled, > there are warning logs getting generated: "StoreFile has a null > Reader" on the HBase region server. > After setting IS_MOB as false for a table, these logs are not visible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-28375) HBase Operator Tools fails to compile with hbase 2.6.0
[ https://issues.apache.org/jira/browse/HBASE-28375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-28375. Hadoop Flags: Reviewed Resolution: Fixed > HBase Operator Tools fails to compile with hbase 2.6.0 > -- > > Key: HBASE-28375 > URL: https://issues.apache.org/jira/browse/HBASE-28375 > Project: HBase > Issue Type: Bug > Components: hbase-operator-tools >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: hbase-operator-tools-1.3.0 > > > HBase Operator Tools fails to compile with hbase 2.6.0. > {code:java} > [ERROR] > /file_path/hbase-operator-tools/hbase-hbck2/src/main/java/org/apache/hbase/hbck1/ReplicationChecker.java:[59,49] > method getReplicationPeerStorage in class > org.apache.hadoop.hbase.replication.ReplicationStorageFactory cannot be > applied to given types; > [ERROR] required: > org.apache.hadoop.fs.FileSystem,org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration > [ERROR] found: > org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration > [ERROR] reason: actual and formal argument lists differ in length {code} > Seems there is a breaking change between > [https://github.com/apache/hbase/blob/branch-2.5/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationStorageFactory.java] > vs > [https://github.com/apache/hbase/blob/branch-2.6/hbase-replication/src/main/java/org/apache/hadoop/hbase/replication/ReplicationStorageFactory.java] > where a public method has been dropped, which is used by operator tools and > hence the build will fail for it. See > [https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/src/main/java/org/apache/hbase/hbck1/ReplicationChecker.java#L58] > where the affected method is invoked. > Since ReplicationStorageFactory is @InterfaceAudience.Private, maybe it is > fine. 
> Will try to fix and make changes in hbase-operator-tools to fall back to the new > method when building with branch-2.6. > CC: [~zhangduo] -- This message was sent by Atlassian Jira (v8.20.10#820010)
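The fallback described above could look roughly like the following reflection sketch: probe for the newer overload and fall back to the older one when it is absent. The Factory class and its create methods below are stand-ins for illustration, not the real ReplicationStorageFactory API:

```java
import java.lang.reflect.Method;

// Sketch of a version-compatibility shim: try the newer three-argument
// overload reflectively, and fall back to the older two-argument one
// if the newer signature does not exist in the hbase jars on the classpath.
public class OverloadFallback {
    // Stand-in for the real factory; in practice only one overload
    // would exist, depending on the HBase version being built against.
    static class Factory {
        public static String create(String fs, String zk, String conf) { return "new-api"; }
        public static String create(String zk, String conf) { return "old-api"; }
    }

    public static String createCompat(String fs, String zk, String conf) throws Exception {
        try {
            // Prefer the newer signature (FileSystem, ZKWatcher, Configuration).
            Method m = Factory.class.getMethod("create", String.class, String.class, String.class);
            return (String) m.invoke(null, fs, zk, conf);
        } catch (NoSuchMethodException e) {
            // Older releases only expose (ZKWatcher, Configuration).
            Method m = Factory.class.getMethod("create", String.class, String.class);
            return (String) m.invoke(null, zk, conf);
        }
    }
}
```

The same probe-then-fall-back shape lets a single hbase-operator-tools build compile and run against both branch-2.5 and branch-2.6 era jars.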
[jira] [Created] (HBASE-28375) Build HBase Operator tool with hbase 2.6.0
Nihal Jain created HBASE-28375: -- Summary: Build HBase Operator tool with hbase 2.6.0 Key: HBASE-28375 URL: https://issues.apache.org/jira/browse/HBASE-28375 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain HBase Operator Tools fails to compile with hbase 2.6.0. {code:java} [ERROR] /Users/nihjain/code/visa/hbase-operator-tools/hbase-hbck2/src/main/java/org/apache/hbase/hbck1/ReplicationChecker.java:[59,49] method getReplicationPeerStorage in class org.apache.hadoop.hbase.replication.ReplicationStorageFactory cannot be applied to given types; [ERROR] required: org.apache.hadoop.fs.FileSystem,org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration [ERROR] found: org.apache.hadoop.hbase.zookeeper.ZKWatcher,org.apache.hadoop.conf.Configuration [ERROR] reason: actual and formal argument lists differ in length {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28368) Backport "HBASE-27693 Support for Hadoop's LDAP Authentication mechanism (Web UI only)" to branch-2
Nihal Jain created HBASE-28368: -- Summary: Backport "HBASE-27693 Support for Hadoop's LDAP Authentication mechanism (Web UI only)" to branch-2 Key: HBASE-28368 URL: https://issues.apache.org/jira/browse/HBASE-28368 Project: HBase Issue Type: New Feature Reporter: Yash Dodeja Assignee: Yash Dodeja Fix For: 3.0.0-alpha-4 Hadoop's AuthenticationFilter has changed and now has support for ldap mechanism too. HBase still uses an older version tightly coupled with kerberos and spnego as the only auth mechanisms. HADOOP-12082 has added support for multiple auth handlers including LDAP. On trying to use Hadoop's AuthenticationFilterInitializer in hbase.http.filter.initializers, there is a casting exception as HBase requires it to extend org.apache.hadoop.hbase.http.FilterInitializer. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28367) Backport "HBASE-27811 Enable cache control for logs endpoint and set max age as 0" to branch-2
Nihal Jain created HBASE-28367: -- Summary: Backport "HBASE-27811 Enable cache control for logs endpoint and set max age as 0" to branch-2 Key: HBASE-28367 URL: https://issues.apache.org/jira/browse/HBASE-28367 Project: HBase Issue Type: Improvement Reporter: Yash Dodeja Assignee: Yash Dodeja Fix For: 3.0.0-alpha-4 Not setting the proper header values may cause browsers to store pages within their respective caches. On public, shared, or any other non-private computers, a malicious person may search through the browser cache to locate sensitive information cached during another user's session. /logs endpoint contains sensitive information that an attacker can exploit. Any page with sensitive information needs to have the following headers in response: Cache-Control: no-cache, no-store, max-age=0 Pragma: no-cache Expires: -1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
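The required headers listed in the description can be collected in one place; a minimal sketch (NoCacheHeaders is a hypothetical helper for illustration, not an existing HBase class; the real fix would set these on the HttpServletResponse serving /logs):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Headers that prevent browsers and shared caches from storing
// sensitive pages such as the /logs endpoint.
public class NoCacheHeaders {
    public static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Cache-Control", "no-cache, no-store, max-age=0");
        h.put("Pragma", "no-cache");   // HTTP/1.0 compatibility
        h.put("Expires", "-1");        // treated as "already expired"
        return h;
    }
}
```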
[jira] [Created] (HBASE-28311) Few ITs (using MiniMRYarnCluster on hadoop-2) are failing due to NCDFE: com/sun/jersey/core/util/FeaturesAndProperties
Nihal Jain created HBASE-28311: -- Summary: Few ITs (using MiniMRYarnCluster on hadoop-2) are failing due to NCDFE: com/sun/jersey/core/util/FeaturesAndProperties Key: HBASE-28311 URL: https://issues.apache.org/jira/browse/HBASE-28311 Project: HBase Issue Type: Bug Reporter: Nihal Jain Assignee: Nihal Jain Found this while trying to run tests for HBASE-28301 locally. On branch-2, where Hadoop 2 is the default, the specified tests don't even run, as MiniMRYarnCluster itself fails to start. For example, saw this while trying to run IntegrationTestImportTsv:
{code:java}
2024-01-12T01:10:13,486 ERROR [Thread-221 {}] log.Slf4jLog(87): Error starting handlers
java.lang.NoClassDefFoundError: com/sun/jersey/core/util/FeaturesAndProperties
  at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_381]
  at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_381]
  at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_381]
  at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) ~[?:1.8.0_381]
  at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_381]
  at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_381]
  at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_381]
  at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_381]
  at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_381]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_381]
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[?:1.8.0_381]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_381]
  at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_381]
  at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_381]
  at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_381]
  at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) ~[?:1.8.0_381]
  at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_381]
  at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_381]
  at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_381]
  at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_381]
  at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_381]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_381]
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[?:1.8.0_381]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_381]
  at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_381]
  at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_381]
  at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_381]
  at java.net.URLClassLoader.defineClass(URLClassLoader.java:473) ~[?:1.8.0_381]
  at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_381]
  at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_381]
  at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_381]
  at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_381]
  at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_381]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_381]
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[?:1.8.0_381]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_381]
  at java.lang.Class.getDeclaredConstructors0(Native Method) ~[?:1.8.0_381]
  at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671) ~[?:1.8.0_381]
  at java.lang.Class.getDeclaredConstructors(Class.java:2020) ~[?:1.8.0_381]
  at com.google.inject.spi.InjectionPoint.forConstructorOf(InjectionPoint.java:243) ~[guice-3.0.jar:?]
  at com.google.inject.internal.ConstructorBindingImpl.create(ConstructorBindingImpl.java:96) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.createUninitializedBinding(InjectorImpl.java:629) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.createJustInTimeBinding(InjectorImpl.java:845) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.createJustInTimeBindingRecursive(InjectorImpl.java:772) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.getJustInTimeBinding(InjectorImpl.java:256) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.getBindingOrThrow(InjectorImpl.java:205) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.getBinding(InjectorImpl.java:146) ~[guice-3.0.jar:?]
  at com.google.inject.internal.InjectorImpl.getBinding(InjectorImpl.java:66)
[jira] [Created] (HBASE-28301) IntegrationTestImportTsv fails with UnsupportedOperationException
Nihal Jain created HBASE-28301: -- Summary: IntegrationTestImportTsv fails with UnsupportedOperationException Key: HBASE-28301 URL: https://issues.apache.org/jira/browse/HBASE-28301 Project: HBase Issue Type: Bug Reporter: Nihal Jain Assignee: Nihal Jain IntegrationTestImportTsv fails with UnsupportedOperationException
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 337.526 s <<< FAILURE! - in org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[ERROR] org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad Time elapsed: 279.783 s <<< ERROR!
java.lang.UnsupportedOperationException: Unable to find suitable constructor for class org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv$2
  at org.apache.hadoop.hbase.util.ReflectionUtils.findConstructor(ReflectionUtils.java:133)
  at org.apache.hadoop.hbase.util.ReflectionUtils.newInstance(ReflectionUtils.java:98)
  at org.apache.hadoop.hbase.client.RawAsyncTableImpl.getScanner(RawAsyncTableImpl.java:628)
  at org.apache.hadoop.hbase.client.RawAsyncTableImpl.getScanner(RawAsyncTableImpl.java:90)
  at org.apache.hadoop.hbase.client.TableOverAsyncTable.getScanner(TableOverAsyncTable.java:198)
  at org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.doLoadIncrementalHFiles(IntegrationTestImportTsv.java:156)
  at org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.generateAndLoad(IntegrationTestImportTsv.java:206)
  at org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:187)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
  at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
  at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
  at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] IntegrationTestImportTsv.testGenerateAndLoad:187->generateAndLoad:206->doLoadIncrementalHFiles:156 » UnsupportedOperation Unable to find suitable constructor for class org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv$2
{code}
-- This message was sent by Atlassian Jira
[jira] [Created] (HBASE-28300) Refactor GarbageCollectorMXBean instantiation in process*.jsp
Nihal Jain created HBASE-28300: -- Summary: Refactor GarbageCollectorMXBean instantiation in process*.jsp Key: HBASE-28300 URL: https://issues.apache.org/jira/browse/HBASE-28300 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain During review of https://github.com/apache/hbase/pull/5215/ we saw that beans are instantiated based on assumptions about the JVM; it is a good idea to refactor the code so that we don't get errors when JVM assumptions change in the future. Review comment: https://github.com/apache/hbase/pull/5215/files#r1318304462 CC: [~ndimiduk] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28299) Set proper error in response for all usages of HttpServer.isInstrumentationAccessAllowed()
Nihal Jain created HBASE-28299: -- Summary: Set proper error in response for all usages of HttpServer.isInstrumentationAccessAllowed() Key: HBASE-28299 URL: https://issues.apache.org/jira/browse/HBASE-28299 Project: HBase Issue Type: Bug Reporter: Nihal Jain Assignee: Nihal Jain During review of https://github.com/apache/hbase/pull/5215, it was found that we simply return 200 even if instrumentation is not allowed, while at some places we set a proper error. This JIRA is to fix usages of the method and set a proper response code. CC: [~ndimiduk] -- This message was sent by Atlassian Jira (v8.20.10#820010)
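A sketch of the intended pattern: instead of silently returning 200 when instrumentation access is denied, surface an explicit error status. InstrumentationGuard and HttpResponder below are hypothetical stand-ins for the servlet types involved, not HBase APIs:

```java
// Guard that reports a descriptive HTTP error when instrumentation
// access is not allowed, rather than falling through with a bare 200.
public class InstrumentationGuard {
    // Minimal stand-in for the error-reporting part of HttpServletResponse.
    interface HttpResponder {
        void sendError(int code, String msg);
    }

    // Returns true when the caller may proceed to serve the page;
    // otherwise sets a 403 Forbidden with an explanatory message.
    public static boolean checkAccess(boolean allowed, HttpResponder resp) {
        if (!allowed) {
            resp.sendError(403, "Instrumentation access is not allowed");
            return false;
        }
        return true;
    }
}
```

Each servlet using HttpServer.isInstrumentationAccessAllowed() would follow this shape: check, send the error, and return early instead of rendering the page.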
[jira] [Created] (HBASE-28297) IntegrationTestImportTsv is broken
Nihal Jain created HBASE-28297: -- Summary: IntegrationTestImportTsv is broken Key: HBASE-28297 URL: https://issues.apache.org/jira/browse/HBASE-28297 Project: HBase Issue Type: Bug Components: integration tests, test Reporter: Nihal Jain Assignee: Nihal Jain While trying to fix HBASE-28295, found issues in IntegrationTestImportTsv
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.mapreduce.IntegrationTestFileBasedSFTBulkLoad
[INFO] Running org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList
[INFO] Running org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[INFO] Running org.apache.hadoop.hbase.test.IntegrationTestLoadAndVerify
[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.78 s <<< FAILURE! - in org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
[ERROR] org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv Time elapsed: 0.772 s <<< ERROR!
java.lang.ExceptionInInitializerError
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
  at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
  at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
  at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
  at org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv$1.<init>(IntegrationTestImportTsv.java:90)
  at org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.<clinit>(IntegrationTestImportTsv.java:83)
  ... 20 more
[ERROR] org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv Time elapsed: 0.772 s <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
  at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:316)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:240)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:214)
  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:155)
  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
  at
[jira] [Reopened] (HBASE-28295) Few tests are failing due to NCDFE: org/bouncycastle/operator/OperatorCreationException
[ https://issues.apache.org/jira/browse/HBASE-28295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain reopened HBASE-28295: Earlier reported tests have passed but a new one is coming in latest nightly build. Not sure how this was not reported in last build though: [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/24/] [Test Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/24/testReport/] (2 failures / -4) * [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.backup.TestBackupSmallTests.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/24/testReport/junit/org.apache.hadoop.hbase.backup/TestBackupSmallTests/health_checks___yetus_jdk11_hadoop3_checks__/] Reopening for an addendum fix! > Few tests are failing due to NCDFE: > org/bouncycastle/operator/OperatorCreationException > --- > > Key: HBASE-28295 > URL: https://issues.apache.org/jira/browse/HBASE-28295 > Project: HBase > Issue Type: Bug > Components: build, dependencies, hadoop3 >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: 2.6.0, 3.0.0-beta-2 > > > See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/] for > branch-2.6 > * [Test > Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/] > (6 failures / +4) > ** [health checks / yetus jdk11 hadoop3 checks / > org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/] > ** [health checks / yetus jdk11 hadoop3 checks / > 
org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/] > ** [health checks / yetus jdk11 hadoop3 checks / > org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] > ** [health checks / yetus jdk11 hadoop3 checks / > org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] > See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/] for > branch-2 > * [Test > Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/] > (8 failures / +7) > ** [health checks / yetus jdk11 hadoop3 checks / > org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/] > ** [health checks / yetus jdk11 hadoop3 checks / > org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/] > ** [health checks / yetus jdk11 hadoop3 checks / > 
org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] > ** [health checks / yetus jdk11 hadoop3 checks / > org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] > ** [health checks / yetus jdk8 hadoop3 checks / > org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk8_hadoop3_checks___testMRYarnConfigsPopulation/] > ** [health checks / yetus jdk8 hadoop3 checks / >
[jira] [Created] (HBASE-28295) Few tests are failing due to NCDFE: org/bouncycastle/operator/OperatorCreationException
Nihal Jain created HBASE-28295: -- Summary: Few tests are failing due to NCDFE: org/bouncycastle/operator/OperatorCreationException Key: HBASE-28295 URL: https://issues.apache.org/jira/browse/HBASE-28295 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain Fix For: 2.6.0, 3.0.0-beta-2 See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/] for branch-2.6 * [Test Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/] (6 failures / +4) ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/] ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/] ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/23/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] See [https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/] for branch-2 * [Test Result|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/] (8 failures / +7) ** 
[health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk11_hadoop3_checks___testMRYarnConfigsPopulation/] ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk11_hadoop3_checks__/] ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] ** [health checks / yetus jdk11 hadoop3 checks / org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk11_hadoop3_checks__/] ** [health checks / yetus jdk8 hadoop3 checks / org.apache.hadoop.hbase.mapreduce.TestHBaseMRTestingUtility.testMRYarnConfigsPopulation|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.mapreduce/TestHBaseMRTestingUtility/health_checks___yetus_jdk8_hadoop3_checks___testMRYarnConfigsPopulation/] ** [health checks / yetus jdk8 hadoop3 checks / org.apache.hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.replication/TestVerifyReplicationCrossDiffHdfs/health_checks___yetus_jdk8_hadoop3_checks__/] ** [health checks / yetus jdk8 hadoop3 checks / 
org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestMobSecureExportSnapshot/health_checks___yetus_jdk8_hadoop3_checks__/] ** [health checks / yetus jdk8 hadoop3 checks / org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot.(?)|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/956/testReport/junit/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/health_checks___yetus_jdk8_hadoop3_checks__/] Also fails locally for me for master. {code:java} [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Running
[jira] [Created] (HBASE-28275) Flaky test: Fix 'list decommissioned regionservers' in admin2_test.rb
Nihal Jain created HBASE-28275: -- Summary: Flaky test: Fix 'list decommissioned regionservers' in admin2_test.rb Key: HBASE-28275 URL: https://issues.apache.org/jira/browse/HBASE-28275 Project: HBase Issue Type: Bug Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28273) region_status.rb is broken
Nihal Jain created HBASE-28273: -- Summary: region_status.rb is broken Key: HBASE-28273 URL: https://issues.apache.org/jira/browse/HBASE-28273 Project: HBase Issue Type: Sub-task Affects Versions: 2.5.7, 3.0.0-alpha-4, 2.6.0 Reporter: Nihal Jain Assignee: Nihal Jain {{region_status.rb}} is thoroughly broken on all active branches. It needs a comprehensive fix as it has multiple errors. Not sure who uses it though, as it is broken in branch-2 as well. We should maybe deprecate and remove it. CC: [~zhangduo] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28269) Ruby scripts are broken as they reference classes which do not exist
Nihal Jain created HBASE-28269: -- Summary: Ruby scripts are broken as they reference classes which do not exist Key: HBASE-28269 URL: https://issues.apache.org/jira/browse/HBASE-28269 Project: HBase Issue Type: Bug Affects Versions: 3.0.0-alpha-4 Reporter: Nihal Jain Assignee: Nihal Jain Some of the ruby scripts are broken in 3.x as they reference non-existent classes: * {{org.apache.hadoop.hbase.client.HBaseAdmin}} * {{org.apache.hadoop.hbase.HTableDescriptor}} The following 4 scripts are failing:
{code:java}
NameError: missing class name org.apache.hadoop.hbase.client.HBaseAdmin
  method_missing at org/jruby/javasupport/JavaPackage.java:253
  at region_status.rb:50
{code}
{code:java}
NameError: missing class name org.apache.hadoop.hbase.HTableDescriptor
  method_missing at org/jruby/javasupport/JavaPackage.java:253
  at replication/copy_tables_desc.rb:30
{code}
{code:java}
NameError: missing class name org.apache.hadoop.hbase.client.HBaseAdmin
  method_missing at org/jruby/javasupport/JavaPackage.java:253
  at draining_servers.rb:28
{code}
{code:java}
NameError: missing class name org.apache.hadoop.hbase.client.HBaseAdmin
  method_missing at org/jruby/javasupport/JavaPackage.java:253
  at shutdown_regionserver.rb:27
{code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28250) Bump jruby to 9.4.5.0 and related joni and jcodings
Nihal Jain created HBASE-28250: -- Summary: Bump jruby to 9.4.5.0 and related joni and jcodings Key: HBASE-28250 URL: https://issues.apache.org/jira/browse/HBASE-28250 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain Given that branch-2, including branch-2.6, is already on 9.3.9.0, we should bump to at least 9.3.13.0. At the very least, this will remove the bundled *org.bouncycastle : bcprov-jdk18on : 1.71*, which has [CVE-2023-33201|https://nvd.nist.gov/vuln/detail/CVE-2023-33201], from our classpath. As a follow-up, we can try to bump to the latest 9.4.x line, if others are fine with this. Please let me know what others think. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28249) Bump jruby to 9.3.13.0 and related joni and jcodings to 2.2.1 and 1.0.58 respectively
Nihal Jain created HBASE-28249: -- Summary: Bump jruby to 9.3.13.0 and related joni and jcodings to 2.2.1 and 1.0.58 respectively Key: HBASE-28249 URL: https://issues.apache.org/jira/browse/HBASE-28249 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain Given that branch-2, including branch-2.6, is already on 9.3.9.0, we should bump to at least 9.3.13.0. At the very least, this will remove the bundled *org.bouncycastle : bcprov-jdk18on : 1.71*, which has CVE-2023-33201, from our classpath. As a follow-up, we can try to bump to the latest 9.4.x line, if others are fine with this. Please let me know what others think. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28245) Sync internal protobuf version for hbase to be same as hbase-thirdparty
Nihal Jain created HBASE-28245: -- Summary: Sync internal protobuf version for hbase to be same as hbase-thirdparty Key: HBASE-28245 URL: https://issues.apache.org/jira/browse/HBASE-28245 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28243) Bump jackson version to 2.15.2
Nihal Jain created HBASE-28243: -- Summary: Bump jackson version to 2.15.2 Key: HBASE-28243 URL: https://issues.apache.org/jira/browse/HBASE-28243 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain We should bump jackson to 2.15.2, as hbase-thirdparty already moved to this version in HBASE-28093. Also, 2.14.1 has [sonatype-2022-6438|https://github.com/FasterXML/jackson-core/issues/861]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28173) Make use of assertThrows in TestShadeSaslAuthenticationProvider
Nihal Jain created HBASE-28173: -- Summary: Make use of assertThrows in TestShadeSaslAuthenticationProvider Key: HBASE-28173 URL: https://issues.apache.org/jira/browse/HBASE-28173 Project: HBase Issue Type: Task Components: security, test Reporter: Duo Zhang Assignee: Nihal Jain The testNegativeAuthentication method is completely different between master/branch-3 and branch-2.x; we should try to align the test across these branches. -- This message was sent by Atlassian Jira (v8.20.10#820010)
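Aligning the branches would presumably mean moving branch-2.x to the assertThrows style. As a dependency-free illustration of that pattern, here is a sketch with a hand-rolled stand-in for JUnit's `Assert.assertThrows` (the helper and the example exception are hypothetical, not HBase or JUnit code):

```java
public class AssertThrowsDemo {
    interface ThrowingRunnable { void run() throws Throwable; }

    // Minimal stand-in for org.junit.Assert.assertThrows: runs the code,
    // returns the thrown exception if it matches the expected type,
    // and fails with AssertionError otherwise.
    static <T extends Throwable> T assertThrows(Class<T> expected, ThrowingRunnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("unexpected exception type: " + t);
        }
        throw new AssertionError("expected " + expected.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // e.g. a negative-authentication path is expected to fail:
        RuntimeException e = assertThrows(RuntimeException.class,
            () -> { throw new RuntimeException("bad credentials"); });
        System.out.println("caught: " + e.getMessage());
    }
}
```

The advantage of this style over try/catch/fail blocks is that the expected exception is captured and can be asserted on directly, which keeps the test bodies identical across branches.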
[jira] [Resolved] (HBASE-28160) Build fails with Hadoop 3.3.5 and higher
[ https://issues.apache.org/jira/browse/HBASE-28160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-28160. Resolution: Duplicate Hi [~larsfrancke], this looks like a duplicate issue and has already been fixed by HBASE-27860. The fix was released as part of 2.4.18. Also, I could not reproduce the master failure: did you run with {{-Dhadoop.profile=3.0}} by any chance? Could you try running the command below for master: {code:java} mvn clean install -DskipTests -Phadoop-3.0 -Dhadoop-three.version=3.3.5 {code} Feel free to create another JIRA if {{(Found Banned Dependency: org.bouncycastle:bcprov-jdk15on:jar:1.52)}} is still thrown. > Build fails with Hadoop 3.3.5 and higher > > > Key: HBASE-28160 > URL: https://issues.apache.org/jira/browse/HBASE-28160 > Project: HBase > Issue Type: Bug >Affects Versions: 2.4.17 >Reporter: Lars Francke >Priority: Minor > > https://issues.apache.org/jira/browse/HADOOP-15983 changed dependencies and > that makes our {{check-jar-contents-for-stuff-with-hadoop}} check fail: > Excerpt: > {noformat} > [INFO] --- exec-maven-plugin:1.6.0:exec > (check-jar-contents-for-stuff-with-hadoop) @ > hbase-shaded-with-hadoop-check-invariants --- > [ERROR] Found artifact with unexpected contents: > '/home/lars/Downloads/hbase/hbase-2.4.17-src/hbase-shaded/hbase-shaded-client/target/hbase-shaded-client-2.4.17.jar' > Please check the following and either correct the build or update > the allowed list with reasoning. 
> com/ > com/sun/ > com/sun/jersey/ > com/sun/jersey/json/ > com/sun/jersey/json/impl/ > com/sun/jersey/json/impl/reader/ > com/sun/jersey/json/impl/reader/JsonXmlEvent$Attribute.class > com/sun/jersey/json/impl/reader/JsonXmlStreamReader$1.class > com/sun/jersey/json/impl/reader/XmlEventProvider$1.class > com/sun/jersey/json/impl/reader/NaturalNotationEventProvider.class > com/sun/jersey/json/impl/reader/XmlEventProvider.class > com/sun/jersey/json/impl/reader/XmlEventProvider$ProcessingInfo.class > com/sun/jersey/json/impl/reader/StartElementEvent.class > com/sun/jersey/json/impl/reader/CharactersEvent.class > com/sun/jersey/json/impl/reader/JacksonRootAddingParser$1.class > com/sun/jersey/json/impl/reader/EndElementEvent.class > com/sun/jersey/json/impl/reader/JsonXmlStreamReader.class > com/sun/jersey/json/impl/reader/StaxLocation.class > com/sun/jersey/json/impl/reader/JsonNamespaceContext.class > com/sun/jersey/json/impl/reader/JsonXmlEvent.class > com/sun/jersey/json/impl/reader/JacksonRootAddingParser.class > com/sun/jersey/json/impl/reader/StartDocumentEvent.class > com/sun/jersey/json/impl/reader/MappedNotationEventProvider.class > com/sun/jersey/json/impl/reader/EndDocumentEvent.class > com/sun/jersey/json/impl/reader/JsonFormatException.class > com/sun/jersey/json/impl/reader/XmlEventProvider$CachedJsonParser.class > com/sun/jersey/json/impl/reader/JacksonRootAddingParser$State.class > com/sun/jersey/json/impl/JaxbRiXmlStructure.class > com/sun/jersey/json/impl/ImplMessages.class > com/sun/jersey/json/impl/JSONMarshallerImpl.class > com/sun/jersey/json/impl/NameUtil.class > com/sun/jersey/json/impl/FilteringInputStream.class > com/sun/jersey/json/impl/JaxbProvider.class > [] > {noformat} > I'm afraid I'm a bit at a loss with the current Maven build system as to what > the actual fix would be. > I tested it against 2.4.17 as well as master as of today. 
Master already > fails in an earlier step ({{Found Banned Dependency: > org.bouncycastle:bcprov-jdk15on:jar:1.52}}) which I assume is a separate > issue but I further assume that it would also fail at this step if it were to > get this far. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28106) TestShadeSaslAuthenticationProvider fails for branch-2.5 and branch-2.4
Nihal Jain created HBASE-28106: -- Summary: TestShadeSaslAuthenticationProvider fails for branch-2.5 and branch-2.4 Key: HBASE-28106 URL: https://issues.apache.org/jira/browse/HBASE-28106 Project: HBase Issue Type: Bug Reporter: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28102) [hbase-thirdparty] Bump hbase.stable.version to 2.5.5 in hbase-noop-htrace
Nihal Jain created HBASE-28102: -- Summary: [hbase-thirdparty] Bump hbase.stable.version to 2.5.5 in hbase-noop-htrace Key: HBASE-28102 URL: https://issues.apache.org/jira/browse/HBASE-28102 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28089) Upgrade BouncyCastle to fix CVE-2023-33201
Nihal Jain created HBASE-28089: -- Summary: Upgrade BouncyCastle to fix CVE-2023-33201 Key: HBASE-28089 URL: https://issues.apache.org/jira/browse/HBASE-28089 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain HBase has a dependency on BouncyCastle 1.70, which is vulnerable to [CVE-2023-33201|https://nvd.nist.gov/vuln/detail/CVE-2023-33201] Advisory: [https://github.com/bcgit/bc-java/wiki/CVE-2023-33201] This JIRA's goal is to fix the following: * Upgrade to v1.76, the latest version. ** This requires bcprov-jdk15on to be replaced with bcprov-jdk18on ** See [https://www.bouncycastle.org/latest_releases.html] *** {quote}*Java Version Details* With the arrival of Java 15, jdk15 is not quite as unambiguous as it was. The *jdk18on* jars are compiled to work with *anything* from Java 1.8 up. They are also multi-release jars so do support some features that were introduced in Java 9, Java 11, and Java 15. If you have issues with multi-release jars see the jdk15to18 release jars below. *Packaging Change (users of 1.70 or earlier):* BC 1.71 changed the jdk15on jars to jdk18on so the base has now moved to Java 8. For earlier JVMs, or containers/applications that cannot cope with multi-release jars, you should now use the jdk15to18 jars. {quote} * Exclude bcprov-jdk15on from everywhere else to avoid conflicts with bcprov-jdk18on -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28066) Move TestShellRSGroups.java inside /src/test/java
Nihal Jain created HBASE-28066: -- Summary: Move TestShellRSGroups.java inside /src/test/java Key: HBASE-28066 URL: https://issues.apache.org/jira/browse/HBASE-28066 Project: HBase Issue Type: Test Reporter: Nihal Jain Assignee: Nihal Jain Just noticed that {{TestShellRSGroups.java}} is at {{hbase-shell/src/test/rsgroup/org/apache/hadoop/hbase/client/rsgroup/TestShellRSGroups.java}}, but ideally it should be at {{hbase-shell/src/test/java/org/apache/hadoop/hbase/client/rsgroup/TestShellRSGroups.java}} instead. Because of this misplacement, spotless skipped the file, so we also need to run spotless on it. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27976) [hbase-operator-tools] Add spotless for hbase-operator-tools
[ https://issues.apache.org/jira/browse/HBASE-27976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-27976. Fix Version/s: hbase-operator-tools-1.3.0 Release Note: Before creating a PR for the hbase-operator-tools repo, developers can now run 'mvn spotless:apply' to fix code formatting issues. Resolution: Fixed All the sub-tasks are done; marking the Jira as resolved. > [hbase-operator-tools] Add spotless for hbase-operator-tools > > > Key: HBASE-27976 > URL: https://issues.apache.org/jira/browse/HBASE-27976 > Project: HBase > Issue Type: Umbrella > Components: build, hbase-operator-tools >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: hbase-operator-tools-1.3.0 > > > The HBase code repo has a spotless plugin to check and fix formatting issues > seamlessly, making it easier for developers to fix issues when the build > fails due to code formatting. > The goal of this Jira is to integrate spotless with hbase-operator-tools. > * As a 1st step, will try to add a plugin to run spotless check via maven > * Next, will fix all spotless issues as part of the same task or another (as > the community suggests) > * Finally, will integrate the same into the pre-commit build to not let PRs with > spotless issues get in. (Would need some support/direction on how to do this > as I am not much familiar with Jenkins and related code.) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28057) [hbase-operator-tools] Run spotless:apply and fix any existing spotless issues
Nihal Jain created HBASE-28057: -- Summary: [hbase-operator-tools] Run spotless:apply and fix any existing spotless issues Key: HBASE-28057 URL: https://issues.apache.org/jira/browse/HBASE-28057 Project: HBase Issue Type: Sub-task Components: build, hbase-operator-tools Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28054) [hbase-connectors] Add spotless in hbase-connectors pre commit build
Nihal Jain created HBASE-28054: -- Summary: [hbase-connectors] Add spotless in hbase-connectors pre commit build Key: HBASE-28054 URL: https://issues.apache.org/jira/browse/HBASE-28054 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28035) ConnectionFactory.createConnection does not work with anything except ThreadPoolExecutor
Nihal Jain created HBASE-28035: -- Summary: ConnectionFactory.createConnection does not work with anything except ThreadPoolExecutor Key: HBASE-28035 URL: https://issues.apache.org/jira/browse/HBASE-28035 Project: HBase Issue Type: Bug Reporter: Nihal Jain This looks like a regression: org.apache.hadoop.hbase.client.ConnectionFactory#createConnection(org.apache.hadoop.conf.Configuration, java.util.concurrent.ExecutorService), even though it accepts an `ExecutorService`, has stopped working for `ForkJoinPool` (since HBASE-22244) and throws `java.lang.ClassCastException: java.util.concurrent.ForkJoinPool cannot be cast to java.util.concurrent.ThreadPoolExecutor`. I have been able to write a UT to verify this: the test passes on branch-2.1, which does not have the above change, and fails on branch-2, which does. It is also worth noting that the issue does not exist on master; I think that is because HBASE-21723 removed `ConnectionImplementation` from master. -- This message was sent by Atlassian Jira (v8.20.10#820010)
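The root cause described above, an API that accepts an `ExecutorService` but internally assumes a `ThreadPoolExecutor`, can be reproduced with plain JDK classes. A minimal, JDK-only sketch of the failure mode (this is an illustration, not HBase's actual code; `corePoolSize` is a hypothetical helper standing in for wherever the unconditional cast happens):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ThreadPoolExecutor;

public class PoolCastDemo {
    // Mirrors the problematic pattern: the signature accepts any
    // ExecutorService, but the body casts to ThreadPoolExecutor.
    static int corePoolSize(ExecutorService pool) {
        return ((ThreadPoolExecutor) pool).getCorePoolSize();
    }

    public static void main(String[] args) {
        // Executors.newFixedThreadPool really returns a ThreadPoolExecutor,
        // so this call succeeds.
        ExecutorService tpe = Executors.newFixedThreadPool(2);
        System.out.println(corePoolSize(tpe));

        // A ForkJoinPool is a valid ExecutorService but not a
        // ThreadPoolExecutor, so the cast blows up at runtime.
        ExecutorService fjp = new ForkJoinPool(2);
        try {
            corePoolSize(fjp);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }
        tpe.shutdown();
        fjp.shutdown();
    }
}
```

The compiler cannot catch this because the cast is legal at compile time; only a caller passing a non-`ThreadPoolExecutor` pool exposes it, which is why a UT with a `ForkJoinPool` is the natural regression test.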
[jira] [Created] (HBASE-28034) Rewrite hbck2 documentation using ChatGPT
Nihal Jain created HBASE-28034: -- Summary: Rewrite hbck2 documentation using ChatGPT Key: HBASE-28034 URL: https://issues.apache.org/jira/browse/HBASE-28034 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain Just a thought, could we re-write the operator tools [README.md|https://github.com/apache/hbase-operator-tools/blob/master/README.md] using ChatGPT and make it better? A sample paragraph re-written by ChatGPT is as follows: Original: {quote} h3. Some General Principals When making repair, make sure hbase:meta is consistent first before you go about fixing any other issue type such as a filesystem deviance. Deviance in the filesystem or problems with assign should be addressed after the hbase:meta has been put in order. If hbase:meta is out of whack, the Master cannot make proper placements when adopting orphan filesystem data or making region assignments. Other general principles to keep in mind include a Region can not be assigned if it is in _CLOSING_ state (or the inverse, unassigned if in _OPENING_ state) without first transitioning via {_}CLOSED{_}: Regions must always move from {_}CLOSED{_}, to {_}OPENING{_}, to {_}OPEN{_}, and then to {_}CLOSING{_}, {_}CLOSED{_}. When making repair, do fixup of a table-at-a-time. Also, if a table is {_}DISABLED{_}, you cannot assign a Region. In the Master logs, you will see that the Master will report that the assign has been skipped because the table is {_}DISABLED{_}. You may want to assign a Region because it is currently in the _OPENING_ state and you want it in the _CLOSED_ state so it agrees with the table's _DISABLED_ state. In this situation, you may have to temporarily set the table status to {_}ENABLED{_}, just so you can do the assign, and then set it back again after the unassign. _HBCK2_ has facility to allow you do this. See the _HBCK2_ usage output. What follows is a mix of notes and prescription that comes of experience running hbase-2.x so far. 
The root issues that brought on states described below has been fixed in later versions of hbase so upgrade if you can so as to avoid scenarios described. {quote} Rephrased: {quote}*Some Basic Principles* Ensure hbase:meta's consistency before attempting to fix any other type of problem, such as filesystem deviations. Issues related to filesystem or assignment should be addressed once hbase:meta is sorted out. If hbase:meta is awry, the Master cannot accurately allocate orphan filesystem data or region assignments. Remember that a Region cannot be assigned if it is in a CLOSING state (or conversely, unassigned if in an OPENING state) without going through a CLOSED state first. Regions must always progress from CLOSED, to OPENING, to OPEN, to CLOSING, and then back to CLOSED. Make repairs table-by-table. Additionally, if a table is DISABLED, you cannot assign a Region. The Master logs will indicate that the assignment has been bypassed due to the table's DISABLED status. You might want to assign a Region because it is currently in the OPENING state, but you want it in the CLOSED state to match the DISABLED state of the table. In such cases, you might need to briefly change the table status to ENABLED to make the assignment, then switch it back after the unassignment. HBCK2 provides a facility for this. Please refer to the HBCK2 usage output. The following notes and instructions come from the experience of running hbase-2.x so far. The underlying issues causing the states described below have been resolved in later versions of hbase, so upgrading is recommended to avoid these scenarios. {quote} Is this worth the effort? Or do others feel current doc is good and does not need any refinement? It may require some effort, as we may only start with first commit with untouched document generated by ChatGPT, but then the draft would need to be worked upon, based on some proofreading by the contributor and reviewers. Curious to know how others feel. 
Also, Apache has some guidelines around the use of generative AI tools at [https://www.apache.org/legal/generative-tooling.html] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28032) Fix ChaosMonkey documentation code block rendering
Nihal Jain created HBASE-28032: -- Summary: Fix ChaosMonkey documentation code block rendering Key: HBASE-28032 URL: https://issues.apache.org/jira/browse/HBASE-28032 Project: HBase Issue Type: Task Components: documentation Reporter: Nihal Jain Assignee: Nihal Jain The code blocks in the ChaosMonkey documentation are not rendered correctly. Fix them and also add a few more examples. See [https://hbase.apache.org/book.html#_chaosmonkey_without_ssh] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28026) DefaultMetricsSystemInitializer should be called during HMaster or HRegionServer creation
Nihal Jain created HBASE-28026: -- Summary: DefaultMetricsSystemInitializer should be called during HMaster or HRegionServer creation Key: HBASE-28026 URL: https://issues.apache.org/jira/browse/HBASE-28026 Project: HBase Issue Type: Bug Components: metrics Reporter: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28007) [hbase-connectors] Manually fix javadoc messed up by spotless
Nihal Jain created HBASE-28007: -- Summary: [hbase-connectors] Manually fix javadoc messed up by spotless Key: HBASE-28007 URL: https://issues.apache.org/jira/browse/HBASE-28007 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28006) [hbase-connectors] Run spotless:apply on code base
Nihal Jain created HBASE-28006: -- Summary: [hbase-connectors] Run spotless:apply on code base Key: HBASE-28006 URL: https://issues.apache.org/jira/browse/HBASE-28006 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27980) Sync the hbck2 README page with hbck2 command help output
Nihal Jain created HBASE-27980: -- Summary: Sync the hbck2 README page with hbck2 command help output Key: HBASE-27980 URL: https://issues.apache.org/jira/browse/HBASE-27980 Project: HBase Issue Type: Task Components: hbase-operator-tools, hbck2 Reporter: Nihal Jain Assignee: Nihal Jain There are major differences between the hbck2 [README.md|https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md] and the command help output, hence we should sync them across all commands. The README should match the output of the hbck2 help command for ease of maintenance. Also, a few new commands like {{recoverUnknown}} and {{regionInfoMismatch}} are missing, making users unaware of their existence. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27906) Fix the javadoc for SyncFutureCache
[ https://issues.apache.org/jira/browse/HBASE-27906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-27906. Fix Version/s: 4.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Thanks for your first contribution [~dimitrios.efthymiou]. The PR has been merged to codebase. > Fix the javadoc for SyncFutureCache > --- > > Key: HBASE-27906 > URL: https://issues.apache.org/jira/browse/HBASE-27906 > Project: HBase > Issue Type: Improvement > Components: documentation >Reporter: Duo Zhang >Assignee: Dimitrios Efthymiou >Priority: Minor > Fix For: 4.0.0-alpha-1 > > > It does not have any html markers so spotless messed it up... > We should add html markers so it could keep the format after 'spotless:apply' > {code} > /** > * A cache of {@link SyncFuture}s. This class supports two methods > * {@link SyncFutureCache#getIfPresentOrNew()} and {@link > SyncFutureCache#offer()}. > * > * Usage pattern: > * > * > * SyncFuture sf = syncFutureCache.getIfPresentOrNew(); > * sf.reset(...); > * // Use the sync future > * finally: syncFutureCache.offer(sf); > * > * > * Offering the sync future back to the cache makes it eligible for reuse > within the same thread > * context. Cache keyed by the accessing thread instance and automatically > invalidated if it remains > * unused for {@link SyncFutureCache#SYNC_FUTURE_INVALIDATION_TIMEOUT_MINS} > minutes. > */ > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27978) [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit build
Nihal Jain created HBASE-27978: -- Summary: [hbase-operator-tools] Add spotless in hbase-operator-tools pre-commit build Key: HBASE-27978 URL: https://issues.apache.org/jira/browse/HBASE-27978 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27977) [hbase-operator-tools] Add spotless plugin to hbase-operator-tools pom
Nihal Jain created HBASE-27977: -- Summary: [hbase-operator-tools] Add spotless plugin to hbase-operator-tools pom Key: HBASE-27977 URL: https://issues.apache.org/jira/browse/HBASE-27977 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27976) [hbase-operator-tools] Add spotless for hbase-operator-tools
Nihal Jain created HBASE-27976: -- Summary: [hbase-operator-tools] Add spotless for hbase-operator-tools Key: HBASE-27976 URL: https://issues.apache.org/jira/browse/HBASE-27976 Project: HBase Issue Type: Task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27966) HBase Master/RS JVM metrics populated incorrectly
Nihal Jain created HBASE-27966: -- Summary: HBase Master/RS JVM metrics populated incorrectly Key: HBASE-27966 URL: https://issues.apache.org/jira/browse/HBASE-27966 Project: HBase Issue Type: Bug Components: metrics Affects Versions: 2.0.0-alpha-4 Reporter: Nihal Jain Assignee: Nihal Jain HBase Master/RS JVM metrics are populated incorrectly due to a regression, causing the Ambari metrics system to be unable to capture them. Based on my analysis, the issue affects all releases after 2.0.0-alpha-4 and seems to be caused by HBASE-18846. I have compared the JVM metrics across 3 versions of HBase and am attaching the results below: HBase: 1.1.2 {code:java} { "name" : "Hadoop:service=HBase,name=JvmMetrics", "modelerType" : "JvmMetrics", "tag.Context" : "jvm", "tag.ProcessName" : "RegionServer", "tag.SessionId" : "", "tag.Hostname" : "HOSTNAME", "MemNonHeapUsedM" : 196.05664, "MemNonHeapCommittedM" : 347.60547, "MemNonHeapMaxM" : 4336.0, "MemHeapUsedM" : 7207.315, "MemHeapCommittedM" : 66080.0, "MemHeapMaxM" : 66080.0, "MemMaxM" : 66080.0, "GcCount" : 3953, "GcTimeMillis" : 662520, "ThreadsNew" : 0, "ThreadsRunnable" : 214, "ThreadsBlocked" : 0, "ThreadsWaiting" : 626, "ThreadsTimedWaiting" : 78, "ThreadsTerminated" : 0, "LogFatal" : 0, "LogError" : 0, "LogWarn" : 0, "LogInfo" : 0 }, {code} HBase 2.0.2 {code:java} { "name" : "Hadoop:service=HBase,name=JvmMetrics", "modelerType" : "JvmMetrics", "tag.Context" : "jvm", "tag.ProcessName" : "IO", "tag.SessionId" : "", "tag.Hostname" : "HOSTNAME", "MemNonHeapUsedM" : 203.86688, "MemNonHeapCommittedM" : 740.6953, "MemNonHeapMaxM" : -1.0, "MemHeapUsedM" : 14879.477, "MemHeapCommittedM" : 31744.0, "MemHeapMaxM" : 31744.0, "MemMaxM" : 31744.0, "GcCount" : 75922, "GcTimeMillis" : 5134691, "ThreadsNew" : 0, "ThreadsRunnable" : 90, "ThreadsBlocked" : 3, "ThreadsWaiting" : 158, "ThreadsTimedWaiting" : 36, "ThreadsTerminated" : 0, "LogFatal" : 0, "LogError" : 0, "LogWarn" : 0, "LogInfo" : 0 }, {code} HBase: 2.5.2 {code:java} { 
"name": "Hadoop:service=HBase,name=JvmMetrics", "modelerType": "JvmMetrics", "tag.Context": "jvm", "tag.ProcessName": "IO", "tag.SessionId": "", "tag.Hostname": "HOSTNAME", "MemNonHeapUsedM": 192.9798, "MemNonHeapCommittedM": 198.4375, "MemNonHeapMaxM": -1.0, "MemHeapUsedM": 773.23584, "MemHeapCommittedM": 1004.0, "MemHeapMaxM": 1024.0, "MemMaxM": 1024.0, "GcCount": 2048, "GcTimeMillis": 25440, "ThreadsNew": 0, "ThreadsRunnable": 22, "ThreadsBlocked": 0, "ThreadsWaiting": 121, "ThreadsTimedWaiting": 49, "ThreadsTerminated": 0, "LogFatal": 0, "LogError": 0, "LogWarn": 0, "LogInfo": 0 }, {code} It can be observed that, from 2.0.x onwards, the field "tag.ProcessName" is populated as "IO" instead of the expected "RegionServer" or "Master". Ambari relies on this process-name field to create metrics such as 'jvm.RegionServer.JvmMetrics.GcTimeMillis'. See [code.|https://github.com/apache/ambari/blob/2ec4b055d99ec84c902da16dd57df91d571b48d6/ambari-server/src/main/java/org/apache/ambari/server/controller/metrics/timeline/AMSPropertyProvider.java#L722] But post 2.0.x the field is populated as 'IO', so a metric named 'jvm.JvmMetrics.GcTimeMillis' is created instead of the expected 'jvm.RegionServer.JvmMetrics.GcTimeMillis', mixing the metric up with various other metrics coming from the RS, Master, Spark executors etc. running on the same host. *Expected* The field "tag.ProcessName" should be populated as "RegionServer" or "Master" instead of "IO". *Actual* The field "tag.ProcessName" is populated as "IO" instead of the expected "RegionServer" or "Master", causing incorrect metrics to be published by Ambari, mixing up all metrics and raising various alerts around JVM metrics. -- This message was sent by Atlassian Jira (v8.20.10#820010)
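To make the collision concrete, here is a small, self-contained Java sketch of the naming behaviour described above. It is illustrative only, not Ambari's actual code (whose logic lives in AMSPropertyProvider): an unrecognized process name is dropped from the metric path, so metrics from different processes on one host collide under the same name.

```java
// Illustrative sketch (not Ambari's real implementation): derive a metric
// name from the JvmMetrics bean's tag.ProcessName, as described above.
public class MetricNameSketch {
  static String metricName(String processName, String metric) {
    // Known HBase process names are inserted into the metric path; an
    // unrecognized value such as "IO" is dropped, collapsing the name.
    if ("Master".equals(processName) || "RegionServer".equals(processName)) {
      return "jvm." + processName + ".JvmMetrics." + metric;
    }
    return "jvm.JvmMetrics." + metric;
  }

  public static void main(String[] args) {
    // Pre-2.0 behaviour: process name preserved in the metric path.
    System.out.println(metricName("RegionServer", "GcTimeMillis"));
    // Post-2.0 regression: tag.ProcessName is "IO", so the process is lost
    // and Master/RS/other JVMs on the same host share one metric name.
    System.out.println(metricName("IO", "GcTimeMillis"));
  }
}
```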
[jira] [Created] (HBASE-27961) [HBCK2] Running assigns/unassigns command with large number of files/regions throws CallTimeoutException
Nihal Jain created HBASE-27961: -- Summary: [HBCK2] Running assigns/unassigns command with large number of files/regions throws CallTimeoutException Key: HBASE-27961 URL: https://issues.apache.org/jira/browse/HBASE-27961 Project: HBase Issue Type: Bug Components: hbck2 Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27811) Enable cache control for logs endpoint and set max age as 0
[ https://issues.apache.org/jira/browse/HBASE-27811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-27811. Fix Version/s: 3.0.0-alpha-4 Hadoop Flags: Reviewed Resolution: Fixed Will reopen, if back port jira is raised. > Enable cache control for logs endpoint and set max age as 0 > --- > > Key: HBASE-27811 > URL: https://issues.apache.org/jira/browse/HBASE-27811 > Project: HBase > Issue Type: Improvement >Reporter: Yash Dodeja >Assignee: Yash Dodeja >Priority: Minor > Fix For: 3.0.0-alpha-4 > > > Not setting the proper header values may cause browsers to store pages within > their respective caches. On public, shared, or any other non-private > computers, a malicious person may search through the browser cache to locate > sensitive information cached during another user's session. > /logs endpoint contains sensitive information that an attacker can exploit. > Any page with sensitive information needs to have the following headers in > response: > Cache-Control: no-cache, no-store, max-age=0 > Pragma: no-cache > Expires: -1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
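As a rough illustration of the fix described above, a handler for a sensitive endpoint such as /logs would set all three headers on every response before writing the body. This minimal sketch uses the JDK's built-in `com.sun.net.httpserver.Headers` type to stand in for a response header map; the actual HBase change presumably works through the servlet API instead.

```java
import com.sun.net.httpserver.Headers;

// Sketch: apply the no-store caching policy from the issue description
// to a response header map, so browsers do not cache sensitive pages.
public class NoCacheHeaders {
  static void applyNoCache(Headers h) {
    h.set("Cache-Control", "no-cache, no-store, max-age=0");
    h.set("Pragma", "no-cache");
    h.set("Expires", "-1");
  }

  public static void main(String[] args) {
    Headers h = new Headers();
    applyNoCache(h);
    System.out.println(h.getFirst("Cache-Control"));
  }
}
```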
[jira] [Resolved] (HBASE-27815) Add support for process metrics servlet in REST InfoServer
[ https://issues.apache.org/jira/browse/HBASE-27815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-27815. Resolution: Duplicate > Add support for process metrics servlet in REST InfoServer > -- > > Key: HBASE-27815 > URL: https://issues.apache.org/jira/browse/HBASE-27815 > Project: HBase > Issue Type: Sub-task > Components: REST >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > > Unlike other HBase Master/RS Info Servers, the REST Server UI does not provide a > way to get process metrics like threads, GC collectors etc. This task is to add > the same in HBase REST. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27815) Add support for process metrics servlet in REST InfoServer
Nihal Jain created HBASE-27815: -- Summary: Add support for process metrics servlet in REST InfoServer Key: HBASE-27815 URL: https://issues.apache.org/jira/browse/HBASE-27815 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27816) Support option to auto bind to an available port for REST Info Server
Nihal Jain created HBASE-27816: -- Summary: Support option to auto bind to an available port for REST Info Server Key: HBASE-27816 URL: https://issues.apache.org/jira/browse/HBASE-27816 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27814) Add support for dump servlet in REST InfoServer
Nihal Jain created HBASE-27814: -- Summary: Add support for dump servlet in REST InfoServer Key: HBASE-27814 URL: https://issues.apache.org/jira/browse/HBASE-27814 Project: HBase Issue Type: Sub-task Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27813) REST Info Server improvements
Nihal Jain created HBASE-27813: -- Summary: REST Info Server improvements Key: HBASE-27813 URL: https://issues.apache.org/jira/browse/HBASE-27813 Project: HBase Issue Type: Umbrella Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-20639) Implement permission checking through AccessController instead of RSGroupAdminEndpoint
[ https://issues.apache.org/jira/browse/HBASE-20639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-20639. Resolution: Duplicate > Implement permission checking through AccessController instead of > RSGroupAdminEndpoint > -- > > Key: HBASE-20639 > URL: https://issues.apache.org/jira/browse/HBASE-20639 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Nihal Jain >Priority: Major > Attachments: HBASE-20639.master.001.patch, > HBASE-20639.master.002.patch, HBASE-20639.master.002.patch > > > Currently permission checking for various RS group operations is done via > RSGroupAdminEndpoint. > e.g. in RSGroupAdminServiceImpl#moveServers() : > {code} > checkPermission("moveServers"); > groupAdminServer.moveServers(hostPorts, request.getTargetGroup()); > {code} > The practice in remaining parts of hbase is to perform permission checking > within AccessController. > Now that observer hooks for RS group operations are in right place, we should > follow best practice and move permission checking to AccessController. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27804) [HBCK2] Correct sample usage of -skip with assign in HBCK2 docs
Nihal Jain created HBASE-27804: -- Summary: [HBCK2] Correct sample usage of -skip with assign in HBCK2 docs Key: HBASE-27804 URL: https://issues.apache.org/jira/browse/HBASE-27804 Project: HBase Issue Type: Task Components: hbase-operator-tools, hbck2 Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27802) Manage static javascript resources programmatically
Nihal Jain created HBASE-27802: -- Summary: Manage static javascript resources programmatically Key: HBASE-27802 URL: https://issues.apache.org/jira/browse/HBASE-27802 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27792) Guard Master/RS Dump Servlet behind admin walls
Nihal Jain created HBASE-27792: -- Summary: Guard Master/RS Dump Servlet behind admin walls Key: HBASE-27792 URL: https://issues.apache.org/jira/browse/HBASE-27792 Project: HBase Issue Type: Improvement Components: security, UI Reporter: Nihal Jain Assignee: Nihal Jain Currently, RSDumpServlet and MasterDumpServlet do not check whether the user has privileges to access instrumentation servlets. This is unlike other servlets such as ProfileServlet, ConfServlet, JMXJsonServlet etc., which are guarded by admin checks. The goal of this JIRA is to add a similar check for the RS and Master Dump Servlets. -- This message was sent by Atlassian Jira (v8.20.10#820010)
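A minimal sketch of the proposed guard, with hypothetical method names: the real implementation would reuse the same admin/instrumentation-access check that ProfileServlet, ConfServlet and JMXJsonServlet already consult, and reject non-admin requests before any dump content is produced.

```java
// Sketch of the guard described above (hypothetical names, not the actual
// HBase patch): the dump servlet checks admin access first and returns
// HTTP 403 to unauthorized users instead of the instrumentation dump.
public class DumpServletGuardSketch {
  static int serveDump(boolean userIsAdmin, StringBuilder body) {
    if (!userIsAdmin) {
      body.append("Unauthorized: admin access is required for this page.");
      return 403; // HTTP Forbidden, no dump content leaked
    }
    body.append("...master/regionserver dump...");
    return 200;
  }

  public static void main(String[] args) {
    StringBuilder body = new StringBuilder();
    System.out.println(serveDump(false, body)); // prints 403
  }
}
```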
[jira] [Created] (HBASE-27791) Upgrade Vega and its related JS libraries
Nihal Jain created HBASE-27791: -- Summary: Upgrade Vega and its related JS libraries Key: HBASE-27791 URL: https://issues.apache.org/jira/browse/HBASE-27791 Project: HBase Issue Type: Task Components: UI Reporter: Nihal Jain Assignee: Nihal Jain HBase is using Vega v5.19.1, which was released on 21 Jan 2021 and is vulnerable to cross-site scripting (XSS); CVE IDs: * [CVE-2023-26486|https://nvd.nist.gov/vuln/detail/CVE-2023-26486] * [CVE-2023-26487|https://nvd.nist.gov/vuln/detail/CVE-2023-26487] This Jira is to upgrade to the latest releases: * [https://github.com/vega/vega/releases/tag/v5.24.0] * [https://github.com/vega/vega-lite/releases/tag/v5.6.1] * [https://github.com/vega/vega-embed/releases/tag/v6.21.3] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27754) [HBCK2] generateMissingTableDescriptorFile function eats up hadoop write permission error and gives false message "Table descriptor written successfully"
Nihal Jain created HBASE-27754: -- Summary: [HBCK2] generateMissingTableDescriptorFile function eats up hadoop write permission error and gives false message "Table descriptor written successfully" Key: HBASE-27754 URL: https://issues.apache.org/jira/browse/HBASE-27754 Project: HBase Issue Type: Bug Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HBASE-27753) Bump hbase to 2.4.16 and hbase-thirdparty to 4.1.4 for hbase-operator-tools
[ https://issues.apache.org/jira/browse/HBASE-27753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-27753. Resolution: Won't Fix > Bump hbase to 2.4.16 and hbase-thirdparty to 4.1.4 for hbase-operator-tools > --- > > Key: HBASE-27753 > URL: https://issues.apache.org/jira/browse/HBASE-27753 > Project: HBase > Issue Type: Improvement > Components: hbase-operator-tools >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Minor > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27753) Bump hbase to 2.4.16 and hbase-thirdparty to 4.1.4 for hbase-operator-tools
Nihal Jain created HBASE-27753: -- Summary: Bump hbase to 2.4.16 and hbase-thirdparty to 4.1.4 for hbase-operator-tools Key: HBASE-27753 URL: https://issues.apache.org/jira/browse/HBASE-27753 Project: HBase Issue Type: Improvement Components: hbase-operator-tools Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27751) Support hbase-operator-tools to compile with HBase 2.5.3
Nihal Jain created HBASE-27751: -- Summary: Support hbase-operator-tools to compile with HBase 2.5.3 Key: HBASE-27751 URL: https://issues.apache.org/jira/browse/HBASE-27751 Project: HBase Issue Type: Improvement Components: hbase-operator-tools Reporter: Nihal Jain Assignee: Nihal Jain hbase-operator-tools fails to compile against hbase 2.5.3 with following test failures. {code:java} [INFO] Running org.apache.hbase.TestMissingTableDescriptorGenerator [ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 30.149 s <<< FAILURE! - in org.apache.hbase.TestMissingTableDescriptorGenerator [ERROR] testTableinfoGeneratedWhenNoTableSpecified(org.apache.hbase.TestMissingTableDescriptorGenerator) Time elapsed: 16.734 s <<< ERROR! java.lang.IllegalArgumentException: hdfs://localhost:51882/user/nihaljain/test-data/de8af727-6c02-7a95-9beb-027d18fc6603/data/default/test-1/.tabledesc/.tableinfo.01.639 at org.apache.hbase.TestMissingTableDescriptorGenerator.testTableinfoGeneratedWhenNoTableSpecified(TestMissingTableDescriptorGenerator.java:145) [ERROR] shouldGenerateTableInfoBasedOnFileSystem(org.apache.hbase.TestMissingTableDescriptorGenerator) Time elapsed: 6.794 s <<< ERROR! java.lang.IllegalArgumentException: hdfs://localhost:51961/user/nihaljain/test-data/5ade0aa1-cb9a-a1da-b700-fe808eeda3b9/data/default/test-1/.tabledesc/.tableinfo.01.666 at org.apache.hbase.TestMissingTableDescriptorGenerator.shouldGenerateTableInfoBasedOnFileSystem(TestMissingTableDescriptorGenerator.java:120) [ERROR] shouldGenerateTableInfoBasedOnCachedTableDescriptor(org.apache.hbase.TestMissingTableDescriptorGenerator) Time elapsed: 6.621 s <<< ERROR! 
java.lang.IllegalArgumentException: hdfs://localhost:52022/user/nihaljain/test-data/d858258b-6ba1-8e4f-c118-4e30d8a5136f/data/default/test-1/.tabledesc/.tableinfo.01.666 at org.apache.hbase.TestMissingTableDescriptorGenerator.shouldGenerateTableInfoBasedOnCachedTableDescriptor(TestMissingTableDescriptorGenerator.java:107) {code} The goal is to allow hbase-operator-tools to compile with hbase 2.5.3 without any failures -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27724) [HBCK2] addFsRegionsMissingInMeta command should support dumping region list into a file which can be passed as input to assigns command
Nihal Jain created HBASE-27724: -- Summary: [HBCK2] addFsRegionsMissingInMeta command should support dumping region list into a file which can be passed as input to assigns command Key: HBASE-27724 URL: https://issues.apache.org/jira/browse/HBASE-27724 Project: HBase Issue Type: Improvement Components: hbase-operator-tools, hbck2 Reporter: Nihal Jain Assignee: Nihal Jain The _addFsRegionsMissingInMeta_ command currently outputs, as the last line of its output, a command which needs to be run with hbck2 {code:java} assigns 22d30d9e332af3272302cf780da14c3c 43245731f82e5bb907a4433f688574c1 5a19939f4f219ab177dd5b376dcb882f 774514b1027846c4e3b6702e193ce03d 7f6ad3360e0a4811c4dace8c1a901f40 8cd363e4da1b95fd43166f451546ad63 90e3414947f9500ec01f6672103f29d0{code} This is good, but the user has to copy and format the command, which can get really big depending on how many regions need to be assigned. _addFsRegionsMissingInMeta_ should support a flag, say -f, to facilitate dumping the region list into a file, which can then be passed as input to the _assigns_ command via the -i parameter. Sample expected use case: {code:java} # Dump output of command (in a formatted manner) to file hbase hbck -j hbase-hbck2-version.jar addFsRegionsMissingInMeta -f regions_to_assign.txt # Pass file as input to assigns hbase hbck -j hbase-hbck2-version.jar assigns -i regions_to_assign.txt{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27680) Bump hbase to 2.4.16, hadoop to 3.1.2 and spark to 3.2.3 for hbase-connectors
Nihal Jain created HBASE-27680: -- Summary: Bump hbase to 2.4.16, hadoop to 3.1.2 and spark to 3.2.3 for hbase-connectors Key: HBASE-27680 URL: https://issues.apache.org/jira/browse/HBASE-27680 Project: HBase Issue Type: Task Components: hbase-connectors Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27639) Support hbase-connectors compilation with HBase 2.5.3, Hadoop 3.2.4 and Spark 3.2.3
Nihal Jain created HBASE-27639: -- Summary: Support hbase-connectors compilation with HBase 2.5.3, Hadoop 3.2.4 and Spark 3.2.3 Key: HBASE-27639 URL: https://issues.apache.org/jira/browse/HBASE-27639 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-27626) Suppress noisy logging in client.ConnectionImplementation
Nihal Jain created HBASE-27626: -- Summary: Suppress noisy logging in client.ConnectionImplementation Key: HBASE-27626 URL: https://issues.apache.org/jira/browse/HBASE-27626 Project: HBase Issue Type: Task Components: logging Affects Versions: 2.5.3 Reporter: Nihal Jain Assignee: Nihal Jain _client.ConnectionImplementation_ logs a lot at INFO level: {code:java} hbase:001:0> restore_snapshot 'tableSnapshot' 2023-02-09 05:15:35,538 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:35,538 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:36,599 INFO [main] client.HBaseAdmin: Taking restore-failsafe snapshot: hbase-failsafe-tableSnapshot-1675919736599 2023-02-09 05:15:36,608 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:36,608 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:36,806 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:36,806 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:37,026 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:37,026 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:37,334 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:37,334 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:37,341 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:37,342 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:37,413 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 
2023-02-09 05:15:37,413 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:37,534 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:37,535 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:37,742 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:37,742 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:38,055 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:38,056 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:38,561 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:38,561 INFO [main] client.ConnectionImplementation: Getting master state using rpc call 2023-02-09 05:15:38,568 INFO [main] client.HBaseAdmin: Operation: MODIFY, Table Name: nj:testImport, procId: 392 completed 2023-02-09 05:15:38,568 INFO [main] client.HBaseAdmin: Deleting restore-failsafe snapshot: hbase-failsafe-tableSnapshot-1675919736599 2023-02-09 05:15:38,570 INFO [main] client.ConnectionImplementation: Getting master connection state from TTL Cache 2023-02-09 05:15:38,570 INFO [main] client.ConnectionImplementation: Getting master state using rpc call Took 3.1595 seconds {code} We should log these lines at TRACE level. -- This message was sent by Atlassian Jira (v8.20.10#820010)
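Until the logging calls themselves are moved to TRACE, operators could silence the noise with a logger override. A log4j 1.x properties fragment, assuming the class's usual package name, might look like:

```properties
# Assumed logger name; suppresses the repeated "Getting master ..." INFO lines
log4j.logger.org.apache.hadoop.hbase.client.ConnectionImplementation=WARN
```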
[jira] [Resolved] (HBASE-26292) Update jetty version to fix CVE-2021-34429
[ https://issues.apache.org/jira/browse/HBASE-26292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-26292. Resolution: Duplicate > Update jetty version to fix CVE-2021-34429 > -- > > Key: HBASE-26292 > URL: https://issues.apache.org/jira/browse/HBASE-26292 > Project: HBase > Issue Type: Bug > Components: dependencies, thirdparty >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Major > > CVE-2021-34429 issue is fixed in Jetty 9.4.43.v20210629 and we are using > jetty 9.4.41.v20210516. > https://github.com/apache/hbase-thirdparty/blob/c28a235236b9f63ec1d36431e5d1b6c8d4b66d90/pom.xml#L139 -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HBASE-26464) WALPrettyPrinter: JSON output is broken/malformed
Nihal Jain created HBASE-26464: -- Summary: WALPrettyPrinter: JSON output is broken/malformed Key: HBASE-26464 URL: https://issues.apache.org/jira/browse/HBASE-26464 Project: HBase Issue Type: Bug Components: tooling Reporter: Nihal Jain Assignee: Nihal Jain The JSON output of the WAL pretty printer is malformed and cannot be parsed. There are two major issues: * The writer classes and cell codec classes are added (as plain text) whenever printing of a new WAL file starts (NOTE: WALPrettyPrinter can take multiple WAL files as input) * The edit heap size and position are outside the JSON formatting These are side effects of past patches which missed covering the JSON output scenarios. Current sample output, which is malformed JSON: {code:java} [Writer Classes: ProtobufLogWriter AsyncProtobufLogWriter Cell Codec Class: org.apache.hadoop.hbase.regionserver.wal.WALCellCodec {"sequence":"3","region":"f0153c5d890d96f06ec304eebc4aacf9","actions":[{"qualifier":"HBASE::REGION_EVENT::REGION_OPEN","vlen":176,"row":"\\x00","type":"Put","family":"METAFAMILY","timestamp":"1637170532553","total_size_sum":"288"}],"table":{"name":[104,98,97,115,101,58,110,97,109,101,115,112,97,99,101],"nameAsString":"hbase:namespace","namespace":[104,98,97,115,101],"namespaceAsString":"hbase","qualifier":[110,97,109,101,115,112,97,99,101],"qualifierAsString":"namespace","systemTable":true,"hashCode":771659354}}edit heap size: 328 position: 389 ,{"sequence":"26","region":"e42fbcf354fcf7e4e425f2d06130797c","actions":[{"qualifier":"HBASE::REGION_EVENT::REGION_OPEN","vlen":218,"row":"\\x00","type":"Put","family":"METAFAMILY","timestamp":"1637170533215","total_size_sum":"336"}],"table":{"name":[116,101,115,116],"nameAsString":"test","namespace":[100,101,102,97,117,108,116],"namespaceAsString":"default","qualifier":[116,101,115,116],"qualifierAsString":"test","systemTable":false,"hashCode":3556498}}edit heap size: 376 position: 726 
,{"sequence":"4","region":"f0153c5d890d96f06ec304eebc4aacf9","actions":[{"qualifier":"d","vlen":9,"row":"default","type":"Put","family":"info","timestamp":"1637170533278","total_size_sum":"96"}],"table":{"name":[104,98,97,115,101,58,110,97,109,101,115,112,97,99,101],"nameAsString":"hbase:namespace","namespace":[104,98,97,115,101],"namespaceAsString":"hbase","qualifier":[110,97,109,101,115,112,97,99,101],"qualifierAsString":"namespace","systemTable":true,"hashCode":771659354}}edit heap size: 136 position: 834 ,{"sequence":"5","region":"f0153c5d890d96f06ec304eebc4aacf9","actions":[{"qualifier":"d","vlen":7,"row":"hbase","type":"Put","family":"info","timestamp":"1637170533340","total_size_sum":"88"}],"table":{"name":[104,98,97,115,101,58,110,97,109,101,115,112,97,99,101],"nameAsString":"hbase:namespace","namespace":[104,98,97,115,101],"namespaceAsString":"hbase","qualifier":[110,97,109,101,115,112,97,99,101],"qualifierAsString":"namespace","systemTable":true,"hashCode":771659354}}edit heap size: 128 position: 938 ,{"sequence":"27","region":"e42fbcf354fcf7e4e425f2d06130797c","actions":[{"qualifier":"","vlen":2,"row":"r6","type":"Put","family":"cf","timestamp":"1637170714545","total_size_sum":"80"}],"table":{"name":[116,101,115,116],"nameAsString":"test","namespace":[100,101,102,97,117,108,116],"namespaceAsString":"default","qualifier":[116,101,115,116],"qualifierAsString":"test","systemTable":false,"hashCode":3556498}}edit heap size: 120 position: 1020 Writer Classes: ProtobufLogWriter AsyncProtobufLogWriter Cell Codec Class: org.apache.hadoop.hbase.regionserver.wal.WALCellCodec 
,{"sequence":"3","region":"f0153c5d890d96f06ec304eebc4aacf9","actions":[{"qualifier":"HBASE::REGION_EVENT::REGION_OPEN","vlen":176,"row":"\\x00","type":"Put","family":"METAFAMILY","timestamp":"1637170532553","total_size_sum":"288"}],"table":{"name":[104,98,97,115,101,58,110,97,109,101,115,112,97,99,101],"nameAsString":"hbase:namespace","namespace":[104,98,97,115,101],"namespaceAsString":"hbase","qualifier":[110,97,109,101,115,112,97,99,101],"qualifierAsString":"namespace","systemTable":true,"hashCode":771659354}}edit heap size: 328 position: 389 ,{"sequence":"26","region":"e42fbcf354fcf7e4e425f2d06130797c","actions":[{"qualifier":"HBASE::REGION_EVENT::REGION_OPEN","vlen":218,"row":"\\x00","type":"Put","family":"METAFAMILY","timestamp":"1637170533215","total_size_sum":"336"}],"table":{"name":[116,101,115,116],"nameAsString":"test","namespace":[100,101,102,97,117,108,116],"namespaceAsString":"default","qualifier":[116,101,115,116],"qualifierAsString":"test","systemTable":false,"hashCode":3556498}}edit heap size: 376 position: 726
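A small sketch of what well-formed output could look like, with the out-of-band pieces (writer classes, edit heap size, position) folded into the JSON itself. The field names here are illustrative, not the actual WALPrettyPrinter fix.

```java
// Illustrative sketch: emit WAL entries as one well-formed JSON document,
// folding the previously out-of-band fields into proper JSON fields
// instead of printing them as bare text between the objects.
public class WalJsonSketch {
  static String entry(String sequence, String region, long heapSize, long position) {
    return String.format(
        "{\"sequence\":\"%s\",\"region\":\"%s\",\"edit_heap_size\":%d,\"position\":%d}",
        sequence, region, heapSize, position);
  }

  static String document(String writerClasses, String... entries) {
    // Per-file metadata becomes a field, entries become a JSON array.
    return "{\"writer_classes\":\"" + writerClasses + "\",\"entries\":["
        + String.join(",", entries) + "]}";
  }

  public static void main(String[] args) {
    System.out.println(document("ProtobufLogWriter",
        entry("3", "f0153c5d890d96f06ec304eebc4aacf9", 328, 389),
        entry("26", "e42fbcf354fcf7e4e425f2d06130797c", 376, 726)));
  }
}
```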
[jira] [Created] (HBASE-22129) Rewrite TestSpaceQuotas as parameterized tests
Nihal Jain created HBASE-22129: -- Summary: Rewrite TestSpaceQuotas as parameterized tests Key: HBASE-22129 URL: https://issues.apache.org/jira/browse/HBASE-22129 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain In {{TestSpaceQuotas}}, for a particular test scenario we have a new method for each quota type. This calls for rewriting the tests as {{Parameterized}} tests. In this Jira I plan to split {{TestSpaceQuotas}} into: * *{{SpaceQuotasTestBase}}*: Base class for tests * *{{TestSpaceQuotas}}*: Non-parameterized tests * *{{TestSpaceQuotasOnTables}}*: Parameterized table space quota tests * *{{TestSpaceQuotasOnNamespaces}}*: Parameterized namespace space quota tests Mostly we need to do what was done in [HBASE-20662 Patch 2|#file-9]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
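The idea behind the split can be sketched in plain Java: JUnit's {{Parameterized}} runner essentially runs the same scenario once per quota type instead of requiring one hand-written method per type. A minimal illustration, with names that are ours rather than the actual test classes:

```java
// Sketch of the parameterization idea (not the real TestSpaceQuotas code):
// one shared scenario, executed once per quota-type "parameter", which is
// what JUnit's @Parameterized runner automates for test classes.
public class ParameterizedQuotaSketch {
  enum QuotaType { TABLE_SPACE, NAMESPACE_SPACE }

  static String runScenario(QuotaType type) {
    // In the real tests this would set the quota, write until violation,
    // and verify the enforcement for the given quota type.
    return "verified " + type;
  }

  public static void main(String[] args) {
    for (QuotaType type : QuotaType.values()) { // one run per parameter
      System.out.println(runScenario(type));
    }
  }
}
```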
[jira] [Reopened] (HBASE-20662) Increasing space quota on a violated table does not remove SpaceViolationPolicy.DISABLE enforcement
[ https://issues.apache.org/jira/browse/HBASE-20662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain reopened HBASE-20662: > Increasing space quota on a violated table does not remove > SpaceViolationPolicy.DISABLE enforcement > --- > > Key: HBASE-20662 > URL: https://issues.apache.org/jira/browse/HBASE-20662 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0, 2.0.0 >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Major > Fix For: 3.0.0, 2.2.0 > > Attachments: HBASE-20662.branch-2.1.001.patch, > HBASE-20662.master.001.patch, HBASE-20662.master.002.patch, > HBASE-20662.master.003.patch, HBASE-20662.master.004.patch, > HBASE-20662.master.004.patch, HBASE-20662.master.005.patch, > HBASE-20662.master.006.patch, HBASE-20662.master.007.patch, > HBASE-20662.master.008.patch, HBASE-20662.master.008.patch, > HBASE-20662.master.009.patch, HBASE-20662.master.009.patch, > HBASE-20662.master.010.patch, screenshot.png > > > *Steps to reproduce* > * Create a table and set quota with {{SpaceViolationPolicy.DISABLE}} having > limit say 2MB > * Now put rows until space quota is violated and table gets disabled > * Next, increase space quota with limit say 4MB on the table > * Now try putting a row into the table > {code:java} > private void testSetQuotaThenViolateAndFinallyIncreaseQuota() throws > Exception { > SpaceViolationPolicy policy = SpaceViolationPolicy.DISABLE; > Put put = new Put(Bytes.toBytes("to_reject")); > put.addColumn(Bytes.toBytes(SpaceQuotaHelperForTests.F1), > Bytes.toBytes("to"), > Bytes.toBytes("reject")); > // Do puts until we violate space policy > final TableName tn = writeUntilViolationAndVerifyViolation(policy, put); > // Now, increase limit > setQuotaLimit(tn, policy, 4L); > // Put some row now: should not violate as quota limit increased > verifyNoViolation(policy, tn, put); > } > {code} > *Expected* > We should be able to put data as long as newly set quota limit is not reached > *Actual* > We fail to put any 
new row even after increasing the limit > *Root cause* > Increasing the quota on a violated table triggers the table to be enabled, but > since the table is already in violation, the system does not allow it to be > enabled (perhaps assuming that a user is trying to enable it) > *Relevant exception trace* > {noformat} > 2018-05-31 00:34:27,563 INFO [regionserver/root1-ThinkPad-T440p:0.Chore.1] > client.HBaseAdmin$14(844): Started enable of > testSetQuotaAndThenIncreaseQuotaWithDisable0 > 2018-05-31 00:34:27,571 DEBUG > [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=42525] > ipc.CallRunner(142): callId: 11 service: MasterService methodName: > EnableTable size: 104 connection: 127.0.0.1:38030 deadline: 1527707127568, > exception=org.apache.hadoop.hbase.security.AccessDeniedException: Enabling > the table 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to > a violated space quota. > 2018-05-31 00:34:27,571 ERROR [regionserver/root1-ThinkPad-T440p:0.Chore.1] > quotas.RegionServerSpaceQuotaManager(210): Failed to disable space violation > policy for testSetQuotaAndThenIncreaseQuotaWithDisable0. This table will > remain in violation. > org.apache.hadoop.hbase.security.AccessDeniedException: > org.apache.hadoop.hbase.security.AccessDeniedException: Enabling the table > 'testSetQuotaAndThenIncreaseQuotaWithDisable0' is disallowed due to a > violated space quota. 
> at org.apache.hadoop.hbase.master.HMaster$6.run(HMaster.java:2275) > at > org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131) > at org.apache.hadoop.hbase.master.HMaster.enableTable(HMaster.java:2258) > at > org.apache.hadoop.hbase.master.MasterRpcServices.enableTable(MasterRpcServices.java:725) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at >
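The expected quota semantics exercised by the test above can be condensed into a small plain-Java sketch. The numbers follow the reproduction steps (2 MB limit, writes until violation, limit raised to 4 MB); this is illustrative arithmetic only, not the real SpaceQuota implementation.

```java
// Minimal sketch of the expected behaviour from the steps above: once the
// limit is raised past current usage, the table should no longer be in
// violation and the DISABLE policy should be lifted. Usage of ~3 MB is an
// illustrative value, not taken from the actual test.
public class QuotaLimitSketch {
  static boolean inViolation(long usageMb, long limitMb) {
    return usageMb > limitMb;
  }

  public static void main(String[] args) {
    long usageMb = 3; // rows written until the 2 MB quota was violated
    System.out.println(inViolation(usageMb, 2)); // true: table gets disabled
    System.out.println(inViolation(usageMb, 4)); // false: raising the limit should re-enable it
  }
}
```

The bug is that the second case still behaves like the first: the re-enable attempt is rejected because the violation state is consulted before the new limit takes effect.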
[jira] [Resolved] (HBASE-21891) New space quota policy doesn't take effect if quota policy is changed after violation
[ https://issues.apache.org/jira/browse/HBASE-21891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-21891. Resolution: Duplicate Based on [~a00408367]'s [testing|https://issues.apache.org/jira/browse/HBASE-20662?focusedCommentId=16785275&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16785275], resolving this as a duplicate. This issue is fixed by HBASE-20662. > New space quota policy doesn't take effect if quota policy is changed after > violation > - > > Key: HBASE-21891 > URL: https://issues.apache.org/jira/browse/HBASE-21891 > Project: HBase > Issue Type: Bug >Reporter: Ajeet Rai >Priority: Minor > > *Steps to reproduce* > 1: set_quota TYPE => SPACE, TABLE => 'test25', LIMIT => '2M', POLICY => > NO_WRITES > 2: ./hbase pe --table="test25" --nomapred --rows=300 sequentialWrite 10 > 3: Observe that after some time data usage is 3 mb and policy is in violation > 4: Now try to insert some data again in the table and observe that the operation > fails due to NoWritesViolationPolicyEnforcement > 5: Now change the quota policy > set_quota TYPE => SPACE, TABLE => 'test25', LIMIT => '2M', POLICY => > NO_WRITES_COMPACTIONS > 6: Now again try to insert data once the new policy takes effect > 7: Observe that the operation still fails, but because of the old policy, not the new > one. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21881) Use Forbidden API Checker to prevent future usages of forbidden api's
Nihal Jain created HBASE-21881: -- Summary: Use Forbidden API Checker to prevent future usages of forbidden api's Key: HBASE-21881 URL: https://issues.apache.org/jira/browse/HBASE-21881 Project: HBase Issue Type: Improvement Components: build Reporter: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21830) Backport HBASE-20577 (Make Log Level page design consistent with the design of other pages in UI) to branch-2
Nihal Jain created HBASE-21830: -- Summary: Backport HBASE-20577 (Make Log Level page design consistent with the design of other pages in UI) to branch-2 Key: HBASE-21830 URL: https://issues.apache.org/jira/browse/HBASE-21830 Project: HBase Issue Type: Bug Components: UI, Usability Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-12947) Replicating DDL statements like create from one cluster to another
[ https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-12947. Resolution: Duplicate > Replicating DDL statements like create from one cluster to another > --- > > Key: HBASE-12947 > URL: https://issues.apache.org/jira/browse/HBASE-12947 > Project: HBase > Issue Type: New Feature > Components: Replication >Affects Versions: 2.0.0 >Reporter: Prabhu Joseph >Priority: Critical > > Problem: > When tables are created dynamically in an HBase cluster, the Replication > feature can't be used, as the new table does not exist in the peer cluster. To use > replication, we need to create the same table in the peer cluster as well. > Having an API to replicate the create table statement at the peer cluster would be > helpful in such cases. > Solution: > create 'table','cf',replication => true , peerFlag => true > if peerFlag = true, the table with the column family has to be created at the > peer cluster. > Special cases, like enabling replication at the peer cluster as well for cyclic > replication, have to be considered. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (HBASE-12947) Replicating DDL statements like create from one cluster to another
[ https://issues.apache.org/jira/browse/HBASE-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain reopened HBASE-12947: > Replicating DDL statements like create from one cluster to another > --- > > Key: HBASE-12947 > URL: https://issues.apache.org/jira/browse/HBASE-12947 > Project: HBase > Issue Type: New Feature > Components: Replication >Affects Versions: 2.0.0 >Reporter: Prabhu Joseph >Priority: Critical > > Problem: > When tables are created dynamically in an HBase cluster, the Replication > feature can't be used, as the new table does not exist in the peer cluster. To use > replication, we need to create the same table in the peer cluster as well. > Having an API to replicate the create table statement at the peer cluster would be > helpful in such cases. > Solution: > create 'table','cf',replication => true , peerFlag => true > if peerFlag = true, the table with the column family has to be created at the > peer cluster. > Special cases, like enabling replication at the peer cluster as well for cyclic > replication, have to be considered. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-21755) RS aborts while performing replication with wal dir on hdfs, root dir on s3
[ https://issues.apache.org/jira/browse/HBASE-21755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-21755. Resolution: Duplicate > RS aborts while performing replication with wal dir on hdfs, root dir on s3 > --- > > Key: HBASE-21755 > URL: https://issues.apache.org/jira/browse/HBASE-21755 > Project: HBase > Issue Type: Bug > Components: Filesystem Integration, Replication, wal >Affects Versions: 1.5.0, 2.1.3 >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Critical > Labels: s3 > > *Environment/Configuration* > - _hbase.wal.dir_ : Configured to be on hdfs > - _hbase.rootdir_ : Configured to be on s3 > In replication scenario, while trying to get archived log dir (using method > [WALEntryStream.java#L314|https://github.com/apache/hbase/blob/da92b3e0061a7c67aa9a3e403d68f3b56bf59370/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java#L314]) > we get the following exception: > {code:java} > 2019-01-21 17:43:55,440 ERROR > [RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2] > regionserver.ReplicationSource: Unexpected exception in > RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2 > > currentPath=hdfs://dummy_path/hbase/WALs/host2,2,1548063439555/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594 > java.lang.IllegalArgumentException: Wrong FS: > s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594, > expected: hdfs://dummy_path > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246) > at > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622) > 
at > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:161) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148) > 2019-01-21 17:43:55,444 ERROR > [RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2] > regionserver.HRegionServer: * ABORTING region server > host2,2,1548063439555: Unexpected exception in > RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2 > * > java.lang.IllegalArgumentException: Wrong FS: > s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594, > expected: hdfs://dummy_path > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246) > at > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622) > at > org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634) > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319) > at > org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404) > at >
[jira] [Created] (HBASE-21795) Client application may get stuck (time bound) if a table modify op is called immediately after split op
Nihal Jain created HBASE-21795: -- Summary: Client application may get stuck (time bound) if a table modify op is called immediately after split op Key: HBASE-21795 URL: https://issues.apache.org/jira/browse/HBASE-21795 Project: HBase Issue Type: Bug Reporter: Nihal Jain Assignee: Nihal Jain *Steps:* * Create a table * Split the table * Modify the table immediately after splitting *Expected*: The modify table procedure completes and control returns to the client *Actual:* The modify table procedure completes, but control does not return to the client until the catalog janitor runs and deletes the parent, or a future timeout occurs -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21756) Backport HBASE-21279 (Split TestAdminShell into several tests) to branch-2
Nihal Jain created HBASE-21756: -- Summary: Backport HBASE-21279 (Split TestAdminShell into several tests) to branch-2 Key: HBASE-21756 URL: https://issues.apache.org/jira/browse/HBASE-21756 Project: HBase Issue Type: Test Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21755) RS aborts while performing replication with wal dir on s3, root dir on hdfs
Nihal Jain created HBASE-21755: -- Summary: RS aborts while performing replication with wal dir on s3, root dir on hdfs Key: HBASE-21755 URL: https://issues.apache.org/jira/browse/HBASE-21755 Project: HBase Issue Type: Bug Components: Filesystem Integration, Replication Affects Versions: 2.1.3 Reporter: Nihal Jain Assignee: Nihal Jain *Environment/Configuration* - _hbase.wal.dir_ : Configured to be on s3 - _hbase.rootdir_ : Configured to be on hdfs In replication scenario, while trying to get archived log dir (using method [WALEntryStream.java#L315|https://github.com/apache/hbase/blob/b0131e19f4b9ced05f501c61596191cb8a86b660/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java#L315]) we get the following exception: {code:java} 2019-01-21 17:43:55,440 ERROR [RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2] regionserver.ReplicationSource: Unexpected exception in RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2 currentPath=hdfs://dummy_path/hbase/WALs/host2,2,1548063439555/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594 java.lang.IllegalArgumentException: Wrong FS: s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594, expected: hdfs://dummy_path at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781) at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246) at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622) at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:161) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148) 2019-01-21 17:43:55,444 ERROR [RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2] regionserver.HRegionServer: * ABORTING region server host2,2,1548063439555: Unexpected exception in RS_REFRESH_PEER-regionserver/host2:2-1.replicationSource,2.replicationSource.wal-reader.host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1,2 * java.lang.IllegalArgumentException: Wrong FS: s3a://xx/hbase128/oldWALs/host2%2C2%2C1548063439555.host2%2C2%2C1548063439555.regiongroup-1.1548063492594, expected: hdfs://dummy_path at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781) at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:246) at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1622) at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1619) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1634) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:465) at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1742) at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.getArchivedLog(WALEntryStream.java:319) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:404) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:161) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148) {code} Current code is: {code} private Path getArchivedLog(Path path) throws IOException { Path rootDir = FSUtils.getRootDir(conf); // Try found the log in old dir Path oldLogDir =
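The "Wrong FS" failure above boils down to probing a path that lives on one filesystem with a FileSystem object bound to another. A self-contained sketch of that mismatch, using plain java.net.URI as a stand-in for Hadoop's Path/FileSystem objects (the helper name here is illustrative, not the real Hadoop API):

```java
import java.net.URI;

// Sketch of the scheme mismatch behind the abort: the archived log sits
// under hbase.rootdir's oldWALs (s3a), but it is probed through the WAL
// dir's filesystem (hdfs), which rejects paths from a different scheme.
public class WrongFsSketch {
  // A Hadoop FileSystem only accepts paths whose scheme matches its own;
  // this mimics that checkPath() precondition.
  static boolean acceptsPath(URI fsUri, URI pathUri) {
    return fsUri.getScheme().equals(pathUri.getScheme());
  }

  public static void main(String[] args) {
    URI walFs = URI.create("hdfs://dummy_path");
    URI archivedLog = URI.create("s3a://bucket/hbase/oldWALs/some-wal");
    // Direction of the fix: derive the filesystem from the path being
    // probed (oldLogDir under rootdir) rather than reusing the WAL fs.
    System.out.println(acceptsPath(walFs, archivedLog)); // false -> "Wrong FS"
    System.out.println(acceptsPath(URI.create("s3a://bucket"), archivedLog)); // true
  }
}
```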
[jira] [Created] (HBASE-21749) RS UI may throw NPE and make rs-status page inaccessible with multiwal and replication
Nihal Jain created HBASE-21749: -- Summary: RS UI may throw NPE and make rs-status page inaccessible with multiwal and replication Key: HBASE-21749 URL: https://issues.apache.org/jira/browse/HBASE-21749 Project: HBase Issue Type: Bug Components: Replication, UI Reporter: Nihal Jain Assignee: Nihal Jain Sometimes the RS UI fails to open as we get an NPE; this happens because {{shipper.getCurrentPath()}} may return null. We should have a null check at [ReplicationSource.java#L331|https://github.com/apache/hbase/blob/a2f6768acdc30b789c7cb8482b9f4352803f60a1/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L331] {code:java} Path currentPath = shipper.getCurrentPath(); try { fileSize = getFileSize(currentPath); } catch (IOException e) { LOG.warn("Ignore the exception as the file size of HLog only affects the web ui", e); fileSize = -1; }{code} !0b8e95c7-6715-42bf-88d2-b2edc9215022.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
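A guarded version of the snippet can be sketched in plain Java. ShipperStub is a hypothetical stand-in for the real shipper class, and the length-based size lookup is illustrative; only the null-check pattern is the point.

```java
// Sketch of the null guard proposed above. ShipperStub stands in for the
// real ReplicationSourceShipper, whose getCurrentPath() may return null
// before any WAL has been assigned to the shipper.
public class NullSafeFileSizeSketch {
  interface ShipperStub {
    String getCurrentPath();
  }

  // Report -1 (the existing "unknown size" sentinel) instead of letting a
  // null path propagate into a file-size lookup and NPE the rs-status page.
  static long safeFileSize(ShipperStub shipper) {
    String currentPath = shipper.getCurrentPath();
    if (currentPath == null) {
      return -1;
    }
    return currentPath.length(); // stand-in for a real getFileSize(currentPath)
  }

  public static void main(String[] args) {
    System.out.println(safeFileSize(() -> null));        // -1, UI stays up
    System.out.println(safeFileSize(() -> "/wal/file")); // 9
  }
}
```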
[jira] [Created] (HBASE-21681) Procedure V2 based enableTableReplication
Nihal Jain created HBASE-21681: -- Summary: Procedure V2 based enableTableReplication Key: HBASE-21681 URL: https://issues.apache.org/jira/browse/HBASE-21681 Project: HBase Issue Type: Improvement Components: Admin, proc-v2 Reporter: Nihal Jain Assignee: Nihal Jain We should take advantage of the procedure v2 framework and reimplement/refactor the {{enableTableReplication()}} API to make it more robust. Currently it does not handle failover scenarios, even though it can perform a lot of create-table ops (and thus run for quite a while) when we have many peers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21672) Allow skipping HDFS block distribution computation
Nihal Jain created HBASE-21672: -- Summary: Allow skipping HDFS block distribution computation Key: HBASE-21672 URL: https://issues.apache.org/jira/browse/HBASE-21672 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain We should have a configuration to skip HDFS block distribution calculation in HBase. For example, on file systems that do not surface locality, such as S3, calculating block distribution is not useful, so we should have a way to skip it. For this, we can provide a new configuration key, say {{hbase.block.distribution.skip.computation}}, which would be {{false}} by default. Users on filesystems such as S3 may choose to set this to {{true}}, thus skipping block distribution computation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
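The proposed key could be wired up roughly as below. Only the key name and its {{false}} default come from the issue; java.util.Properties stands in for Hadoop's Configuration, and the method name is hypothetical.

```java
import java.util.Properties;

// Sketch of gating block-distribution computation on the proposed key.
public class SkipBlockDistributionSketch {
  static final String SKIP_KEY = "hbase.block.distribution.skip.computation";

  static boolean shouldComputeBlockDistribution(Properties conf) {
    // Defaults to false, preserving today's behaviour unless the user
    // opts out (e.g. on S3, where locality information is meaningless).
    boolean skip = Boolean.parseBoolean(conf.getProperty(SKIP_KEY, "false"));
    return !skip;
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    System.out.println(shouldComputeBlockDistribution(conf)); // true: compute by default
    conf.setProperty(SKIP_KEY, "true");
    System.out.println(shouldComputeBlockDistribution(conf)); // false: skipped on request
  }
}
```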
[jira] [Created] (HBASE-21645) Perform sanity check and disallow table creation/modification with region replication < 1
Nihal Jain created HBASE-21645: -- Summary: Perform sanity check and disallow table creation/modification with region replication < 1 Key: HBASE-21645 URL: https://issues.apache.org/jira/browse/HBASE-21645 Project: HBase Issue Type: Improvement Affects Versions: 2.1.1, 3.0.0, 1.5.0, 2.1.2 Reporter: Nihal Jain Assignee: Nihal Jain We should perform a sanity check and disallow table creation with region replication < 1, or modification of an existing table with a new region replication value < 1. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
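The check itself is small; a sketch of what it could look like follows (the method name and exception choice are illustrative, not the actual HBase sanity-check code):

```java
// Sketch of the proposed sanity check: reject a region replication value
// below 1 at table create/modify time rather than accepting it silently.
public class RegionReplicationCheckSketch {
  static void checkRegionReplication(int regionReplication) {
    if (regionReplication < 1) {
      throw new IllegalArgumentException(
          "REGION_REPLICATION must be >= 1, got " + regionReplication);
    }
  }

  public static void main(String[] args) {
    checkRegionReplication(3); // valid: passes silently
    try {
      checkRegionReplication(0); // invalid: rejected up front
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```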
[jira] [Created] (HBASE-21644) Modify table procedure runs infinitely for a table having region replication > 1
Nihal Jain created HBASE-21644: -- Summary: Modify table procedure runs infinitely for a table having region replication > 1 Key: HBASE-21644 URL: https://issues.apache.org/jira/browse/HBASE-21644 Project: HBase Issue Type: Bug Components: Admin Affects Versions: 2.1.1, 3.0.0, 2.1.2 Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21636) Enhance the shell scan command to support missing scanner specifications like ReadType, IsolationLevel etc.
Nihal Jain created HBASE-21636: -- Summary: Enhance the shell scan command to support missing scanner specifications like ReadType, IsolationLevel etc. Key: HBASE-21636 URL: https://issues.apache.org/jira/browse/HBASE-21636 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 2.0.0, 3.0.0, 2.1.2 Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21629) draining_servers.rb is broken
Nihal Jain created HBASE-21629: -- Summary: draining_servers.rb is broken Key: HBASE-21629 URL: https://issues.apache.org/jira/browse/HBASE-21629 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 2.1.1, 3.0.0, 2.1.2 Reporter: Nihal Jain Assignee: Nihal Jain 1) Handle missing methods and implementation changes in core code. * In [ZKWatcher.java|https://github.com/apache/hbase/blob/12786f80c14c6f2c3c111a55bbf431fb2e81e828/hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java#L79], the variable znodePaths has been changed from public to private (see HBASE-19761). Currently the script directly references znodePaths, which results in an exception. * Also, the joinZNode method was moved to ZNodePaths and removed from ZKUtil (see HBASE-19200). The script relies on the non-existent ZKUtil.joinZNode(). 2) Close the zk watcher while listing draining servers: the list functionality does not close the zkw instance. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21621) Reversed scan does not return expected number of rows
Nihal Jain created HBASE-21621: -- Summary: Reversed scan does not return expected number of rows Key: HBASE-21621 URL: https://issues.apache.org/jira/browse/HBASE-21621 Project: HBase Issue Type: Bug Components: scan Affects Versions: 2.1.1, 3.0.0 Reporter: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21475) Put mutation (having TTL set) added via co-processor is retrieved even after TTL expires
Nihal Jain created HBASE-21475: -- Summary: Put mutation (having TTL set) added via co-processor is retrieved even after TTL expires Key: HBASE-21475 URL: https://issues.apache.org/jira/browse/HBASE-21475 Project: HBase Issue Type: Bug Components: Coprocessors Affects Versions: 2.1.1, 3.0.0 Reporter: Nihal Jain Assignee: Nihal Jain -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21404) Master/RS navbar active state does not work
Nihal Jain created HBASE-21404: -- Summary: Master/RS navbar active state does not work Key: HBASE-21404 URL: https://issues.apache.org/jira/browse/HBASE-21404 Project: HBase Issue Type: Bug Components: UI Reporter: Nihal Jain Attachments: master_after.png, master_before.png -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21297) ModifyTableProcedure can throw TNDE instead of IOE in case of REGION_REPLICATION change
Nihal Jain created HBASE-21297: -- Summary: ModifyTableProcedure can throw TNDE instead of IOE in case of REGION_REPLICATION change Key: HBASE-21297 URL: https://issues.apache.org/jira/browse/HBASE-21297 Project: HBase Issue Type: Improvement Reporter: Nihal Jain Assignee: Nihal Jain Currently {{ModifyTableProcedure}} throws an {{IOException}} (See [ModifyTableProcedure.java#L252|https://github.com/apache/hbase/blob/924d183ba0e67b975e998f6006c993f457e03c20/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableProcedure.java#L252]) when a user tries to modify REGION_REPLICATION for an enabled table. Instead, it can throw a more specific {{TableNotDisabledException}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-20472) InfoServer doesnot honour any acl set by the admin
[ https://issues.apache.org/jira/browse/HBASE-20472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain resolved HBASE-20472. Resolution: Duplicate > InfoServer doesnot honour any acl set by the admin > -- > > Key: HBASE-20472 > URL: https://issues.apache.org/jira/browse/HBASE-20472 > Project: HBase > Issue Type: Bug > Components: security, UI >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Critical > Fix For: 3.0.0 > > Attachments: HBASE-20472.master.001.patch > > > The adminsAcl property can be used to restrict access to certain sections of > the web UI only to a particular set of users/groups. But in HBase, the adminsAcl > variable for InfoServer is always null, rendering it unable to honour any ACL > set by the admin. In fact, I could not find any property in HBase to specify an > ACL list for the web server. > *Analysis*: > * The *InfoServer* object forgets(?) to set any *adminsAcl* in the builder object > for the http server. > {code:java} > public InfoServer(String name, String bindAddress, int port, boolean findPort, > final Configuration c) { > . > . > > HttpServer.Builder builder = > new org.apache.hadoop.hbase.http.HttpServer.Builder(); > . > . > this.httpServer = builder.build(); > }{code} > [See InfoServer > constructor|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java#L55] > * The http server retrieves a null value and sets it as adminsAcl, which is > passed to the *createWebAppContext*() method > {code:java} > private HttpServer(final Builder b) throws IOException { > . > . > . > this.adminsAcl = b.adminsAcl; > this.webAppContext = createWebAppContext(b.name, b.conf, adminsAcl, > appDir); > > . > . 
> }{code} > [See L527 > HttpServer.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L527] > * This method next sets *ADMIN_ACL* attribute for the servlet context to > *null* > {code:java} > private static WebAppContext createWebAppContext(String name, > Configuration conf, AccessControlList adminsAcl, final String appDir) { > WebAppContext ctx = new WebAppContext(); > . > . > ctx.getServletContext().setAttribute(ADMINS_ACL, adminsAcl); > . > . > } > {code} > * Now any page having *HttpServer.hasAdministratorAccess*() will allow > access to everyone, making this check useless. > {code:java} > @Override > public void doGet(HttpServletRequest request, HttpServletResponse response > ) throws ServletException, IOException { > // Do the authorization > if (!HttpServer.hasAdministratorAccess(getServletContext(), request, > response)) { > return; > } > . > . > }{code} > [For example See L104 > LogLevel.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java#L104] > * *hasAdministratorAccess()* checks for the following and returns true, in > any case as *ADMIN_ACL* is always *null* > {code:java} > public static boolean hasAdministratorAccess( > ServletContext servletContext, HttpServletRequest request, > HttpServletResponse response) throws IOException { > . > . 
> if (servletContext.getAttribute(ADMINS_ACL) != null && > !userHasAdministratorAccess(servletContext, remoteUser)) { > response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "User " > + remoteUser + " is unauthorized to access this page."); >return false; > } > return true; > }{code} > [See line 1196 in > HttpServer|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L1196] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
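The failure mode described in the analysis can be condensed into a self-contained sketch: access is denied only when an ACL exists and excludes the user, so a null ACL admits everyone. Booleans stand in for the servlet context and the ACL membership lookup; this mirrors the quoted condition, not the real HttpServer API.

```java
// Condensed form of the quoted hasAdministratorAccess() condition: denial
// requires an ACL to be present AND the user to be outside it, so a null
// ACL (the bug) means every request is treated as an administrator.
public class AdminAclSketch {
  static boolean hasAdministratorAccess(Object adminsAcl, boolean userInAcl) {
    if (adminsAcl != null && !userInAcl) {
      return false; // would send SC_UNAUTHORIZED to the caller
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(hasAdministratorAccess(null, false));         // true: null ACL admits anyone
    System.out.println(hasAdministratorAccess(new Object(), false)); // false: real ACL denies outsiders
    System.out.println(hasAdministratorAccess(new Object(), true));  // true: ACL member allowed
  }
}
```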
[jira] [Reopened] (HBASE-20472) InfoServer doesnot honour any acl set by the admin
[ https://issues.apache.org/jira/browse/HBASE-20472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nihal Jain reopened HBASE-20472: > InfoServer doesnot honour any acl set by the admin > -- > > Key: HBASE-20472 > URL: https://issues.apache.org/jira/browse/HBASE-20472 > Project: HBase > Issue Type: Bug > Components: security, UI >Reporter: Nihal Jain >Assignee: Nihal Jain >Priority: Critical > Fix For: 3.0.0 > > Attachments: HBASE-20472.master.001.patch > > > The adminsAcl property can be used to restrict access to certain sections of > the web UI only to a particular set of users/groups. But in HBase, the adminsAcl > variable for InfoServer is always null, rendering it unable to honour any ACL > set by the admin. In fact, I could not find any property in HBase to specify an > ACL list for the web server. > *Analysis*: > * The *InfoServer* object forgets(?) to set any *adminsAcl* in the builder object > for the http server. > {code:java} > public InfoServer(String name, String bindAddress, int port, boolean findPort, > final Configuration c) { > . > . > > HttpServer.Builder builder = > new org.apache.hadoop.hbase.http.HttpServer.Builder(); > . > . > this.httpServer = builder.build(); > }{code} > [See InfoServer > constructor|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/InfoServer.java#L55] > * The http server retrieves a null value and sets it as adminsAcl, which is > passed to the *createWebAppContext*() method > {code:java} > private HttpServer(final Builder b) throws IOException { > . > . > . > this.adminsAcl = b.adminsAcl; > this.webAppContext = createWebAppContext(b.name, b.conf, adminsAcl, > appDir); > > . > . 
> }{code} > [See L527 > HttpServer.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L527] > * This method next sets *ADMIN_ACL* attribute for the servlet context to > *null* > {code:java} > private static WebAppContext createWebAppContext(String name, > Configuration conf, AccessControlList adminsAcl, final String appDir) { > WebAppContext ctx = new WebAppContext(); > . > . > ctx.getServletContext().setAttribute(ADMINS_ACL, adminsAcl); > . > . > } > {code} > * Now any page having *HttpServer.hasAdministratorAccess*() will allow > access to everyone, making this check useless. > {code:java} > @Override > public void doGet(HttpServletRequest request, HttpServletResponse response > ) throws ServletException, IOException { > // Do the authorization > if (!HttpServer.hasAdministratorAccess(getServletContext(), request, > response)) { > return; > } > . > . > }{code} > [For example See L104 > LogLevel.java|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java#L104] > * *hasAdministratorAccess()* checks for the following and returns true, in > any case as *ADMIN_ACL* is always *null* > {code:java} > public static boolean hasAdministratorAccess( > ServletContext servletContext, HttpServletRequest request, > HttpServletResponse response) throws IOException { > . > . 
> if (servletContext.getAttribute(ADMINS_ACL) != null && > !userHasAdministratorAccess(servletContext, remoteUser)) { > response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "User " > + remoteUser + " is unauthorized to access this page."); >return false; > } > return true; > }{code} > [See line 1196 in > HttpServer|https://github.com/apache/hbase/blob/46cb5dfa226892fd2580f26ce9ce77225bd7e67c/hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java#L1196] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)