[jira] [Commented] (HBASE-23568) Improve Threading of Replication
[ https://issues.apache.org/jira/browse/HBASE-23568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995430#comment-16995430 ] HBase QA commented on HBASE-23568:
--

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 18s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 5m 3s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 22s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}237m 44s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}304m 14s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestAsyncRegionAdminApi2 |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/934 |
| JIRA Issue | HBASE-23568 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux d8f413dfe0af 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-934/out/precommit/personality/provided.sh |
| git revision | master / 2d76457577 |
| Default Java | 1.8.0_181 |
| unit |
[GitHub] [hbase] Apache-HBase commented on issue #934: HBASE-23568: Improve Threading of Replication
Apache-HBase commented on issue #934: HBASE-23568: Improve Threading of Replication URL: https://github.com/apache/hbase/pull/934#issuecomment-565335831

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 30s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 6m 27s | master passed |
| +1 :green_heart: | compile | 1m 9s | master passed |
| +1 :green_heart: | checkstyle | 1m 41s | master passed |
| +1 :green_heart: | shadedjars | 5m 18s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 40s | master passed |
| +0 :ok: | spotbugs | 5m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 4m 59s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 6m 1s | the patch passed |
| +1 :green_heart: | compile | 1m 3s | the patch passed |
| +1 :green_heart: | javac | 1m 3s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 33s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedjars | 5m 22s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | hadoopcheck | 17m 53s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. |
| +1 :green_heart: | javadoc | 0m 36s | the patch passed |
| +1 :green_heart: | findbugs | 4m 36s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 237m 44s | hbase-server in the patch failed. |
| +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 304m 14s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hbase.client.TestAsyncRegionAdminApi2 |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/934 |
| JIRA Issue | HBASE-23568 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux d8f413dfe0af 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-934/out/precommit/personality/provided.sh |
| git revision | master / 2d76457577 |
| Default Java | 1.8.0_181 |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/3/artifact/out/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/3/testReport/ |
| Max. process+thread count | 5304 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/3/console |
| versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] bharathv commented on issue #929: HBASE-23305: Remove dead code in AsyncRegistry
bharathv commented on issue #929: HBASE-23305: Remove dead code in AsyncRegistry URL: https://github.com/apache/hbase/pull/929#issuecomment-565307888

@busbey Any thoughts on my last comment? Just to be clear, I don't mind creating a new jira and updating the PR with the ID. Just want to be sure I'm doing the right thing. Let me know how to proceed.
[GitHub] [hbase] bharathv commented on issue #904: HBASE-23304: RPCs needed for client meta information lookup
bharathv commented on issue #904: HBASE-23304: RPCs needed for client meta information lookup URL: https://github.com/apache/hbase/pull/904#issuecomment-565307437

@ndimiduk There has been some discussion over the design doc [1] but I don't see any major blocking comments from anyone. If that seems ok, can you please merge this patch? (I already addressed the test annotation issue)

[1] https://docs.google.com/document/d/1JAJdM7eUxg5b417f0xWS4NztKCx1f2b6wZrudPtiXF4
[GitHub] [hbase] Apache-HBase commented on issue #938: HBASE-23572 In 'HBCK Report', distinguish between live, dead, and unk…
Apache-HBase commented on issue #938: HBASE-23572 In 'HBCK Report', distinguish between live, dead, and unk… URL: https://github.com/apache/hbase/pull/938#issuecomment-565304636

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ branch-2.2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 35s | branch-2.2 passed |
| +1 :green_heart: | javadoc | 0m 36s | branch-2.2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 10s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | javadoc | 0m 33s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 221m 57s | hbase-server in the patch failed. |
| +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. |
| | | 236m 3s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-938/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/938 |
| Optional Tests | dupname asflicense javac javadoc unit |
| uname | Linux 1aa02f310705 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-938/out/precommit/personality/provided.sh |
| git revision | branch-2.2 / 2e651fbc63 |
| Default Java | 1.8.0_181 |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-938/1/artifact/out/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-938/1/testReport/ |
| Max. process+thread count | 4385 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-938/1/console |
| versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-23286) Improve MTTR: Split WAL to HFile
[ https://issues.apache.org/jira/browse/HBASE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995353#comment-16995353 ] Anoop Sam John commented on HBASE-23286:

Nice numbers [~zghao]. So here, for each of the 20 WAL files, we generate HFiles under region:cf. How, or whether, does the issue of too many tiny HFiles you mentioned in the design doc affect this? Will it generate many compaction requests, or one compaction as a whole for all the tiny HFiles (obviously, if this number is below the max-files-to-compact config)?

> Improve MTTR: Split WAL to HFile
>
> Key: HBASE-23286
> URL: https://issues.apache.org/jira/browse/HBASE-23286
> Project: HBase
> Issue Type: Improvement
> Components: MTTR
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Major
>
> After HBASE-20724, the compaction event marker is not used anymore on failover. So our new proposal is to split WAL to HFile to improve MTTR. It has 3 steps:
> # Read WAL and write HFile to the region's column family's recovered.hfiles directory.
> # Open region.
> # Bulkload the recovered.hfiles for every column family.
> The design doc was attached as a Google doc. Any suggestions are welcome.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
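The three recovery steps quoted above hinge on bucketing WAL edits per region and column family, so that each bucket can be written out as one recovered HFile. Below is a minimal, self-contained sketch of that grouping step in plain Java; this is not HBase's actual WALSplitter code, and the `WalEntry` type and `region/family` bucket keys are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WalSplitSketch {
    // A WAL entry reduced to just the fields the grouping step needs.
    record WalEntry(String region, String family, String row) {}

    // Step 1 of the proposal, conceptually: bucket edits by "region/family".
    // In the real design, each bucket would be flushed as one HFile under
    // <region>/<family>/recovered.hfiles before the region is opened.
    static Map<String, List<WalEntry>> groupByRegionFamily(List<WalEntry> wal) {
        Map<String, List<WalEntry>> buckets = new TreeMap<>();
        for (WalEntry e : wal) {
            buckets.computeIfAbsent(e.region() + "/" + e.family(),
                k -> new ArrayList<>()).add(e);
        }
        return buckets;
    }

    public static void main(String[] args) {
        List<WalEntry> wal = List.of(
            new WalEntry("r1", "C", "row1"),
            new WalEntry("r1", "C", "row2"),
            new WalEntry("r2", "C", "row3"));
        // Two buckets, hence two recovered files: one per region/family.
        System.out.println(groupByRegionFamily(wal).keySet()); // [r1/C, r2/C]
    }
}
```

This framing also suggests why Anoop's tiny-HFiles question matters: one recovered file per WAL per region:cf multiplies quickly (e.g. the 806 files for 50 regions reported later in this thread).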
[jira] [Commented] (HBASE-23286) Improve MTTR: Split WAL to HFile
[ https://issues.apache.org/jira/browse/HBASE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995352#comment-16995352 ] chenxu commented on HBASE-23286:

Nice number
[jira] [Created] (HBASE-23574) TestFixKerberosTicketOrder fails intermittently
Mingliang Liu created HBASE-23574:
-
Summary: TestFixKerberosTicketOrder fails intermittently
Key: HBASE-23574
URL: https://issues.apache.org/jira/browse/HBASE-23574
Project: HBase
Issue Type: Bug
Components: test
Reporter: Mingliang Liu

One example is at: [https://builds.apache.org/job/hadoop-multibranch/job/PR-1757/3/testReport/org.apache.hadoop.security/TestFixKerberosTicketOrder/test/]

Sample stack:
{code:java}
org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: client from keytab /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1757/src/hadoop-common-project/hadoop-common/target/keytab javax.security.auth.login.LoginException: Invalid argument (400) - Cannot find key for type/kvno to decrypt AS REP - AES128 CTS mode with HMAC SHA1-96/1
	at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1972)
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1348)
	at org.apache.hadoop.security.TestFixKerberosTicketOrder.test(TestFixKerberosTicketOrder.java:81)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: javax.security.auth.login.LoginException: Invalid argument (400) - Cannot find key for type/kvno to decrypt AS REP - AES128 CTS mode with HMAC SHA1-96/1
	at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
	at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
	at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
	at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
	at org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.login(UserGroupInformation.java:2051)
	at
{code}
[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
[ https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995347#comment-16995347 ] Mingliang Liu commented on HBASE-22607:
---

[~AK2019] Could you share the command you used to run the test, and the whole error output? Thanks.

> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
>
> Key: HBASE-22607
> URL: https://issues.apache.org/jira/browse/HBASE-22607
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 3.0.0, 2.2.0, 2.0.6
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, HBASE-22607.002.patch, HBASE-22607.addendum.000.patch
>
> In previous runs, the test {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} fails intermittently with a {{java.net.ConnectException: Connection refused}} exception; see builds [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], and [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/].
> One sample exception is:
> {quote}
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
> at com.sun.proxy.$Proxy20.getListing(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
> at org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693)
> at org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448)
> at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429)
> at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410)
> at org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763)
> at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675)
> at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653)
> at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647)
> at org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637)
> at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80)
> {quote}
> It seems that somehow the rootdir filesystem is not LocalFileSystem but HDFS. I have not dug deeper into why this happens, since it fails intermittently and I cannot reproduce it locally. Since this tests the export snapshot tool without a cluster, we can enforce LocalFileSystem; no breaking change.
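The "enforce LocalFileSystem" idea in the description can be expressed as a plain Hadoop configuration override. This is only a sketch of the intent, not the actual addendum patch (which may pin the filesystem in test setup code instead):

```
<!-- Hypothetical test-scope override: pin the default filesystem to
     file:/// (LocalFileSystem) so the no-cluster snapshot mock never
     resolves the rootdir against an HDFS namenode. -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
</property>
```

With the default filesystem pinned, any leaked HDFS address from a prior test cannot redirect the rootdir, which would explain why the `Connection refused` failures are intermittent rather than deterministic.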
[jira] [Commented] (HBASE-23286) Improve MTTR: Split WAL to HFile
[ https://issues.apache.org/jira/browse/HBASE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995343#comment-16995343 ] Guanghao Zhang commented on HBASE-23286:

Bulkload log example:
{code:java}
2019-12-13,11:10:27,754 INFO [RS_OPEN_REGION-regionserver/c3-hadoop-tst-st59:31600-6] org.apache.hadoop.hbase.regionserver.HStore: Successfully loaded store file hdfs://c3tst-perf-ssd/hbase/c3tst-perf-branch2/data/default/ycsb-test/d28aa50b946f00a7fb11e99080964699/C/recovered.hfiles/7657104-c3-hadoop-tst-st57.bj%2C31600%2C1576205516579.1576206539192 into store C (new location: hdfs://c3tst-perf-ssd/hbase/c3tst-perf-branch2/data/default/ycsb-test/d28aa50b946f00a7fb11e99080964699/C/bbd2c5f615fe4c5692e3a12f2498aaa5)
{code}
There are 806 files in total to be loaded for 50 regions.
[jira] [Comment Edited] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
[ https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995339#comment-16995339 ] AK97 edited comment on HBASE-22607 at 12/13/19 4:06 AM:

I have tried the patch HBASE-22607.addendum.000.patch. However, the error remains the same:
at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:110)
at org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:90)
The error seems to be in the TestExportSnapshotNoCluster class. Correct me if I am wrong.

was (Author: ak2019): I have tried the patch HBASE-22607.addendum.000.patch. However, the error remains the same. We need to investigate more on this issue.
[jira] [Resolved] (HBASE-23566) Fix package/packet terminology problem in chaos monkeys
[ https://issues.apache.org/jira/browse/HBASE-23566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey resolved HBASE-23566.
-
Fix Version/s: 2.2.3
               2.3.0
               3.0.0
Resolution: Fixed

> Fix package/packet terminology problem in chaos monkeys
>
> Key: HBASE-23566
> URL: https://issues.apache.org/jira/browse/HBASE-23566
> Project: HBase
> Issue Type: Bug
> Components: integration tests
> Reporter: Szabolcs Bukros
> Assignee: Szabolcs Bukros
> Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> There is a terminology problem in some of the network-issue-related chaos monkey actions. The universally understood technical term for a network packet is "packet", not "package".
[jira] [Updated] (HBASE-23566) Fix package/packet terminology problem in chaos monkeys
[ https://issues.apache.org/jira/browse/HBASE-23566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-23566:
Component/s: integration tests
[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
[ https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995339#comment-16995339 ] AK97 commented on HBASE-22607: -- I have tried the patch HBASE-22607.addendum.000.patch. However, the error remains the same; this issue needs further investigation. > TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() > fails intermittently > - > > Key: HBASE-22607 > URL: https://issues.apache.org/jira/browse/HBASE-22607 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.2.0, 2.0.6 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9 > > Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, > HBASE-22607.002.patch, HBASE-22607.addendum.000.patch > > > In previous runs, test > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails intermittently with {{java.net.ConnectException: Connection refused}} > exception, see build > [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > > [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > and > [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/]. 
> So one sample exception is like: > {quote} > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) > at com.sun.proxy.$Proxy20.getListing(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580) > at > org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675) > at > 
org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80) > {quote} > It seems that somehow the rootdir filesystem is not LocalFileSystem but > HDFS. I have not dug deeper into why this happens, since it fails > intermittently and I cannot reproduce it locally. Since this tests the > export snapshot tool without a cluster, we can enforce LocalFileSystem; > no breaking change. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23566) Fix package/packet terminology problem in chaos monkeys
[ https://issues.apache.org/jira/browse/HBASE-23566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-23566: Priority: Minor (was: Major) > Fix package/packet terminology problem in chaos monkeys > --- > > Key: HBASE-23566 > URL: https://issues.apache.org/jira/browse/HBASE-23566 > Project: HBase > Issue Type: Bug >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Minor > > There is a terminology problem in some of the network issue related chaos > monkey actions. The universally understood technical term for network packet > is packet, not "package". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23566) Fix package/packet terminology problem in chaos monkeys
[ https://issues.apache.org/jira/browse/HBASE-23566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-23566: Issue Type: Bug (was: Improvement) > Fix package/packet terminology problem in chaos monkeys > --- > > Key: HBASE-23566 > URL: https://issues.apache.org/jira/browse/HBASE-23566 > Project: HBase > Issue Type: Bug >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Major > > There is a terminology problem in some of the network issue related chaos > monkey actions. The universally understood technical term for network packet > is packet, not "package". -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23573) can not guarantee consistency.
[ https://issues.apache.org/jira/browse/HBASE-23573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995336#comment-16995336 ] 刘本龙 commented on HBASE-23573: - In the following code, a batch of SyncFutures is collected and processed together, but while syncCount is tallied inside the loop, the waiting client threads are notified early so they can continue working. Consider this scenario: the client thread receives a successful-write response while this batch has not finished processing, the memstore data has not yet been written to an HFile, and the machine then loses power. The memstore data is lost, and the WAL entries that never got an hflush call are lost as well. Because the client thread was notified early and has already continued executing, and assuming it has already responded to the HBase client, the client and the server are now inconsistent: the client considers the write successful, but the server never made it durable. If we instead notify the writing threads only after the whole batch of SyncFutures has synced successfully, we can improve data reliability. Thank you. !image-2019-12-13-11-41-42-200.png|width=730,height=469! > can not guarantee consistency. > -- > > Key: HBASE-23573 > URL: https://issues.apache.org/jira/browse/HBASE-23573 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 2.1.5 >Reporter: 刘本龙 >Priority: Major > Attachments: image-2019-12-13-11-41-42-200.png > > > The WAL sync has not executed, but the waiting thread has already been notified. If the RegionServer halts, e.g. due to a
power failure, the client thinks the write was successful, but the write actually failed: > the data written to the memstore is lost, and the WAL has no data either. > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
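The ordering being debated above can be sketched in plain Java. This is a toy model, not HBase's actual FSHLog/SyncFuture implementation: `SyncFuture`, `hflush()`, and the two notify orderings below are simplified stand-ins, only illustrating why releasing waiters before the batch-wide flush lets a client observe success for a write that was never made durable.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Toy model (NOT HBase's real classes) of the WAL sync/notify ordering issue.
public class Main {
    // Stand-in for HBase's SyncFuture: one pending write waiting on a WAL sync.
    static final class SyncFuture {
        final CompletableFuture<Void> done = new CompletableFuture<>();
    }

    static boolean flushed = false;

    // Stand-in for FSDataOutputStream.hflush(): the point where data becomes durable.
    static void hflush() { flushed = true; }

    // Risky ordering described in the comment: each waiter is released inside the
    // counting loop, so a client can observe "success" before hflush() has run.
    static int notifyEarly(List<SyncFuture> batch) {
        int syncCount = 0;
        for (SyncFuture f : batch) {
            f.done.complete(null); // client may return to its caller here...
            syncCount++;
        }
        hflush();                  // ...but durability only happens here
        return syncCount;
    }

    // Suggested ordering: hflush the whole batch first, then release all waiters.
    static int notifyAfterSync(List<SyncFuture> batch) {
        hflush();                  // make the entire batch durable first
        int syncCount = 0;
        for (SyncFuture f : batch) {
            f.done.complete(null); // only now may clients observe success
            syncCount++;
        }
        return syncCount;
    }

    public static void main(String[] args) {
        List<SyncFuture> batch =
            List.of(new SyncFuture(), new SyncFuture(), new SyncFuture());
        int released = notifyAfterSync(batch);
        System.out.println(released + " waiters released, flushed=" + flushed);
    }
}
```

In the `notifyEarly` variant, a power loss between the loop and `hflush()` is exactly the window the commenter describes.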
[jira] [Commented] (HBASE-23286) Improve MTTR: Split WAL to HFile
[ https://issues.apache.org/jira/browse/HBASE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995332#comment-16995332 ] Guanghao Zhang commented on HBASE-23286: Tested the patch in a small cluster with 2 regionservers: with a 10G memstore, killed one regionserver. || || Split WAL Size||Split WAL Number||Split WAL Cost||Assign Region Number||Assign Region Cost||ServerCrashProcedure Cost|| ||hbase.wal.split.to.hfile=false||10.0 G||21||76121ms||50 ||min 15.335 sec max 44.688 sec || 2 mins, 1.06 sec|| ||hbase.wal.split.to.hfile=true||9.5G||20||55895ms||50 ||min 1.568 sec max 4.1870 sec ||1 mins, 4.896 sec || > Improve MTTR: Split WAL to HFile > > > Key: HBASE-23286 > URL: https://issues.apache.org/jira/browse/HBASE-23286 > Project: HBase > Issue Type: Improvement > Components: MTTR >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > > After HBASE-20724, the compaction event marker is not used anymore during > failover. So our new proposal is to split the WAL to HFiles to improve MTTR. It has 3 > steps: > # Read the WAL and write HFiles to each region's column family's recovered.hfiles > directory. > # Open the region. > # Bulkload the recovered.hfiles for every column family. > The design doc is attached as a Google doc. Any suggestions are welcome. -- This message was sent by Atlassian Jira (v8.3.4#803005)
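The three steps quoted above can be sketched as a minimal model. This is an illustration only, not the real WALSplitter/bulkload code: the `WalEdit` class and the `<region>/<family>/recovered.hfiles` path layout are assumptions made for the example, showing step 1's grouping of replayed edits into one bucket per future HFile.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of step 1 of the proposal: bucket replayed WAL edits by
// region and column family, one bucket per HFile to be written under that
// family's recovered.hfiles directory.
public class Main {
    static final class WalEdit {
        final String region, family, row;
        WalEdit(String region, String family, String row) {
            this.region = region; this.family = family; this.row = row;
        }
    }

    // Maps an illustrative "<region>/<family>/recovered.hfiles" path to the
    // edits that would be written into the HFile created there.
    static Map<String, List<WalEdit>> bucketByRegionAndFamily(List<WalEdit> walEdits) {
        Map<String, List<WalEdit>> buckets = new LinkedHashMap<>();
        for (WalEdit e : walEdits) {
            String dir = e.region + "/" + e.family + "/recovered.hfiles";
            buckets.computeIfAbsent(dir, k -> new ArrayList<>()).add(e);
        }
        return buckets;
    }

    public static void main(String[] args) {
        List<WalEdit> wal = List.of(
            new WalEdit("region1", "cf1", "row1"),
            new WalEdit("region1", "cf2", "row2"),
            new WalEdit("region2", "cf1", "row3"),
            new WalEdit("region1", "cf1", "row4"));
        // Steps 2 and 3 (open the region, bulkload each bucket) would follow.
        System.out.println(bucketByRegionAndFamily(wal).keySet());
    }
}
```

The benchmark table suggests why this helps: replaying grouped edits as bulkloaded HFiles skips the per-edit memstore write during region assignment.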
[jira] [Updated] (HBASE-23573) can not guarantee consistency.
[ https://issues.apache.org/jira/browse/HBASE-23573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] 刘本龙 updated HBASE-23573: Attachment: image-2019-12-13-11-41-42-200.png > can not guarantee consistency. > -- > > Key: HBASE-23573 > URL: https://issues.apache.org/jira/browse/HBASE-23573 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 2.1.5 >Reporter: 刘本龙 >Priority: Major > Attachments: image-2019-12-13-11-41-42-200.png > > > The WAL sync has not executed, but the waiting thread has already been notified. If the RegionServer halts, e.g. due to a > power failure, the client thinks the write was successful, but the write actually failed: > the data written to the memstore is lost, and the WAL has no data either. > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase-connectors] busbey commented on issue #46: Bump checkstyle from 8.11 to 8.18
busbey commented on issue #46: Bump checkstyle from 8.11 to 8.18 URL: https://github.com/apache/hbase-connectors/pull/46#issuecomment-565286902 Yes a jira please so we can have a release note. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23573) can not guarantee consistency.
[ https://issues.apache.org/jira/browse/HBASE-23573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995327#comment-16995327 ] Sean Busbey commented on HBASE-23573: - The client isn't supposed to be notified until after the WAL sync. Where are you seeing it do otherwise? > can not guarantee consistency. > -- > > Key: HBASE-23573 > URL: https://issues.apache.org/jira/browse/HBASE-23573 > Project: HBase > Issue Type: Bug > Components: wal >Affects Versions: 2.1.5 >Reporter: 刘本龙 >Priority: Major > > The WAL sync has not executed, but the waiting thread has already been notified. If the RegionServer halts, e.g. due to a > power failure, the client thinks the write was successful, but the write actually failed: > the data written to the memstore is lost, and the WAL has no data either. > > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23557) Adopt Github PR as the accepted method of code review
[ https://issues.apache.org/jira/browse/HBASE-23557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995325#comment-16995325 ] Sean Busbey commented on HBASE-23557: - I'd like to keep the option for reviews on jira for those who don't want to make a GitHub account. > Adopt Github PR as the accepted method of code review > - > > Key: HBASE-23557 > URL: https://issues.apache.org/jira/browse/HBASE-23557 > Project: HBase > Issue Type: Task > Components: documentation, tooling >Reporter: Nick Dimiduk >Priority: Major > > Per this [discuss > thread|https://lists.apache.org/thread.html/d0f86b8380f958fb6ba79b80c774c92c9033d6cd64a099301c4f5ed4%40%3Cdev.hbase.apache.org%3E], > lets update our docs to mention Github PRs are the place where code reviews > are conducted. Let's also remove old tooling from {{dev-support}} pertaining > to review board and whatever else we find there. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23573) can not guarantee consistency.
刘本龙 created HBASE-23573: --- Summary: can not guarantee consistency. Key: HBASE-23573 URL: https://issues.apache.org/jira/browse/HBASE-23573 Project: HBase Issue Type: Bug Components: wal Affects Versions: 2.1.5 Reporter: 刘本龙 The WAL sync has not executed, but the waiting thread has already been notified. If the RegionServer halts, e.g. due to a power failure, the client thinks the write was successful, but the write actually failed: the data written to the memstore is lost, and the WAL has no data either. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] belugabehr commented on issue #912: HBASE-23380: General Cleanup of FSUtil
belugabehr commented on issue #912: HBASE-23380: General Cleanup of FSUtil URL: https://github.com/apache/hbase/pull/912#issuecomment-565280594 @busbey Should be good to go here. Thanks again for the reviews. Please consider for inclusion into the project. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23568) Improve Threading of Replication
[ https://issues.apache.org/jira/browse/HBASE-23568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995295#comment-16995295 ] HBase QA commented on HBASE-23568: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 21s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 5m 20s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 18s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 23s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 18m 4s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}315m 0s{color} | {color:red} hbase-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}381m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.replication.TestReplicationSmallTests | | | hadoop.hbase.replication.TestReplicationKillSlaveRS | | | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory | | | hadoop.hbase.replication.TestReplicationSmallTestsSync | | | hadoop.hbase.client.TestAsyncTableAdminApi | | | hadoop.hbase.client.TestFromClientSideWithCoprocessor | | | hadoop.hbase.client.TestFromClientSide | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/934 | | JIRA Issue | HBASE-23568 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5aff247275c6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019
[GitHub] [hbase] Apache-HBase commented on issue #934: HBASE-23568: Improve Threading of Replication
Apache-HBase commented on issue #934: HBASE-23568: Improve Threading of Replication URL: https://github.com/apache/hbase/pull/934#issuecomment-565274377 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 19s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 6m 17s | master passed | | +1 :green_heart: | compile | 1m 10s | master passed | | +1 :green_heart: | checkstyle | 1m 38s | master passed | | +1 :green_heart: | shadedjars | 5m 21s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | master passed | | +0 :ok: | spotbugs | 5m 20s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 5m 18s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 43s | the patch passed | | +1 :green_heart: | compile | 1m 3s | the patch passed | | +1 :green_heart: | javac | 1m 3s | the patch passed | | +1 :green_heart: | checkstyle | 1m 37s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 5m 23s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 18m 4s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 0m 38s | the patch passed | | +1 :green_heart: | findbugs | 4m 52s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 315m 0s | hbase-server in the patch failed. 
| | +1 :green_heart: | asflicense | 0m 37s | The patch does not generate ASF License warnings. | | | | 381m 56s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hbase.replication.TestReplicationSmallTests | | | hadoop.hbase.replication.TestReplicationKillSlaveRS | | | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory | | | hadoop.hbase.replication.TestReplicationSmallTestsSync | | | hadoop.hbase.client.TestAsyncTableAdminApi | | | hadoop.hbase.client.TestFromClientSideWithCoprocessor | | | hadoop.hbase.client.TestFromClientSide | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/934 | | JIRA Issue | HBASE-23568 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5aff247275c6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-934/out/precommit/personality/provided.sh | | git revision | master / 85a081925b | | Default Java | 1.8.0_181 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/2/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/2/testReport/ | | Max. process+thread count | 4982 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-934/2/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23555) TestQuotaThrottle is broken
[ https://issues.apache.org/jira/browse/HBASE-23555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995288#comment-16995288 ] Hudson commented on HBASE-23555: Results for branch branch-2 [build #2383 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2383/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2383//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2383//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2383//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > TestQuotaThrottle is broken > --- > > Key: HBASE-23555 > URL: https://issues.apache.org/jira/browse/HBASE-23555 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0, 2.3.0 > > > TestQuotaThrottle is broken now. And it is anotated as Ignore because it's > flakey so the Jenkins test can not report it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on issue #936: HBASE-17115 Define UI admins via an ACL
Apache-HBase commented on issue #936: HBASE-17115 Define UI admins via an ACL URL: https://github.com/apache/hbase/pull/936#issuecomment-565262284 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 2s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 39s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 5m 47s | master passed | | +1 :green_heart: | compile | 1m 27s | master passed | | +1 :green_heart: | checkstyle | 1m 34s | master passed | | +1 :green_heart: | shadedjars | 5m 46s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 0s | master passed | | +0 :ok: | spotbugs | 5m 18s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 5m 58s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 6m 21s | the patch passed | | +1 :green_heart: | compile | 1m 33s | the patch passed | | +1 :green_heart: | javac | 1m 33s | the patch passed | | -1 :x: | checkstyle | 0m 15s | hbase-http: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | -1 :x: | checkstyle | 1m 36s | hbase-server: The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 5m 37s | patch has no errors when building our shaded downstream artifacts. 
| | +1 :green_heart: | hadoopcheck | 20m 8s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | +1 :green_heart: | javadoc | 1m 0s | the patch passed | | +1 :green_heart: | findbugs | 6m 4s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 1m 20s | hbase-http in the patch failed. | | +1 :green_heart: | unit | 161m 11s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 1m 6s | The patch does not generate ASF License warnings. | | | | 239m 0s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hbase.http.TestHttpServer | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/936 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux da2cc78578f5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-936/out/precommit/personality/provided.sh | | git revision | master / 85a081925b | | Default Java | 1.8.0_181 | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/1/artifact/out/diff-checkstyle-hbase-http.txt | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/1/artifact/out/diff-checkstyle-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/1/artifact/out/patch-unit-hbase-http.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/1/testReport/ | | Max. process+thread count | 4959 (vs. ulimit of 1) | | modules | C: hbase-http hbase-server U: . 
| | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-23572) In 'HBCK Report', distinguish between live, dead, and unknown servers
[ https://issues.apache.org/jira/browse/HBASE-23572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-23572: -- Summary: In 'HBCK Report', distinguish between live, dead, and unknown servers (was: In 'HBCK Report', distringush between live, dead, and unknown servers) > In 'HBCK Report', distinguish between live, dead, and unknown servers > - > > Key: HBASE-23572 > URL: https://issues.apache.org/jira/browse/HBASE-23572 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > > Debugging, when viewing 'HBCK Report' sections, it helps if we know if > referenced server is online, dead, or unknown. > Add ornamentation so that when we mention a servername in 'HBCK Report', if > live, then show the server as link (to live server), if dead, show it in > italics, and if unknown, show it plain text. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23572) In 'HBCK Report', distinguish between live, dead, and unknown servers
[ https://issues.apache.org/jira/browse/HBASE-23572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-23572: -- Priority: Trivial (was: Major) > In 'HBCK Report', distinguish between live, dead, and unknown servers > - > > Key: HBASE-23572 > URL: https://issues.apache.org/jira/browse/HBASE-23572 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Trivial > > Debugging, when viewing 'HBCK Report' sections, it helps if we know if > referenced server is online, dead, or unknown. > Add ornamentation so that when we mention a servername in 'HBCK Report', if > live, then show the server as link (to live server), if dead, show it in > italics, and if unknown, show it plain text. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack opened a new pull request #938: HBASE-23572 In 'HBCK Report', distringush between live, dead, and unk…
saintstack opened a new pull request #938: HBASE-23572 In 'HBCK Report', distringush between live, dead, and unk… URL: https://github.com/apache/hbase/pull/938 …nown servers Give the 'HBCK Report' an edit while we're in here filling in a bit more help text. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (HBASE-23572) In 'HBCK Report', distringush between live, dead, and unknown servers
Michael Stack created HBASE-23572: - Summary: In 'HBCK Report', distringush between live, dead, and unknown servers Key: HBASE-23572 URL: https://issues.apache.org/jira/browse/HBASE-23572 Project: HBase Issue Type: Bug Reporter: Michael Stack Debugging, when viewing 'HBCK Report' sections, it helps if we know if referenced server is online, dead, or unknown. Add ornamentation so that when we mention a servername in 'HBCK Report', if live, then show the server as link (to live server), if dead, show it in italics, and if unknown, show it plain text. -- This message was sent by Atlassian Jira (v8.3.4#803005)
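The ornamentation described above (link for live, italics for dead, plain text for unknown) could be sketched roughly as below. This is an illustrative stand-alone example, not HBase's actual template code; the class name, enum, and the `rs-status` URL path are hypothetical.

```java
// Hypothetical sketch of the proposed 'HBCK Report' ornamentation:
// render a server name as a link when live, italics when dead,
// and plain text when unknown.
public class ServerNameRenderer {
  enum ServerState { LIVE, DEAD, UNKNOWN }

  static String render(String serverName, ServerState state) {
    switch (state) {
      case LIVE:
        // Link through to the live server's status page (path is illustrative).
        return "<a href=\"http://" + serverName + "/rs-status\">" + serverName + "</a>";
      case DEAD:
        // Italics signal a server known to be dead.
        return "<i>" + serverName + "</i>";
      default:
        // Unknown servers get no ornamentation.
        return serverName;
    }
  }
}
```

A report page would then call `render()` for every server name it mentions, so the reader can tell a server's state at a glance.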
[jira] [Comment Edited] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
[ https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995257#comment-16995257 ] Mingliang Liu edited comment on HBASE-22607 at 12/13/19 1:16 AM: - [~AK2019] That is interesting. Can you reproduce this consistently? If so, the problem might be easier to debug. I cannot debug it here because I never see the failure across multiple runs.
{code}
git checkout rel/2.2.0
commit=$(git log master | grep -B 5 HBASE-22607 | grep commit | awk '{print $2}')
git cherry-pick $commit
mvn clean package
mvn test -Dtest=TestExportSnapshotNoCluster
{code}
I checked the line number, and it is not clear which line errors out in {{testSnapshotWithRefsExportFileSystemState}}. I guess it's at line 216 of {{TestExportSnapshot}}.
{code:title=TestExportSnapshot.java:216}
copyDir = copyDir.makeQualified(fs);
{code}
If so, the {{fs}} is created using a new Configuration which is NOT patched as in {{TestExportSnapshotNoCluster}}. Could you try the addendum diff [^HBASE-22607.addendum.000.patch] ? Hopefully it will fix this. Otherwise we may have to debug further, which perhaps is orthogonal to this patch.

was (Author: liuml07): [~AK2019] That is interesting. Can you reproduce this consistently? If so, the problem might be easier to debug. I can not debug here because I never see this with multiple runs.
{code}
git checkout rel/2.2.0
commit=$(git log master | grep -B 5 HBASE-22607 | grep commit | awk '{print $2}')
git cherry-pick $commit
mvn clean package
mvn test -Dtest=TestExportSnapshotNoCluster
{code}
So I check the line number and it is not very clear which line error out in {{testSnapshotWithRefsExportFileSystemState(}}. I guess it's in LoC 216 of {{TestExportSnapshot}}. If so, the fs is created using new Configuration which is patched as in {{TestExportSnapshotNoCluster}}.
{code:title=TestExportSnapshot.java:216}
copyDir = copyDir.makeQualified(fs);
{code}
Could you try the addendum diff [^HBASE-22607.addendum.000.patch] ?
Hopefully it will fix this. Otherwise we may have to debug further, which perhaps is orthogonal to this patch. > TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() > fails intermittently > - > > Key: HBASE-22607 > URL: https://issues.apache.org/jira/browse/HBASE-22607 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.2.0, 2.0.6 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9 > > Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, > HBASE-22607.002.patch, HBASE-22607.addendum.000.patch > > > In previous runs, test > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails intermittently with {{java.net.ConnectException: Connection refused}} > exception, see build > [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > > [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > and > [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/]. 
> So one sample exception is like: > {quote} > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) > at com.sun.proxy.$Proxy20.getListing(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537) > at
[jira] [Commented] (HBASE-23380) General Cleanup of FSUtil
[ https://issues.apache.org/jira/browse/HBASE-23380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995258#comment-16995258 ] HBase QA commented on HBASE-23380: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 1s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 40s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 5m 22s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 19s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 26s{color} | {color:green} hbase-server: The patch generated 0 new + 66 unchanged - 8 fixed = 66 total (was 74) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 41s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 16m 43s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}157m 30s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}219m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/912 | | JIRA Issue | HBASE-23380 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4dd2cd85eed6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-912/out/precommit/personality/provided.sh | | git revision | master / 85a081925b | | Default Java |
[GitHub] [hbase] Apache-HBase commented on issue #912: HBASE-23380: General Cleanup of FSUtil
Apache-HBase commented on issue #912: HBASE-23380: General Cleanup of FSUtil
URL: https://github.com/apache/hbase/pull/912#issuecomment-565258845

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 2m 1s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 26s | master passed |
| +1 :green_heart: | compile | 0m 56s | master passed |
| +1 :green_heart: | checkstyle | 1m 20s | master passed |
| +1 :green_heart: | shadedjars | 4m 40s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 37s | master passed |
| +0 :ok: | spotbugs | 5m 22s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 19s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 25s | the patch passed |
| +1 :green_heart: | compile | 1m 0s | the patch passed |
| +1 :green_heart: | javac | 1m 0s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 26s | hbase-server: The patch generated 0 new + 66 unchanged - 8 fixed = 66 total (was 74) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedjars | 4m 41s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | hadoopcheck | 16m 43s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. |
| +1 :green_heart: | javadoc | 0m 36s | the patch passed |
| +1 :green_heart: | findbugs | 4m 25s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 157m 30s | hbase-server in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
| | | 219m 46s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/912 |
| JIRA Issue | HBASE-23380 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 4dd2cd85eed6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-912/out/precommit/personality/provided.sh |
| git revision | master / 85a081925b |
| Default Java | 1.8.0_181 |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/5/testReport/ |
| Max. process+thread count | 5139 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/5/console |
| versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
[ https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995257#comment-16995257 ] Mingliang Liu commented on HBASE-22607: --- [~AK2019] That is interesting. Can you reproduce this consistently? If so, the problem might be easier to debug. I cannot debug it here because I never see the failure across multiple runs.
{code}
git checkout rel/2.2.0
commit=$(git log master | grep -B 5 HBASE-22607 | grep commit | awk '{print $2}')
git cherry-pick $commit
mvn clean package
mvn test -Dtest=TestExportSnapshotNoCluster
{code}
I checked the line number, and it is not clear which line errors out in {{testSnapshotWithRefsExportFileSystemState}}. I guess it's at line 216 of {{TestExportSnapshot}}. If so, the {{fs}} is created using a new Configuration which is NOT patched as in {{TestExportSnapshotNoCluster}}.
{code:title=TestExportSnapshot.java:216}
copyDir = copyDir.makeQualified(fs);
{code}
Could you try the addendum diff [^HBASE-22607.addendum.000.patch] ? Hopefully it will fix this. Otherwise we may have to debug further, which perhaps is orthogonal to this patch.
> TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() > fails intermittently > - > > Key: HBASE-22607 > URL: https://issues.apache.org/jira/browse/HBASE-22607 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.2.0, 2.0.6 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9 > > Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, > HBASE-22607.002.patch, HBASE-22607.addendum.000.patch > > > In previous runs, test > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails intermittently with {{java.net.ConnectException: Connection refused}} > exception, see build > [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > > [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > and > [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/]. 
> So one sample exception is like: > {quote} > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) > at com.sun.proxy.$Proxy20.getListing(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580) > at > org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675) > at > 
org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647) > at >
[jira] [Commented] (HBASE-22749) Distributed MOB compactions
[ https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995255#comment-16995255 ] HBase QA commented on HBASE-22749: --
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} https://github.com/apache/hbase/pull/921 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. {color} |
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hbase/pull/921 |
| JIRA Issue | HBASE-22749 |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-921/2/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
This message was automatically generated.
> Distributed MOB compactions
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
> Issue Type: New Feature
> Components: mob
> Reporter: Vladimir Rodionov
> Assignee: Vladimir Rodionov
> Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, HBASE-22749-master-v3.patch, HBASE-22749-master-v4.patch, HBase-MOB-2.0-v3.0.pdf
>
> There are several drawbacks in the original MOB 1.0 (Moderate Object Storage) implementation, which can limit the adoption of the MOB feature:
> # MOB compactions are executed in a Master as a chore, which limits scalability because all I/O goes through a single HBase Master server.
> # Yarn/Mapreduce framework is required to run MOB compactions in a scalable way, but this won’t work in a stand-alone HBase cluster.
> # Two separate compactors for MOB and for regular store files and their interactions can result in a data loss (see HBASE-22075)
> The design goals for MOB 2.0 were to provide 100% MOB 1.0 - compatible implementation, which is free of the above drawbacks and can be used as a drop in replacement in existing MOB deployments. So, these are design goals of a MOB 2.0:
> # Make MOB compactions scalable without relying on Yarn/Mapreduce framework
> # Provide unified compactor for both MOB and regular store files
> # Make it more robust especially w.r.t. to data losses.
> # Simplify and reduce the overall MOB code.
> # Provide 100% compatible implementation with MOB 1.0.
> # No migration of data should be required between MOB 1.0 and MOB 2.0 - just software upgrade.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-22607) TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() fails intermittently
[ https://issues.apache.org/jira/browse/HBASE-22607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HBASE-22607: -- Attachment: HBASE-22607.addendum.000.patch > TestExportSnapshotNoCluster::testSnapshotWithRefsExportFileSystemState() > fails intermittently > - > > Key: HBASE-22607 > URL: https://issues.apache.org/jira/browse/HBASE-22607 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.2.0, 2.0.6 >Reporter: Mingliang Liu >Assignee: Mingliang Liu >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9 > > Attachments: HBASE-22607.000.patch, HBASE-22607.001.patch, > HBASE-22607.002.patch, HBASE-22607.addendum.000.patch > > > In previous runs, test > {{TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState}} > fails intermittently with {{java.net.ConnectException: Connection refused}} > exception, see build > [510|https://builds.apache.org/job/PreCommit-HBASE-Build/510/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > > [545|https://builds.apache.org/job/PreCommit-HBASE-Build/545/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/], > and > [556|https://builds.apache.org/job/PreCommit-HBASE-Build/556/testReport/org.apache.hadoop.hbase.snapshot/TestExportSnapshotNoCluster/testSnapshotWithRefsExportFileSystemState/]. 
> So one sample exception is like: > {quote} > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) > at com.sun.proxy.$Proxy20.getListing(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1630) > at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1614) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:900) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:114) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:964) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:961) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580) > at > org.apache.hadoop.hbase.util.CommonFSUtils.listStatus(CommonFSUtils.java:693) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getCurrentTableInfoStatus(FSTableDescriptors.java:448) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:429) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:410) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.createTableDescriptorForTableDirectory(FSTableDescriptors.java:763) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createTable(SnapshotTestingUtils.java:675) > at > 
org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:653) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshot(SnapshotTestingUtils.java:647) > at > org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils$SnapshotMock.createSnapshotV2(SnapshotTestingUtils.java:637) > at > org.apache.hadoop.hbase.snapshot.TestExportSnapshotNoCluster.testSnapshotWithRefsExportFileSystemState(TestExportSnapshotNoCluster.java:80) > {quote} > This seems that, somehow the rootdir filesystem is not LocalFileSystem, but > on HDFS. I have not dig deeper why this happens since it's failing > intermittently and I can not reproduce it locally. Since this is testing > export snapshot tool without cluster, we can enforce it using > LocalFileSystem; no breaking change. -- This message was sent by Atlassian Jira (v8.3.4#803005)
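The suspected failure mode above can be illustrated with a small stand-alone sketch (assumed values, not the actual test code): `Path.makeQualified(fs)` resolves a bare path against the filesystem's URI, so an `fs` built from an unpatched Configuration whose default filesystem points at HDFS silently qualifies the copy directory as an HDFS path. Plain `java.net.URI` mimics the qualification step:

```java
import java.net.URI;

// Illustrative stand-in for Path.makeQualified(fs): an absolute path is
// qualified against whatever default filesystem URI the configuration
// carries, which is why patching the test conf to LocalFileSystem matters.
public class QualifyDemo {
    static String qualify(String path, String defaultFs) {
        // An absolute path reference replaces the path component of the base
        // URI while keeping its scheme and authority.
        return URI.create(defaultFs).resolve(path).toString();
    }

    public static void main(String[] args) {
        // Conf patched to the local filesystem: path stays local.
        System.out.println(qualify("/tmp/export-test", "file:///"));
        // Unpatched conf defaulting to HDFS: the copy dir becomes an HDFS
        // path, and listing it requires a reachable NameNode.
        System.out.println(qualify("/tmp/export-test", "hdfs://namenode.example:8020/"));
    }
}
```

This mirrors why enforcing LocalFileSystem in the no-cluster test removes the intermittent `Connection refused`: qualification can no longer resolve to an HDFS URI.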
[GitHub] [hbase] Apache-HBase commented on issue #921: HBASE-22749: Distributed MOB compactions
Apache-HBase commented on issue #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#issuecomment-565256674

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 0s | Docker mode activated. |
| -1 :x: | patch | 0m 7s | https://github.com/apache/hbase/pull/921 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. |

| Subsystem | Report/Notes |
|--:|:-|
| GITHUB PR | https://github.com/apache/hbase/pull/921 |
| JIRA Issue | HBASE-22749 |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-921/2/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache-HBase commented on issue #935: HBASE-23066 Allow cache on write during compactions when prefetching …
Apache-HBase commented on issue #935: HBASE-23066 Allow cache on write during compactions when prefetching …
URL: https://github.com/apache/hbase/pull/935#issuecomment-565254667

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 18s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 6m 11s | master passed |
| +1 :green_heart: | compile | 1m 3s | master passed |
| +1 :green_heart: | checkstyle | 1m 34s | master passed |
| +1 :green_heart: | shadedjars | 5m 11s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 38s | master passed |
| +0 :ok: | spotbugs | 5m 16s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 14s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 7m 44s | the patch passed |
| +1 :green_heart: | compile | 1m 23s | the patch passed |
| +1 :green_heart: | javac | 1m 23s | the patch passed |
| -1 :x: | checkstyle | 1m 45s | hbase-server: The patch generated 3 new + 46 unchanged - 0 fixed = 49 total (was 46) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedjars | 6m 27s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | hadoopcheck | 23m 28s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. |
| +1 :green_heart: | javadoc | 0m 47s | the patch passed |
| +1 :green_heart: | findbugs | 5m 40s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 313m 55s | hbase-server in the patch failed. |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 391m 34s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hbase.replication.TestReplicationKillSlaveRS |
| | hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs |
| | hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated |
| | hadoop.hbase.replication.TestReplicationSmallTests |
| | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
| | hadoop.hbase.master.procedure.TestSCPWithReplicas |
| | hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-935/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/935 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 7d5379eccb6d 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 10:36:11 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-935/out/precommit/personality/provided.sh |
| git revision | master / 85a081925b |
| Default Java | 1.8.0_181 |
| checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-935/1/artifact/out/diff-checkstyle-hbase-server.txt |
| unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-935/1/artifact/out/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-935/1/testReport/ |
| Max. process+thread count | 4966 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-935/1/console |
| versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Resolved] (HBASE-23570) Point users to the async-profiler home page if diagrams are coming up blank
[ https://issues.apache.org/jira/browse/HBASE-23570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-23570. --- Fix Version/s: 3.0.0 Resolution: Fixed Merged these one-liners to master. > Point users to the async-profiler home page if diagrams are coming up blank > --- > > Key: HBASE-23570 > URL: https://issues.apache.org/jira/browse/HBASE-23570 > Project: HBase > Issue Type: Bug > Components: profiler >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Trivial > Fix For: 3.0.0 > > > Add minor note on servlet and to doc pointing folks to async-profiler home > page if diagrams are coming up blank -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack merged pull request #937: HBASE-23570 Point users to the async-profiler home page if diagrams are coming up blank
saintstack merged pull request #937: HBASE-23570 Point users to the async-profiler home page if diagrams are coming up blank URL: https://github.com/apache/hbase/pull/937
[GitHub] [hbase] saintstack opened a new pull request #937: HBASE-23570 Point users to the async-profiler home page if diagrams are coming up blank
saintstack opened a new pull request #937: HBASE-23570 Point users to the async-profiler home page if diagrams are coming up blank URL: https://github.com/apache/hbase/pull/937
[jira] [Commented] (HBASE-23381) Improve Logging in HBase Commons Package
[ https://issues.apache.org/jira/browse/HBASE-23381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995233#comment-16995233 ] HBase QA commented on HBASE-23381: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange} 0m 0s{color} | {color:orange} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 2s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 51s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s{color} | {color:red} hbase-common: The patch generated 9 new + 145 unchanged - 21 fixed = 154 total (was 166) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 1s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 2s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 58s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}155m 30s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}209m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/913 | | JIRA Issue | HBASE-23381 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux dcf0be332d35 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[GitHub] [hbase] Apache-HBase commented on issue #913: HBASE-23381: Improve Logging in HBase Commons Package
Apache-HBase commented on issue #913: HBASE-23381: Improve Logging in HBase Commons Package URL: https://github.com/apache/hbase/pull/913#issuecomment-565247698 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 19s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 59s | master passed | | +1 :green_heart: | compile | 0m 22s | master passed | | +1 :green_heart: | checkstyle | 0m 31s | master passed | | +1 :green_heart: | shadedjars | 5m 2s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | master passed | | +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 48s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 32s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | -1 :x: | checkstyle | 0m 28s | hbase-common: The patch generated 9 new + 145 unchanged - 21 fixed = 154 total (was 166) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 5m 1s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 17m 2s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. 
| | +1 :green_heart: | javadoc | 0m 20s | the patch passed | | +1 :green_heart: | findbugs | 0m 55s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 2m 58s | hbase-common in the patch passed. | | +1 :green_heart: | unit | 155m 30s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. | | | | 209m 52s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/913 | | JIRA Issue | HBASE-23381 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux dcf0be332d35 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-913/out/precommit/personality/provided.sh | | git revision | master / 85a081925b | | Default Java | 1.8.0_181 | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/5/artifact/out/diff-checkstyle-hbase-common.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/5/testReport/ | | Max. process+thread count | 4696 (vs. ulimit of 1) | | modules | C: hbase-common U: hbase-common | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/5/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hbase-operator-tools] asf-ci commented on issue #47: HBASE-23180 hbck2 testing tool
asf-ci commented on issue #47: HBASE-23180 hbck2 testing tool URL: https://github.com/apache/hbase-operator-tools/pull/47#issuecomment-565247383 Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/PreCommit-HBASE-OPERATOR-TOOLS-Build/125/
[GitHub] [hbase-operator-tools] jatsakthi opened a new pull request #47: HBASE-23180 hbck2 testing tool
jatsakthi opened a new pull request #47: HBASE-23180 hbck2 testing tool URL: https://github.com/apache/hbase-operator-tools/pull/47 This adds a new tool that spins up an HBase-on-Hadoop minicluster and mimics the actions of hbck2 to verify its functionality.
[jira] [Commented] (HBASE-23180) Create a nightly build to verify hbck2
[ https://issues.apache.org/jira/browse/HBASE-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995224#comment-16995224 ] Sakthi commented on HBASE-23180: Also, I want to shift the focus of this Jira towards a standalone tool to start with, rather than a nightly job/Jenkins. I can add a follow-on for that once this goes in. > Create a nightly build to verify hbck2 > -- > > Key: HBASE-23180 > URL: https://issues.apache.org/jira/browse/HBASE-23180 > Project: HBase > Issue Type: Task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Labels: hbck2 > Fix For: hbase-operator-tools-1.1.0 > > > Quoting myself from the discussion thread from the dev mailing list "*How do > we test hbck2?*" - > "Planning to start working on a nightly build that can spin up a > mini-cluster, load some data into it, do some actions to bring the cluster > into an undesirable state that hbck2 can fix and then invoke the hbck2 to see > if things work well. > > Plan is to start small with one of the hbck2 commands and remaining ones can > be added incrementally. As of now I would like to start with making sure the > job uses one of the hbase versions (probably 2.1.x/2.2.x), we can discuss > about the need to run the job against all the present hbase versions/taking > in a bunch of hbase versions as input and running against them/or just a > single version. > > The job script would be located in our operator-tools repo." -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23180) Create a nightly build to verify hbck2
[ https://issues.apache.org/jira/browse/HBASE-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995223#comment-16995223 ] Sakthi commented on HBASE-23180: Update: I have been able to build a standalone tool that spins up an HBase-on-Hadoop minicluster and mimics a few of hbck2's functionalities/usages. A PR will be up soon. > Create a nightly build to verify hbck2 -- This message was sent by Atlassian Jira (v8.3.4#803005)
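The workflow Sakthi describes (spin up a minicluster, load data, induce an undesirable state, run hbck2, verify the fix) can be sketched as an ordered list of steps. The skeleton below is purely illustrative: `Hbck2Harness` and its step strings are invented names, and each lambda stands in for real minicluster and hbck2 calls from the actual hbase-operator-tools patch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical skeleton of the hbck2 testing workflow; none of these names
// come from the actual hbase-operator-tools patch.
public class Hbck2Harness {
    // One step of the harness; the log records what ran, in order.
    interface Step { void run(List<String> log); }

    // Runs every step in sequence and returns the ordered log.
    static List<String> runAll(List<Step> steps) {
        List<String> log = new ArrayList<>();
        for (Step s : steps) {
            s.run(log);
        }
        return log;
    }

    public static void main(String[] args) {
        // Each lambda is a stand-in for a real minicluster / hbck2 call.
        List<String> log = runAll(List.of(
            l -> l.add("start minicluster"),
            l -> l.add("load data"),
            l -> l.add("induce undesirable state"),
            l -> l.add("run hbck2 fix"),
            l -> l.add("verify cluster healthy"),
            l -> l.add("shutdown minicluster")));
        log.forEach(System.out::println);
    }
}
```

Keeping the steps as data rather than one monolithic method makes it easy to add the remaining hbck2 commands incrementally, as the comment above proposes.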
[jira] [Created] (HBASE-23571) Handle CompactType.MOB correctly
Vladimir Rodionov created HBASE-23571: - Summary: Handle CompactType.MOB correctly Key: HBASE-23571 URL: https://issues.apache.org/jira/browse/HBASE-23571 Project: HBase Issue Type: Sub-task Reporter: Vladimir Rodionov Assignee: Vladimir Rodionov This is a client-facing feature; it should be supported or at least properly handled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/921#discussion_r357425128 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ## @@ -0,0 +1,289 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.backup.HFileArchiver;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The Class ExpiredMobFileCleanerChore for running cleaner regularly to remove the expired
+ * and obsolete (files which have no active references to) mob files.
+ */
+@InterfaceAudience.Private
+public class MobFileCleanerChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MobFileCleanerChore.class);
+  private final HMaster master;
+  private ExpiredMobFileCleaner cleaner;
+  private long minAgeToArchive;
+
+  public MobFileCleanerChore(HMaster master) {
+    super(master.getServerName() + "-ExpiredMobFileCleanerChore", master,
+      master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+        MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+      master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+        MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+      TimeUnit.SECONDS);
+    this.master = master;
+    cleaner = new ExpiredMobFileCleaner();
+    cleaner.setConf(master.getConfiguration());
+    checkObsoleteConfigurations();
+  }
+
+  private void checkObsoleteConfigurations() {
+    Configuration conf = master.getConfiguration();
+    if (conf.get("hbase.master.mob.ttl.cleaner.period") != null) {
+      LOG.warn("'hbase.master.mob.ttl.cleaner.period' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.mergeable.threshold") != null) {
+      LOG.warn("'hbase.mob.compaction.mergeable.threshold' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.delfile.max.count") != null) {
+      LOG.warn("'hbase.mob.delfile.max.count' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.threads.max") != null) {
+      LOG.warn("'hbase.mob.compaction.threads.max' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.batch.size") != null) {
+      LOG.warn("'hbase.mob.compaction.batch.size' is obsolete and not used anymore.");
+    }
+  }
+
+  @VisibleForTesting
+  public MobFileCleanerChore() {
+    this.master = null;
+  }
+
+  @Override
+  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="REC_CATCH_EXCEPTION",
+    justification="Intentional")
+  protected void chore() {
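The five back-to-back `conf.get(...)` null checks in `checkObsoleteConfigurations()` above are one obvious candidate for tightening. A table-driven version is sketched below; this is an illustration only, not part of the patch — the class and method names are invented, and a plain `Map<String, String>` stands in for Hadoop's `Configuration`.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ObsoleteConfigCheck {
    // Keys the chore no longer honors; mirrors the list in the patch above.
    private static final List<String> OBSOLETE_KEYS = List.of(
        "hbase.master.mob.ttl.cleaner.period",
        "hbase.mob.compaction.mergeable.threshold",
        "hbase.mob.delfile.max.count",
        "hbase.mob.compaction.threads.max",
        "hbase.mob.compaction.batch.size");

    // Returns one warning line per obsolete key that is still set.
    // A Map stands in for Hadoop's Configuration in this sketch.
    public static List<String> warnings(Map<String, String> conf) {
        List<String> out = new ArrayList<>();
        for (String key : OBSOLETE_KEYS) {
            if (conf.get(key) != null) {
                out.add("'" + key + "' is obsolete and not used anymore.");
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // One obsolete key set: prints a single warning line.
        Map<String, String> conf = Map.of("hbase.mob.delfile.max.count", "3");
        warnings(conf).forEach(System.out::println);
    }
}
```

Adding a newly retired key then becomes a one-line change to the list rather than another copy-pasted `if` block.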
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/921#discussion_r357310437 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ## @@ -0,0 +1,289 @@ (same MobFileCleanerChore.java hunk as quoted in the previous message; duplicate snippet omitted)
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/921#discussion_r357311102 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ## @@ -0,0 +1,289 @@ (same MobFileCleanerChore.java hunk as quoted above; duplicate snippet omitted)
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/921#discussion_r357424569 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ## @@ -0,0 +1,289 @@ (same MobFileCleanerChore.java hunk as quoted above; duplicate snippet omitted)
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357298355

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

[quoted hunk @@ -0,0 +1,289 @@ identical to the first comment's quote; omitted]

On the line:
+      LOG.warn("'hbase.master.mob.ttl.cleaner.period' is obsolete and not used anymore.");

Review comment: include a note that the checking of TTL expiration is done during `hbase.master.mob.cleaner.period` or use `Configuration#addDeprecation`

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
With regards, Apache Git Services
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357277801

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

[quoted hunk @@ -0,0 +1,289 @@ identical to the first comment's quote; omitted]

On the line:
+ * The Class ExpiredMobFileCleanerChore for running cleaner regularly to remove the expired

Review comment: nit: javadoc is out of date
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357418708

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

[quoted hunk @@ -0,0 +1,289 @@ identical to the first comment's quote; omitted]
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357419622

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

[quoted hunk @@ -0,0 +1,289 @@ identical to the first comment's quote; omitted]
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357416169

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

[quoted hunk @@ -0,0 +1,289 @@ identical to the first comment's quote; omitted]
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357273228

## File path: hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestMobCompaction.java

## @@ -0,0 +1,413 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.master.MobFileCleanerChore;
+import org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner;
+import org.apache.hadoop.hbase.mob.FaultyMobStoreCompactor;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.MobStoreEngine;
+import org.apache.hadoop.hbase.mob.MobUtils;
+
+import org.apache.hadoop.hbase.testclassification.IntegrationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.hbase.thirdparty.com.google.common.base.MoreObjects;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An integration test to detect regressions in HBASE-22749. Test creates
+ * MOB-enabled table, and runs in parallel, the following tasks: loads data,
+ * runs MOB compactions, runs MOB cleaning chore. The failure injections into MOB
+ * compaction cycle is implemented via specific sub-class of DefaultMobStoreCompactor -
+ * FaultyMobStoreCompactor. The probability of failure is controlled by command-line
+ * argument 'failprob'.
+ * @see <a href="https://issues.apache.org/jira/browse/HBASE-22749">HBASE-22749</a>
+ */

Review comment: include a brief example of running this test against an existing cluster.
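The example the reviewer asks for might look roughly like the sketch below. The class name comes from the patch; the `hbase` launcher and the `failprob` argument syntax are assumptions based on the test's javadoc, so the exact flags should be checked against the test's `CommandLine` parsing. Since the test needs a live cluster, the sketch only prints the command it would run:

```shell
# Hypothetical invocation against an existing cluster (HBASE_CONF_DIR pointing
# at that cluster's config). Dry run: echo the command instead of executing it.
CMD="hbase org.apache.hadoop.hbase.IntegrationTestMobCompaction -failprob 0.1"
echo "$CMD"
```

Dropping the leading `echo` (and fixing the flag syntax if it differs) would run the test for real.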
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357424350

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

[quoted hunk @@ -0,0 +1,289 @@ identical to the first comment's quote; omitted]
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357422231
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357422935
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357277719
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
+package org.apache.hadoop.hbase.master;
Review comment: nit: if you put this under the package `org.apache.hadoop.hbase.mob` it'll make it easier to change related logging levels at the same time.
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357418621
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357424992
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357420223
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357423849
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java ##
@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357426706

File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.backup.HFileArchiver;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The MobFileCleanerChore runs the cleaner regularly to remove expired
+ * and obsolete (no longer actively referenced) MOB files.
+ */
+@InterfaceAudience.Private
+public class MobFileCleanerChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MobFileCleanerChore.class);
+  private final HMaster master;
+  private ExpiredMobFileCleaner cleaner;
+  private long minAgeToArchive;
+
+  public MobFileCleanerChore(HMaster master) {
+    super(master.getServerName() + "-ExpiredMobFileCleanerChore", master,
+        master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+            MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+        master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+            MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+        TimeUnit.SECONDS);
+    this.master = master;
+    cleaner = new ExpiredMobFileCleaner();
+    cleaner.setConf(master.getConfiguration());
+    checkObsoleteConfigurations();
+  }
+
+  private void checkObsoleteConfigurations() {
+    Configuration conf = master.getConfiguration();
+    if (conf.get("hbase.master.mob.ttl.cleaner.period") != null) {
+      LOG.warn("'hbase.master.mob.ttl.cleaner.period' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.mergeable.threshold") != null) {
+      LOG.warn("'hbase.mob.compaction.mergeable.threshold' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.delfile.max.count") != null) {
+      LOG.warn("'hbase.mob.delfile.max.count' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.threads.max") != null) {
+      LOG.warn("'hbase.mob.compaction.threads.max' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.batch.size") != null) {
+      LOG.warn("'hbase.mob.compaction.batch.size' is obsolete and not used anymore.");
+    }
+  }
+
+  @VisibleForTesting
+  public MobFileCleanerChore() {
+    this.master = null;
+  }
+
+  @Override
+  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "REC_CATCH_EXCEPTION",
+      justification = "Intentional")
+  protected void chore() {
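The repeated null-checks in checkObsoleteConfigurations() above follow a simple pattern: probe each obsolete key and emit a warning if it is still set. A minimal, hypothetical sketch of that pattern (illustrative only, using a plain Map in place of Hadoop's Configuration; the class and method names here are not part of HBase):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative only: mirrors the checkObsoleteConfigurations() pattern above.
public class ObsoleteConfigCheck {
  // The keys the chore warns about, as listed in the patch.
  private static final String[] OBSOLETE_KEYS = {
    "hbase.master.mob.ttl.cleaner.period",
    "hbase.mob.compaction.mergeable.threshold",
    "hbase.mob.delfile.max.count",
    "hbase.mob.compaction.threads.max",
    "hbase.mob.compaction.batch.size"
  };

  /** Returns one warning message per obsolete key that is still set. */
  public static List<String> warnings(Map<String, String> conf) {
    List<String> out = new ArrayList<>();
    for (String key : OBSOLETE_KEYS) {
      if (conf.get(key) != null) {
        out.add("'" + key + "' is obsolete and not used anymore.");
      }
    }
    return out;
  }
}
```

Collecting the messages into a list (rather than logging inline) also makes the check directly unit-testable.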
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357343006

File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357419132

File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357272912

File path: hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestMobCompaction.java

@@ -0,0 +1,413 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.master.MobFileCleanerChore;
+import org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner;
+import org.apache.hadoop.hbase.mob.FaultyMobStoreCompactor;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.MobStoreEngine;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.testclassification.IntegrationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.hbase.thirdparty.com.google.common.base.MoreObjects;
+import org.apache.hbase.thirdparty.org.apache.commons.cli.CommandLine;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An integration test to detect regressions in HBASE-22749. The test creates a
+ * MOB-enabled table and runs the following tasks in parallel: loading data,
+ * running MOB compactions, and running the MOB cleaning chore. Failure injection
+ * into the MOB compaction cycle is implemented via FaultyMobStoreCompactor, a
+ * subclass of DefaultMobStoreCompactor. The probability of failure is controlled
+ * by the command-line argument 'failprob'.
+ * @see <a href="https://issues.apache.org/jira/browse/HBASE-22749">HBASE-22749</a>
+ */
+@SuppressWarnings("deprecation")
+@Category(IntegrationTests.class)
+public class IntegrationTestMobCompaction extends IntegrationTestBase {
+  protected static final Logger LOG = LoggerFactory.getLogger(IntegrationTestMobCompaction.class);
+
+  protected static final String REGIONSERVER_COUNT_KEY = "servers";
+  protected static final String ROWS_COUNT_KEY = "rows";
+  protected static final String FAILURE_PROB_KEY = "failprob";
+
+  protected static final int DEFAULT_REGIONSERVER_COUNT = 3;
+  protected static final int DEFAULT_ROWS_COUNT = 500;
+  protected static final double DEFAULT_FAILURE_PROB = 0.1;
+
+  protected static int regionServerCount = DEFAULT_REGIONSERVER_COUNT;
+  protected static long rowsToLoad = DEFAULT_ROWS_COUNT;
+  protected static double failureProb = DEFAULT_FAILURE_PROB;
+
+  protected static String famStr = "f1";
+  protected static byte[] fam = Bytes.toBytes(famStr);
+  protected static byte[] qualifier = Bytes.toBytes("q1");
+  protected static long mobLen = 10;
+  protected static byte[] mobVal = Bytes
+      .toBytes("01234567890123456789012345678901234567890123456789012345678901234567890123456789");
+
+  private static Configuration conf;
+  private static HTableDescriptor hdt;
+  private static HColumnDescriptor hcd;
+  private static Admin admin;
+  private static Table table = null;
+  private static MobFileCleanerChore chore;
+
+  private static volatile boolean run = true;
+
+  @Override
+  @Before
+  public void setUp() throws Exception {
+    util = getTestingUtil(getConf());
+    conf = util.getConfiguration();
+    // Initialize with test-specific configuration values
+    initConf(conf);
+    regionServerCount = conf.getInt(REGIONSERVER_COUNT_KEY, DEFAULT_REGIONSERVER_COUNT);
+    LOG.info("Initializing cluster with {} region servers.", regionServerCount);
+    util.initializeCluster(regionServerCount);
+    admin
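The failure injection the javadoc above describes (a FaultyMobStoreCompactor failing with probability 'failprob') boils down to a probabilistic throw inside the compaction path. A minimal, hypothetical sketch of the idea; this is not the actual HBase class, and the names here are invented for illustration:

```java
import java.io.IOException;
import java.util.Random;

// Illustrative only: the general shape of probabilistic failure injection
// as used by a compactor test double.
public class FaultInjector {
  private final double failureProb; // e.g. 0.1 => fail ~10% of calls
  private final Random rnd;

  public FaultInjector(double failureProb, long seed) {
    this.failureProb = failureProb;
    this.rnd = new Random(seed); // seeded, so test runs are reproducible
  }

  /** Throws with probability failureProb; otherwise returns normally. */
  public void maybeFail() throws IOException {
    // nextDouble() is uniform on [0.0, 1.0), so prob 0.0 never fails
    // and prob 1.0 always fails.
    if (rnd.nextDouble() < failureProb) {
      throw new IOException("Injected compaction failure");
    }
  }
}
```

A test harness would call maybeFail() at a chosen point in each compaction cycle and verify that the system recovers (no data loss, no orphaned MOB files) despite the injected failures.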
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357424481

File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357271736

File path: hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestMobCompaction.java
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

URL: https://github.com/apache/hbase/pull/921#discussion_r357310388

File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357424392

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

@@ -0,0 +1,289 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357316024

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357275799

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java

@@ -1749,10 +1747,13 @@ public CompactRegionResponse compactRegion(final RpcController controller,
     master.checkInitialized();
     byte[] regionName = request.getRegion().getValue().toByteArray();
     TableName tableName = RegionInfo.getTable(regionName);
+    // TODO: support CompactType.MOB
     // if the region is a mob region, do the mob file compaction.
     if (MobUtils.isMobRegionName(tableName, regionName)) {
       checkHFileFormatVersionForMob();
-      return compactMob(request, tableName);
+      //TODO: support CompactType.MOB

Review comment: we should reference a specific JIRA if we're putting a TODO in here. Given this feature, what would CompactType.MOB mean?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357306925

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

@@ -0,0 +1,289 @@
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357273869

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java

@@ -1309,14 +1308,18 @@ public void updateConfigurationForQuotasObserver(Configuration conf) {
   }

   private void initMobCleaner() {
-    this.expiredMobFileCleanerChore = new ExpiredMobFileCleanerChore(this);
-    getChoreService().scheduleChore(expiredMobFileCleanerChore);
+    this.mobFileCleanerChore = new MobFileCleanerChore(this);
+    getChoreService().scheduleChore(mobFileCleanerChore);
     int mobCompactionPeriod = conf.getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
-      MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD);
-    this.mobCompactChore = new MobCompactionChore(this, mobCompactionPeriod);
-    getChoreService().scheduleChore(mobCompactChore);
-    this.mobCompactThread = new MasterMobCompactionThread(this);
+      MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD);
+
+    if (mobCompactionPeriod > 0) {

Review comment: leave this out. the chore system already takes care of logging a nice message if the chore period is <= 0
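busbey's point is that wrapping the scheduling call in `if (mobCompactionPeriod > 0)` duplicates a check the chore framework already performs. A generic standalone sketch of that pattern, where the scheduler itself logs and skips non-positive periods so callers need no extra guard (illustrative code, not the actual HBase ChoreService API):

```java
public class PeriodGuardSketch {
    /** Returns true if the task was scheduled; logs and skips when period <= 0. */
    public static boolean schedule(String name, int periodSeconds) {
        if (periodSeconds <= 0) {
            // The framework itself reports disabled chores, so call sites stay simple.
            System.out.println("Chore " + name + " is disabled (period=" + periodSeconds + ")");
            return false;
        }
        System.out.println("Scheduled " + name + " every " + periodSeconds + "s");
        return true;
    }

    public static void main(String[] args) {
        schedule("mobCompactionChore", 0);   // skipped, with a log message
        schedule("mobCompactionChore", 300); // scheduled
    }
}
```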
[GitHub] [hbase] busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r357316286

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java

@@ -0,0 +1,289 @@
[jira] [Created] (HBASE-23570) Point users to the async-profiler home page if diagrams are coming up blank
Michael Stack created HBASE-23570: - Summary: Point users to the async-profiler home page if diagrams are coming up blank Key: HBASE-23570 URL: https://issues.apache.org/jira/browse/HBASE-23570 Project: HBase Issue Type: Bug Components: profiler Reporter: Michael Stack Assignee: Michael Stack Add minor note on servlet and to doc pointing folks to async-profiler home page if diagrams are coming up blank -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] busbey merged pull request #933: HBASE-23566: Fix package/packet terminology problem in chaos monkeys
busbey merged pull request #933: HBASE-23566: Fix package/packet terminology problem in chaos monkeys
URL: https://github.com/apache/hbase/pull/933
[GitHub] [hbase] busbey commented on a change in pull request #936: HBASE-17115 Define UI admins via an ACL
busbey commented on a change in pull request #936: HBASE-17115 Define UI admins via an ACL
URL: https://github.com/apache/hbase/pull/936#discussion_r357387438

## File path: hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java

@@ -324,6 +324,14 @@ public void doGet(HttpServletRequest request, HttpServletResponse response)
         response)) {
       return;
     }
+    // Disallow modification of the LogLevel if explicitly set to readonly

Review comment: Great catch!
[GitHub] [hbase] busbey commented on a change in pull request #936: HBASE-17115 Define UI admins via an ACL
busbey commented on a change in pull request #936: HBASE-17115 Define UI admins via an ACL
URL: https://github.com/apache/hbase/pull/936#discussion_r357390928

## File path: hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java

@@ -770,30 +777,28 @@ public void addJerseyResourcePackage(final String packageName,
   }

   /**
-   * Add a servlet in the server.
+   * Adds a servlet in the server that any user can access.
    * @param name The name of the servlet (can be passed as null)
    * @param pathSpec The path spec for the servlet
    * @param clazz The servlet class
    */
-  public void addServlet(String name, String pathSpec,
+  public void addUnprivilegedServlet(String name, String pathSpec,

Review comment: Can we add some brief note here and `addPrivilegedServlet` about how we should be picking whether a given servlet needs to be restricted?
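The rename under review distinguishes servlets any user may reach (`addUnprivilegedServlet`) from ones restricted to admins (`addPrivilegedServlet`). A minimal sketch of the access decision that distinction implies (hypothetical helper, not the actual `HttpServer` implementation):

```java
import java.util.Set;

public class ServletAccessSketch {
    private final Set<String> adminUsers;

    public ServletAccessSketch(Set<String> adminUsers) {
        this.adminUsers = adminUsers;
    }

    /** Unprivileged servlets are open to everyone; privileged ones require an admin. */
    public boolean allow(String user, boolean privileged) {
        return !privileged || adminUsers.contains(user);
    }

    public static void main(String[] args) {
        ServletAccessSketch acl = new ServletAccessSketch(Set.of("hbase"));
        System.out.println(acl.allow("guest", false)); // unprivileged endpoint: allowed
        System.out.println(acl.allow("guest", true));  // privileged endpoint: denied
        System.out.println(acl.allow("hbase", true));  // admin user: allowed
    }
}
```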
[GitHub] [hbase] busbey commented on a change in pull request #936: HBASE-17115 Define UI admins via an ACL
busbey commented on a change in pull request #936: HBASE-17115 Define UI admins via an ACL
URL: https://github.com/apache/hbase/pull/936#discussion_r357390022

## File path: hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java

@@ -131,6 +131,10 @@
       "signature.secret.file";
   public static final String HTTP_AUTHENTICATION_SIGNATURE_SECRET_FILE_KEY =
       HTTP_AUTHENTICATION_PREFIX + HTTP_AUTHENTICATION_SIGNATURE_SECRET_FILE_SUFFIX;
+  public static final String HTTP_SPNEGO_AUTHENTICATION_ADMIN_USERS_KEY =
+      HTTP_SPNEGO_AUTHENTICATION_PREFIX + "admin.users";

Review comment: Should we have a note in the docs about configuring these securely?
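The new key is built by appending an `admin.users` suffix to the SPNEGO prefix, and its value is a list of users granted admin access. A sketch of how such an ACL key is typically composed and its value parsed (the prefix string here is assumed for illustration; the real constant is defined in `HttpServer`):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class AdminAclSketch {
    // Hypothetical prefix for illustration; the actual value lives in HttpServer.
    static final String SPNEGO_PREFIX = "hbase.security.authentication.spnego.";
    public static final String ADMIN_USERS_KEY = SPNEGO_PREFIX + "admin.users";

    /** Parse a comma-separated admin-users value into trimmed, non-empty names. */
    public static List<String> parseAdmins(String value) {
        return Arrays.stream(value.split(","))
            .map(String::trim)
            .filter(s -> !s.isEmpty())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(ADMIN_USERS_KEY);
        System.out.println(parseAdmins("alice, bob,,carol"));
    }
}
```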
[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client
[ https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995136#comment-16995136 ] Hudson commented on HBASE-18095: Results for branch HBASE-18095/client-locate-meta-no-zookeeper [build #7 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/7/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/7//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/7//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/7//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Provide an option for clients to find the server hosting META that does not > involve the ZooKeeper client > > > Key: HBASE-18095 > URL: https://issues.apache.org/jira/browse/HBASE-18095 > Project: HBase > Issue Type: New Feature > Components: Client >Reporter: Andrew Kyle Purtell >Assignee: Bharath Vissapragada >Priority: Major > Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch > > > Clients are required to connect to ZooKeeper to find the location of the > regionserver hosting the meta table region. Site configuration provides the > client a list of ZK quorum peers and the client uses an embedded ZK client to > query meta location. 
Timeouts and retry behavior of this embedded ZK client
> are managed orthogonally to HBase layer settings and in some cases the ZK
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage
> or network partition.
> We should consider new configuration settings that provide a list of
> well-known master and backup master locations, and with this information the
> client can contact any of the master processes directly. Any master in either
> active or passive state will track meta location and respond to requests for
> it with its cached last known location. If this location is stale, the client
> can ask again with a flag set that requests the master refresh its location
> cache and return the up-to-date location. Every client interaction with the
> cluster thus uses only HBase RPC as transport, with appropriate settings
> applied to the connection. The configuration toggle that enables this
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and
> contact the ZK service directly at the beginning of the connection lifecycle.
> This has several benefits. ZK service need not be exposed to clients, and
> their potential abuse, yet no benefit ZK provides the HBase server cluster is
> compromised. Normalizing HBase client and ZK client timeout settings and
> retry behavior - in some cases, impossible, i.e. for fail-fast - is no longer
> necessary.
> And, from [~ghelmling]: There is an additional complication here for
> token-based authentication. When a delegation token is used for SASL
> authentication, the client uses the cluster ID obtained from Zookeeper to
> select the token identifier to use. So there would also need to be some
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well.
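The quoted proposal has clients ask any master (active or standby) for its cached meta location, and retry with a refresh flag when the cached answer proves stale. A toy sketch of that lookup shape, with every name and interface hypothetical (the real design is what HBASE-18095 implements):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.Predicate;

public class MetaLookupSketch {
    /**
     * Ask masters in order for the meta location. On a stale cached answer,
     * retry the same master once with refresh=true before moving on.
     */
    public static Optional<String> locateMeta(List<String> masters,
            BiFunction<String, Boolean, String> ask,
            Predicate<String> isStale) {
        for (String master : masters) {
            String loc = ask.apply(master, false);   // cached answer first
            if (loc != null && isStale.test(loc)) {
                loc = ask.apply(master, true);       // ask master to refresh its cache
            }
            if (loc != null && !isStale.test(loc)) {
                return Optional.of(loc);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Fake master: cached answer "rs-old" is stale; a refresh yields "rs-new".
        Optional<String> loc = locateMeta(List.of("master1"),
            (master, refresh) -> refresh ? "rs-new" : "rs-old",
            "rs-old"::equals);
        System.out.println(loc.orElse("none"));
    }
}
```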
[jira] [Commented] (HBASE-23380) General Cleanup of FSUtil
[GitHub] [hbase] Apache-HBase commented on issue #912: HBASE-23380: General Cleanup of FSUtil
Apache-HBase commented on issue #912: HBASE-23380: General Cleanup of FSUtil URL: https://github.com/apache/hbase/pull/912#issuecomment-565196293 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 33s | master passed | | +1 :green_heart: | compile | 0m 56s | master passed | | +1 :green_heart: | checkstyle | 1m 21s | master passed | | +1 :green_heart: | shadedjars | 4m 37s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | master passed | | +0 :ok: | spotbugs | 4m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 5s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 3s | the patch passed | | +1 :green_heart: | compile | 0m 55s | the patch passed | | +1 :green_heart: | javac | 0m 55s | the patch passed | | +1 :green_heart: | checkstyle | 1m 20s | hbase-server: The patch generated 0 new + 66 unchanged - 8 fixed = 66 total (was 74) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedjars | 4m 37s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 15m 46s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. 
| | +1 :green_heart: | javadoc | 0m 37s | the patch passed | | +1 :green_heart: | findbugs | 4m 20s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 169m 44s | hbase-server in the patch failed. | | +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. | | | | 227m 14s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hbase.quotas.TestClusterScopeQuotaThrottle | | | hadoop.hbase.quotas.TestQuotaAdmin | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/912 | | JIRA Issue | HBASE-23380 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 8deb41272a95 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-912/out/precommit/personality/provided.sh | | git revision | master / 85a081925b | | Default Java | 1.8.0_181 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/4/artifact/out/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/4/testReport/ | | Max. process+thread count | 5107 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-912/4/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-17115) HMaster/HRegion Info Server does not honour admin.acl
[ https://issues.apache.org/jira/browse/HBASE-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995126#comment-16995126 ] Josh Elser commented on HBASE-17115: Patch up at https://github.com/apache/hbase/pull/936

> HMaster/HRegion Info Server does not honour admin.acl
> -
>
> Key: HBASE-17115
> URL: https://issues.apache.org/jira/browse/HBASE-17115
> Project: HBase
> Issue Type: Bug
> Reporter: Mohammad Arshad
> Assignee: Josh Elser
> Priority: Major
>
> Currently there is no way to enable protected URLs like /jmx, /conf only for admins. This is applicable for both Master and RegionServer.
[GitHub] [hbase] joshelser commented on issue #936: HBASE-17115 Define UI admins via an ACL
joshelser commented on issue #936: HBASE-17115 Define UI admins via an ACL URL: https://github.com/apache/hbase/pull/936#issuecomment-565191500

Testing I've done:

* New unit tests
* Explicit admin user defined in configuration (`hbase.security.authentication.spnego.admin.users`)
* `curl` with both the admin and a non-admin
* The above along with `hbase.master.ui.readonly=true` as well.

If admins are set, that will limit who can interact with sensitive endpoints. Setting readonly=true will further restrict the system and disallow anyone (including admins) from modifying hbase via the UI. I think this maintains all of the previous semantics folks would expect, while letting the security-conscious lock things down.
[GitHub] [hbase] joshelser opened a new pull request #936: HBASE-17115 Define UI admins via an ACL
joshelser opened a new pull request #936: HBASE-17115 Define UI admins via an ACL URL: https://github.com/apache/hbase/pull/936

The Hadoop AccessControlList allows us to specify admins of the web UI via a list of users and/or groups. Admins of the web UI can mutate the system, potentially seeing sensitive data or modifying the system.

hbase.security.authentication.spnego.admin.users is a comma-separated list of users who are admins. hbase.security.authentication.spnego.admin.groups is a comma-separated list of groups whose members are admins. Either of these configuration properties may also contain an asterisk (*) which denotes "anything" (any user or group). To maintain previous semantics, the UI defaults to accepting any user as an admin.

Previously, when a user was denied from some endpoint designated for admins, they received an HTTP/401. In this case, it is more correct to return HTTP/403, as they were correctly authenticated but disallowed from fetching the given resource.

The test is based off of work by Nihal Jain in HBASE-20472.

Co-authored-by: Nihal Jain
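As a sketch, the two properties from the description above might be set in hbase-site.xml like this. Only the property names come from the pull request; the user and group values are invented placeholders.

```xml
<!-- Property names are from the PR description; values are example placeholders. -->
<property>
  <name>hbase.security.authentication.spnego.admin.users</name>
  <!-- Comma-separated list of admin users; "*" would admit any user. -->
  <value>alice,bob</value>
</property>
<property>
  <name>hbase.security.authentication.spnego.admin.groups</name>
  <!-- Comma-separated list of groups whose members are admins. -->
  <value>hbase-admins</value>
</property>
```

With an explicit list like this, an authenticated non-admin requesting an admin-only endpoint such as /jmx or /conf should receive HTTP/403 rather than HTTP/401.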
[GitHub] [hbase] belugabehr commented on issue #913: HBASE-23381: Improve Logging in HBase Commons Package
belugabehr commented on issue #913: HBASE-23381: Improve Logging in HBase Commons Package URL: https://github.com/apache/hbase/pull/913#issuecomment-565179564 Sorry. I got all mixed up in my local repo. I ended up rebasing to squash my three commits and doing a force push. Sorry for the inconvenience.
[GitHub] [hbase] Apache-HBase commented on issue #913: HBASE-23381: Improve Logging in HBase Commons Package
Apache-HBase commented on issue #913: HBASE-23381: Improve Logging in HBase Commons Package URL: https://github.com/apache/hbase/pull/913#issuecomment-565172296 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 0s | Docker mode activated. | | -1 :x: | patch | 0m 6s | https://github.com/apache/hbase/pull/913 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/913 | | JIRA Issue | HBASE-23381 | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/4/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HBASE-23381) Improve Logging in HBase Commons Package
[ https://issues.apache.org/jira/browse/HBASE-23381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16995051#comment-16995051 ] HBase QA commented on HBASE-23381: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} https://github.com/apache/hbase/pull/913 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | GITHUB PR | https://github.com/apache/hbase/pull/913 | | JIRA Issue | HBASE-23381 | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-913/4/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated.

> Improve Logging in HBase Commons Package
> -
>
> Key: HBASE-23381
> URL: https://issues.apache.org/jira/browse/HBASE-23381
> Project: HBase
> Issue Type: Improvement
> Reporter: David Mollitor
> Assignee: David Mollitor
> Priority: Minor
>
> Based on my observations and suggestions here:
> https://lists.apache.org/thread.html/71f546c89ecdaa2f25a26bd238e88680ddaad1d3b5c4031338d3533a%40%3Cdev.hbase.apache.org%3E
[GitHub] [hbase] joshelser commented on issue #884: HBASE-23347 Allowable custom authentication methods for RPCs
joshelser commented on issue #884: HBASE-23347 Allowable custom authentication methods for RPCs URL: https://github.com/apache/hbase/pull/884#issuecomment-565172308

> I have a question concerning compatibility though it is supposed to be non-compat in 3.0 and it can be set aside as an upcoming if we want: have you tested 1.x or 2.x client talks to 3.x server?

Nothing on the wire has actually changed. We're still using the auth byte and parsing things exactly how we did before. But, no, I've not explicitly tested that.

> IIRC the main use case was the ability to enable security on an existing cluster without downtime HBASE-14700 Support a "permissive" mode for secure clusters to allow "simple" auth clients

Thanks! I think this is less of an issue now. I had punted on this originally, but Wellington's reviews led me back here and fixed it up (at least, according to the unit tests).
[GitHub] [hbase] joshelser commented on issue #884: HBASE-23347 Allowable custom authentication methods for RPCs
joshelser commented on issue #884: HBASE-23347 Allowable custom authentication methods for RPCs URL: https://github.com/apache/hbase/pull/884#issuecomment-565166353

> I haven't seen much difference comparing to the old token based authentication, so I'm a bit nervous that we doing a lot of work and then, no one will actually use it...

Yeah, it's specifically on the roadmap at Cloudera. I think the unit test I provided gives the impression that we aren't doing much different, since we're not really doing anything fancy server-side.

> Can we add a more reasonable example in the hbase-example module, to say that, we do have different authentication methods, comparing to the old provided methods?

I've been chatting with Busbey and Wellington about what would be a non-contrived and semi-representative example. It's hard to come up with a single implementation because it:

1. relies on lots of infrastructure that is dependent on the company/organization (e.g. ActiveDirectory or PKI)
2. has business/data dependent security policies that have to be applied (e.g. encryption strength)

That said, I'm happy to try to put an example together to demonstrate this. The best thing I've been able to come up with is making a user database from a file in HDFS (either a flat file or a JKS), and wiring up HBase to check against that. How does that strike you? Obviously not ready to be deployed in some organization, but sufficiently decoupled that we can keep maintaining it and (hopefully) representative of what you can do.
[GitHub] [hbase-operator-tools] z-york commented on a change in pull request #46: HBASE-23562 [operator tools] Add a RegionsMerge tool that allows for …
z-york commented on a change in pull request #46: HBASE-23562 [operator tools] Add a RegionsMerge tool that allows for … URL: https://github.com/apache/hbase-operator-tools/pull/46#discussion_r357335105

## File path: hbase-hbck2/src/main/java/org/apache/hbase/RegionsMerger.java

## @@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.util.Pair;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Future;
+
+public class RegionsMerger extends Configured implements org.apache.hadoop.util.Tool {

Review comment: Yeah sorry for calling out stuff you already had called out in your description... I went straight to the code :)
[jira] [Created] (HBASE-23569) Validate that the log cleaner actually cleans oldWALs as expected
Andrew Kyle Purtell created HBASE-23569:
---

             Summary: Validate that the log cleaner actually cleans oldWALs as expected
                 Key: HBASE-23569
                 URL: https://issues.apache.org/jira/browse/HBASE-23569
             Project: HBase
          Issue Type: Test
          Components: integration tests, master, test
            Reporter: Andrew Kyle Purtell
             Fix For: 3.0.0, 2.3.0, 1.6.0

The fix for HBASE-23287 (LogCleaner is not added to choreService) is in, but we are lacking test coverage that validates that the log cleaner actually cleans oldWALs as expected. Add the test.
[GitHub] [hbase-connectors] asf-ci commented on issue #53: HBASE-23565 Execute tests in hbase-connectors precommit
asf-ci commented on issue #53: HBASE-23565 Execute tests in hbase-connectors precommit URL: https://github.com/apache/hbase-connectors/pull/53#issuecomment-565139343 Refer to this link for build results (access rights to CI server needed): https://builds.apache.org/job/PreCommit-HBASE-CONNECTORS-Build/97/ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services