[jira] [Commented] (HBASE-19633) Clean up the replication queues in the postPeerModification stage when removing a peer
[ https://issues.apache.org/jira/browse/HBASE-19633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307419#comment-16307419 ] Hadoop QA commented on HBASE-19633: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HBASE-19397 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 57s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 48s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} HBASE-19397 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} hbase-client: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch hbase-replication passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} hbase-server: The patch generated 0 new + 9 unchanged - 1 fixed = 9 total (was 10) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 10 unchanged - 2 fixed = 10 total (was 12) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 45s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 5s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hbase-replication in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 41s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 58s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}1
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Attachment: HBASE-19358-branch-1-v3.patch
> Improve the stability of splitting log when do fail over
> --
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
> Issue Type: Improvement
> Components: MTTR
> Affects Versions: 0.98.24
> Reporter: Jingyun Tian
> Assignee: Jingyun Tian
> Attachments: HBASE-18619-branch-2.patch, HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, HBASE-19358-branch-1.patch, HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, HBASE-19358.patch
>
> The way we split the log now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12902997/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits during log splitting, which means it creates one WriterAndPath per region and retains it until the end. If the cluster is small and the number of regions per RS is large, it creates too many HDFS streams at the same time, which is prone to failure since each datanode needs to handle too many streams.
> Thus I came up with a new way to split the log:
> !https://issues.apache.org/jira/secure/attachment/12902998/split-logic-new.jpg!
> We try to cache all the recovered edits, but if the total exceeds the MaxHeapUsage, we pick the largest EntryBuffer and write it to a file (closing the writer when done). Then, after we have read all entries into memory, we start a writeAndCloseThreadPool, which starts a fixed number of threads to write all the buffers to files. Thus it will not create more HDFS streams than the *_hbase.regionserver.hlog.splitlog.writer.threads_* setting.
> The biggest benefit is that we can control the number of streams we create during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was *_hbase.regionserver.wal.max.splitters * the number of regions the hlog contains_*.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
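The buffering scheme described above can be pictured in plain Java. This is a toy model, not the actual patch: the class and field names below are invented, the byte budget is arbitrary, and the real code writes recovered-edit files to HDFS rather than recording region names.

```java
import java.util.*;

// Toy model of the new log-splitting scheme: buffer edits per region, and
// when total buffered bytes exceed the budget, flush the largest buffer.
class SplitBufferSketch {
    static final long MAX_HEAP_USAGE = 100; // toy byte budget

    final Map<String, StringBuilder> buffers = new HashMap<>();
    final List<String> flushedRegions = new ArrayList<>(); // stands in for file writes
    long totalSize = 0;

    void append(String region, String edit) {
        buffers.computeIfAbsent(region, r -> new StringBuilder()).append(edit);
        totalSize += edit.length();
        while (totalSize > MAX_HEAP_USAGE) {
            flushLargest();
        }
    }

    // Pick the largest buffer (the EntryBuffer equivalent) and "write" it
    // out, releasing its memory so we stay under the budget.
    void flushLargest() {
        String largest = null;
        int largestLen = -1;
        for (Map.Entry<String, StringBuilder> e : buffers.entrySet()) {
            if (e.getValue().length() > largestLen) {
                largest = e.getKey();
                largestLen = e.getValue().length();
            }
        }
        if (largest == null) {
            return;
        }
        flushedRegions.add(largest);
        totalSize -= largestLen;
        buffers.remove(largest);
    }
}
```

Only one buffer is open for writing at a time during the read phase; the final drain of remaining buffers is where the bounded writer pool from the description would come in.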
[jira] [Updated] (HBASE-19633) Clean up the replication queues in the postPeerModification stage when removing a peer
[ https://issues.apache.org/jira/browse/HBASE-19633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19633: -- Attachment: HBASE-19633-HBASE-19397-v3.patch Fix failed UT.
> Clean up the replication queues in the postPeerModification stage when removing a peer
> --
>
> Key: HBASE-19633
> URL: https://issues.apache.org/jira/browse/HBASE-19633
> Project: HBase
> Issue Type: Sub-task
> Components: proc-v2, Replication
> Reporter: Duo Zhang
> Assignee: Duo Zhang
> Attachments: HBASE-19633-HBASE-19397-v1.patch, HBASE-19633-HBASE-19397-v2.patch, HBASE-19633-HBASE-19397-v3.patch, HBASE-19633-HBASE-19397.patch
>
> In the previous implementation, we could not always cleanly remove all the replication queues when removing a peer, since the removal work is done by the RSes, and if an RS crashes then some queues may be left there forever. That is why we need to check whether there are already queues for a newly created peer: we may reuse the peer id, which causes problems.
> With the new procedure-based replication peer modification, I think we can do it cleanly. After the RefreshPeerProcedures are done on all RSes, we can be sure that no RS will create a queue for this peer again, so we can iterate over all the queues for all RSes and do another round of cleanup.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
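The cleanup pass Duo describes — sweep every region server's queues once all RefreshPeerProcedures have finished — can be modeled with plain Java maps. This is an illustrative sketch only: the real implementation talks to the replication queue storage, and every name below is hypothetical.

```java
import java.util.*;

// Toy model of the final cleanup pass. Once every region server has
// acknowledged the peer removal (all RefreshPeerProcedures done), no new
// queue for the peer can appear, so one sweep over all servers is enough.
class PeerQueueCleanup {
    // region server name -> peer ids that have a replication queue on it
    final Map<String, Set<String>> queuesByServer = new HashMap<>();

    void addQueue(String server, String peerId) {
        queuesByServer.computeIfAbsent(server, s -> new HashSet<>()).add(peerId);
    }

    // Returns how many queues were removed for the given peer.
    int removePeerQueues(String peerId) {
        int removed = 0;
        for (Set<String> peers : queuesByServer.values()) {
            if (peers.remove(peerId)) {
                removed++;
            }
        }
        return removed;
    }
}
```

The key property the procedure framework provides is the "no new queues after the barrier" guarantee; without it, a sweep like this can race with a region server recreating the queue.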
[jira] [Updated] (HBASE-19658) Fix and reenable TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap
[ https://issues.apache.org/jira/browse/HBASE-19658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anastasia Braginsky updated HBASE-19658: Attachment: HBASE-19658-V02.patch
> Fix and reenable TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap
> --
>
> Key: HBASE-19658
> URL: https://issues.apache.org/jira/browse/HBASE-19658
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 2.0.0-beta-1
> Reporter: stack
> Assignee: Anastasia Braginsky
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19658-V01.patch, HBASE-19658-V02.patch
>
> testFlatteningToJumboCellChunkMap was disabled so we could commit HBASE-19282 on branch-2. This test is failing reliably. Assigned to [~anastas]. This issue is about fixing the failing test and reenabling it in time for beta-2. Thanks A.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19658) Fix and reenable TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap
[ https://issues.apache.org/jira/browse/HBASE-19658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307432#comment-16307432 ] Hadoop QA commented on HBASE-19658: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 52s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 42s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 31s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 37s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestMemstoreLABWithoutPool | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19658 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904123/HBASE-19658-V02.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux ced5a6a001d0 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 6c2aa4c9cc | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/10825/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10825/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10825/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HBASE-19680) BufferedMutatorImpl#mutate should wait the result from AP in order to throw the failed mutations
[ https://issues.apache.org/jira/browse/HBASE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19680: --- Attachment: HBASE-19680.v0.patch
> BufferedMutatorImpl#mutate should wait the result from AP in order to throw the failed mutations
> --
>
> Key: HBASE-19680
> URL: https://issues.apache.org/jira/browse/HBASE-19680
> Project: HBase
> Issue Type: Improvement
> Reporter: Chia-Ping Tsai
> Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19680.v0.patch
>
> Currently, BMI#mutate doesn't wait for the result from the AP, so the errors are stored in the AP. The only way to return the errors to the user is to call flush and catch the exception, which is non-intuitive.
> I feel BMI#mutate should wait for the result. That is to say, the user can parse the exception thrown by BM#mutate to get the failed mutations. Also, we can then remove the global error state from the AP.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19680) BufferedMutatorImpl#mutate should wait the result from AP in order to throw the failed mutations
[ https://issues.apache.org/jira/browse/HBASE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19680: --- Status: Patch Available (was: Open)
> BufferedMutatorImpl#mutate should wait the result from AP in order to throw the failed mutations
> --
>
> Key: HBASE-19680
> URL: https://issues.apache.org/jira/browse/HBASE-19680
> Project: HBase
> Issue Type: Improvement
> Reporter: Chia-Ping Tsai
> Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19680.v0.patch
>
> Currently, BMI#mutate doesn't wait for the result from the AP, so the errors are stored in the AP. The only way to return the errors to the user is to call flush and catch the exception, which is non-intuitive.
> I feel BMI#mutate should wait for the result. That is to say, the user can parse the exception thrown by BM#mutate to get the failed mutations. Also, we can then remove the global error state from the AP.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
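The proposal above — surface asynchronous failures on the next mutate call rather than only at flush — can be sketched without the HBase client at all. Everything here (WaitingMutatorSketch, FailedMutationsException, the boolean success flag) is invented for illustration; the real change would live in BufferedMutatorImpl and AsyncProcess.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: mutate() first waits for the results of earlier asynchronous
// mutations and throws the accumulated failures, so errors surface
// without the caller ever invoking flush().
class WaitingMutatorSketch {
    static class FailedMutationsException extends RuntimeException {
        final List<String> failed;
        FailedMutationsException(List<String> failed) {
            super("failed mutations: " + failed);
            this.failed = failed;
        }
    }

    final ExecutorService pool = Executors.newFixedThreadPool(2);
    final Queue<Map.Entry<String, Future<Boolean>>> pending = new ArrayDeque<>();

    // willSucceed simulates whether the backing store accepts the mutation.
    void mutate(String mutation, boolean willSucceed) {
        drain(); // surface earlier failures here, not only at flush()
        pending.add(Map.entry(mutation, pool.submit(() -> {
            if (!willSucceed) {
                throw new RuntimeException("rejected: " + mutation);
            }
            return true;
        })));
    }

    // Wait for all outstanding mutations; throw if any of them failed.
    void drain() {
        List<String> failed = new ArrayList<>();
        Map.Entry<String, Future<Boolean>> e;
        while ((e = pending.poll()) != null) {
            try {
                e.getValue().get();
            } catch (InterruptedException | ExecutionException ex) {
                failed.add(e.getKey());
            }
        }
        if (!failed.isEmpty()) {
            throw new FailedMutationsException(failed);
        }
    }
}
```

The exception carries the failed mutations themselves, mirroring the way RetriesExhaustedWithDetailsException reports per-mutation failures in the HBase client.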
[jira] [Commented] (HBASE-19675) Miscellaneous HStore Class Improvements
[ https://issues.apache.org/jira/browse/HBASE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307435#comment-16307435 ] Chia-Ping Tsai commented on HBASE-19675:
{code}
-    if (files == null || files.isEmpty()) {
-      return new ArrayList<>();
+    if (CollectionUtils.isEmpty(files)) {
+      return Collections.emptyList();
     }
{code}
Could we file a jira to address such an issue? There are 2xx occurrences of the same pattern in our hbase code.
> Miscellaneous HStore Class Improvements
> --
>
> Key: HBASE-19675
> URL: https://issues.apache.org/jira/browse/HBASE-19675
> Project: HBase
> Issue Type: Improvement
> Components: hbase
> Affects Versions: 3.0.0
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Minor
> Attachments: HBASE-19675.1.patch, HBASE-19675.2.patch
>
> * Remove logging code guards in favor of slf4j parameters
> * Use {{CollectionUtils.isEmpty()}} consistently
> * Small check-style fixes
> * Remove flow control logic from {{trace}} statement
> {code}
> if (LOG.isTraceEnabled()) {
>   LOG.trace("No compacted files to archive");
>   return;
> }
> {code}
> * Replace two calls to the same getter to ensure that the value doesn't change between calls
> {code}
> if (getCompactedFiles() != null) {
>   for (HStoreFile file : getCompactedFiles()) {
>     name2File.put(file.getFileInfo().getActiveFileName(), file);
>   }
> }
> {code}
> * Make {{inputFiles}} a Set for fast calls to the {{contains}} method instead of a List
> {code}
> // some of the input files might already be deleted
> List<HStoreFile> inputStoreFiles = new ArrayList<>(compactionInputs.size());
> for (HStoreFile sf : this.getStorefiles()) {
>   if (inputFiles.contains(sf.getPath().getName())) {
>     inputStoreFiles.add(sf);
>   }
> }
> {code}
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
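For reference, the commons-collections call in the quoted diff behaves like the plain-Java guard below. Collections.emptyList() returns one shared immutable instance, so the change also avoids an allocation per call. EmptyGuard and copyOrEmpty are invented names for illustration.

```java
import java.util.*;

class EmptyGuard {
    // Same behavior as the quoted diff: treat null and empty alike, and
    // hand back the shared immutable empty list instead of allocating.
    static List<String> copyOrEmpty(Collection<String> files) {
        if (files == null || files.isEmpty()) { // i.e. CollectionUtils.isEmpty(files)
            return Collections.emptyList();
        }
        return new ArrayList<>(files);
    }
}
```

Callers must not mutate the returned list in the empty case, which is usually the point: a method returning "no results" should return something the caller treats as read-only.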
[jira] [Commented] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307437#comment-16307437 ] Hadoop QA commented on HBASE-19358: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 58s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 56s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} hbase-server: The patch generated 0 new + 67 unchanged - 2 fixed = 67 total (was 69) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 38s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 24s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}142m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 | | JIRA Issue | HBASE-19358 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904121/HBASE-19358-branch-1-v3.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopch
[jira] [Commented] (HBASE-19633) Clean up the replication queues in the postPeerModification stage when removing a peer
[ https://issues.apache.org/jira/browse/HBASE-19633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307452#comment-16307452 ] Hadoop QA commented on HBASE-19633: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HBASE-19397 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 39s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 44s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} HBASE-19397 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} hbase-client: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch hbase-replication passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} hbase-server: The patch generated 0 new + 9 unchanged - 1 fixed = 9 total (was 10) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 10 unchanged - 2 fixed = 10 total (was 12) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 43s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 0s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 34s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hbase-replication in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 55s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 53s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}1
[jira] [Updated] (HBASE-19658) Fix and reenable TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap
[ https://issues.apache.org/jira/browse/HBASE-19658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anastasia Braginsky updated HBASE-19658: Attachment: HBASE-19658-V03.patch
> Fix and reenable TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap
> --
>
> Key: HBASE-19658
> URL: https://issues.apache.org/jira/browse/HBASE-19658
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 2.0.0-beta-1
> Reporter: stack
> Assignee: Anastasia Braginsky
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19658-V01.patch, HBASE-19658-V02.patch, HBASE-19658-V03.patch
>
> testFlatteningToJumboCellChunkMap was disabled so we could commit HBASE-19282 on branch-2. This test is failing reliably. Assigned to [~anastas]. This issue is about fixing the failing test and reenabling it in time for beta-2. Thanks A.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19424) Metrics servlet doesn't work
[ https://issues.apache.org/jira/browse/HBASE-19424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki updated HBASE-19424: - Attachment: HBASE-19424.branch-1.patch
> Metrics servlet doesn't work
> --
>
> Key: HBASE-19424
> URL: https://issues.apache.org/jira/browse/HBASE-19424
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.4.0
> Reporter: Andrew Purtell
> Priority: Minor
> Fix For: 1.4.1, 1.5.0
>
> Attachments: HBASE-19424.branch-1.patch
>
> In branch-1 at least we put up a servlet on "/metrics" that is Hadoop's MetricsServlet. However, HBase users are expected to pick up metrics via "/jmx". We don't mention "/metrics" or link to it on the UI. If you attempt to access "/metrics" with the head of branch-1, it errors out due to an NPE:
> {noformat}
> 2017-12-04 16:06:37,403 ERROR [1874557409@qtp-1910896157-3] mortbay.log: /metrics
> java.lang.NullPointerException
>         at org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1049)
>         at org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> {noformat}
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19424) Metrics servlet doesn't work
[ https://issues.apache.org/jira/browse/HBASE-19424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki updated HBASE-19424: - Assignee: Toshihiro Suzuki Status: Patch Available (was: Open)
> Metrics servlet doesn't work
> --
>
> Key: HBASE-19424
> URL: https://issues.apache.org/jira/browse/HBASE-19424
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.4.0
> Reporter: Andrew Purtell
> Assignee: Toshihiro Suzuki
> Priority: Minor
> Fix For: 1.4.1, 1.5.0
>
> Attachments: HBASE-19424.branch-1.patch
>
> In branch-1 at least we put up a servlet on "/metrics" that is Hadoop's MetricsServlet. However, HBase users are expected to pick up metrics via "/jmx". We don't mention "/metrics" or link to it on the UI. If you attempt to access "/metrics" with the head of branch-1, it errors out due to an NPE:
> {noformat}
> 2017-12-04 16:06:37,403 ERROR [1874557409@qtp-1910896157-3] mortbay.log: /metrics
> java.lang.NullPointerException
>         at org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1049)
>         at org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> {noformat}
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19424) Metrics servlet doesn't work
[ https://issues.apache.org/jira/browse/HBASE-19424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307463#comment-16307463 ] Toshihiro Suzuki commented on HBASE-19424: -- I just attached a patch. It seems that we need to set org.apache.hadoop.http.HttpServer2.CONF_CONTEXT_ATTRIBUTE ("hadoop.conf") as a servlet context attribute. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
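A minimal sketch of the idea in the comment above. The attribute name is the value of HttpServer2.CONF_CONTEXT_ATTRIBUTE; the wrapper class and helper method here are hypothetical illustrations, not the actual patch:

```java
import javax.servlet.ServletContext;
import org.apache.hadoop.conf.Configuration;

public class MetricsServletContextSetup {
  // Value of org.apache.hadoop.http.HttpServer2.CONF_CONTEXT_ATTRIBUTE.
  // Hadoop's MetricsServlet looks the Configuration up under this key;
  // isInstrumentationAccessAllowed() NPEs when the attribute is absent,
  // which is the failure seen on /metrics.
  static final String CONF_CONTEXT_ATTRIBUTE = "hadoop.conf";

  // Hypothetical helper: register the Configuration on the servlet
  // context before the /metrics servlet serves its first request.
  static void registerConf(ServletContext ctx, Configuration conf) {
    ctx.setAttribute(CONF_CONTEXT_ATTRIBUTE, conf);
  }
}
```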
[jira] [Updated] (HBASE-19478) Utilize multi-get to speed up WAL file checking in BackupLogCleaner
[ https://issues.apache.org/jira/browse/HBASE-19478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19478: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Thanks for the patch, Toshihiro > Utilize multi-get to speed up WAL file checking in BackupLogCleaner > --- > > Key: HBASE-19478 > URL: https://issues.apache.org/jira/browse/HBASE-19478 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Toshihiro Suzuki > Fix For: 3.0.0 > > Attachments: HBASE-19478.patch, HBASE-19478.v2.patch, > HBASE-19478.v3.patch > > > Currently BackupLogCleaner#getDeletableFiles() issues one Get per WAL file: > {code} > for (FileStatus file : files) { > String wal = file.getPath().toString(); > boolean logInSystemTable = table.isWALFileDeletable(wal); > {code} > This is rather inefficient considering the number of WAL files in production > can get quite large. > We should use multi-get to reduce the number of calls to backup table (which > normally resides on another server). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
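The batching the issue asks for could look roughly like the following sketch. `Table#get(List<Get>)` is HBase's real multi-get API; `rowKeyForWAL` is a hypothetical placeholder for however the backup system table actually keys WAL files:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

// Sketch only: one batched RPC to the backup table instead of one Get
// per WAL file. rowKeyForWAL(...) is a hypothetical stand-in for the
// backup table's real row-key scheme.
static boolean[] walFilesKnownToBackup(Table table, FileStatus[] files)
    throws IOException {
  List<Get> gets = new ArrayList<>(files.length);
  for (FileStatus file : files) {
    gets.add(new Get(rowKeyForWAL(file.getPath().toString())));
  }
  Result[] results = table.get(gets); // single multi-get round trip
  boolean[] known = new boolean[results.length];
  for (int i = 0; i < results.length; i++) {
    known[i] = !results[i].isEmpty(); // non-empty row => WAL is tracked
  }
  return known;
}
```

Since the backup table normally lives on another server, collapsing N per-file Gets into one batched call removes N-1 network round trips, which is the whole point of the change.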
[jira] [Commented] (HBASE-19680) BufferedMutatorImpl#mutate should wait the result from AP in order to throw the failed mutations
[ https://issues.apache.org/jira/browse/HBASE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307474#comment-16307474 ] Hadoop QA commented on HBASE-19680: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 50s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 8s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} hbase-client: The patch generated 0 new + 112 unchanged - 7 fixed = 112 total (was 119) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} The patch hbase-server passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 35s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 43s{color} | {color:green} hbase-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 18s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19680 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904124/HBASE-19680.v0.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux c494628274d9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 6c2aa4c9cc | | maven | version: Apache M
[jira] [Commented] (HBASE-19658) Fix and reenable TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap
[ https://issues.apache.org/jira/browse/HBASE-19658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307494#comment-16307494 ] Hadoop QA commented on HBASE-19658: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 39s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 37s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}102m 56s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19658 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904126/HBASE-19658-V03.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a0bd53a9c892 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 6c2aa4c9cc | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10827/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10827/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated. > Fix and reenable > TestCompactingToCellFlatMapMemStore#testFlatteningToJumboCellChunkMap > -- > > Key: HBASE-19
[jira] [Commented] (HBASE-19424) Metrics servlet doesn't work
[ https://issues.apache.org/jira/browse/HBASE-19424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307505#comment-16307505 ] Hadoop QA commented on HBASE-19424: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 6s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 57s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 56s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 44s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.5 2.7.4. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 13s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 | | JIRA Issue | HBASE-19424 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904127/HBASE-19424.branch-1.patch | | Optional Tests | asflicense
[jira] [Commented] (HBASE-19478) Utilize multi-get to speed up WAL file checking in BackupLogCleaner
[ https://issues.apache.org/jira/browse/HBASE-19478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307514#comment-16307514 ] Hudson commented on HBASE-19478: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4325 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4325/]) HBASE-19478 Utilize multi-get to speed up WAL file checking in (tedyu: rev cafd4e4ad76f45be912edc9d5021f872de94fd5c) * (edit) hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/master/BackupLogCleaner.java * (edit) hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java * (edit) hbase-backup/src/test/java/org/apache/hadoop/hbase/backup/TestBackupSystemTable.java -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19681) Online snapshot creation failing with missing store file
Anirban Roy created HBASE-19681: --- Summary: Online snapshot creation failing with missing store file Key: HBASE-19681 URL: https://issues.apache.org/jira/browse/HBASE-19681 Project: HBase Issue Type: Bug Components: backup&restore, snapshots Affects Versions: 1.3.0 Environment: Hadoop - 2.7.3 HBase 1.3.0 OS - GNU/Linux x86_64 Cluster - Amazon Elastic Mapreduce Reporter: Anirban Roy We are facing a problem creating an online snapshot of our HBase table. The table contains 20 TB of data and receives ~1 writes per second. Snapshot creation fails intermittently with an error that some hfile is missing; see the detailed output below. Once we locate the region server hosting the region and restart that region server, snapshot creation succeeds. It seems the missing hfile was removed by a minor compaction, but the region server still holds a pointer to the file. [hadoop@ip-10-0-12-164 ~]$ hbase shell HBase Shell; enter 'help' for list of supported commands. Type "exit" to leave the HBase Shell Version 1.3.0, rUnknown, Fri Feb 17 18:15:07 UTC 2017 hbase(main):001:0> snapshot 'x_table', 'x_snapshot' ERROR: org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { ss=x_snapshot table=x_table type=FLUSH } had an error. 
Procedure x_snapshot { waiting=[] done=[ip-10-0-9-31.ec2.internal,16020,1508372578254, ip-10-0-0-32.ec2.internal,16020,1508372591059, ip-10-0-14-221.ec2.internal,16020,1508372580873, ip-10-0-15-185.ec2.internal,16020,1508372588507, ip-10-0-9-43.ec2.internal,16020,1508372569107, ip-10-0-10-62.ec2.internal,16020,1512885921693, ip-10-0-8-216.ec2.internal,16020,1508372584133, ip-10-0-1-207.ec2.internal,16020,1508372580144, ip-10-0-0-173.ec2.internal,16020,1508372584969, ip-10-0-4-79.ec2.internal,16020,1508372587161, ip-10-0-3-165.ec2.internal,16020,1508372593566, ip-10-0-14-137.ec2.internal,16020,1508372583225, ip-10-0-6-33.ec2.internal,16020,1508372581587, ip-10-0-15-199.ec2.internal,16020,1508372587478, ip-10-0-5-253.ec2.internal,16020,1508372581243, ip-10-0-1-99.ec2.internal,16020,1508372609684] }     at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:354)     at org.apache.hadoop.hbase.master.MasterRpcServices.isSnapshotDone(MasterRpcServices.java:1058)     at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:61089)     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2328)     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168) Caused by: org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via ip-10-0-3-13.ec2.internal,16020,1508372563772:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: java.io.FileNotFoundException: File does not exist: hdfs://ip-10-0-12-164.ec2.internal:8020/user/hbase/data/default/x_table/ecbb3aeaf7c5b1f65742deab5812362c/d/f76d8827c29244b99bf9344982956523     at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)     at 
org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:315)     at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:344)     ... 6 more Caused by: org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: java.io.FileNotFoundException: File does not exist: hdfs://ip-10-0-12-164.ec2.internal:8020/user/hbase/data/default/x_table/ecbb3aeaf7c5b1f65742deab5812362c/d/f76d8827c29244b99bf9344982956523     at org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool.waitForOutstandingTasks(RegionServerSnapshotManager.java:347)     at org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.flushSnapshot(FlushSnapshotSubprocedure.java:140)     at org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.insideBarrier(FlushSnapshotSubprocedure.java:160)     at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:187)     at org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:53)     at java.util.concurrent.FutureTask.run(FutureTask.java:266)     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)     at java.lang.Thread.run(Thread.java:745)  Here is some help for this command: Take a snapshot of specified table. Examples:   hbase> snapshot 'sourceTable', 'snapshotName'  hbase> snapshot
[jira] [Commented] (HBASE-19681) Online snapshot creation failing with missing store file
[ https://issues.apache.org/jira/browse/HBASE-19681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307522#comment-16307522 ] Anirban Roy commented on HBASE-19681: - We would also like to know whether there is any potential for data loss from this error. Looking at the region server log, we see a reference to the hfile and see a few other hfiles being compacted into it, but no record of this particular hfile being compacted into a newer hfile. When we check HDFS, however, the file really is missing. Once the region server is restarted, it no longer complains about the missing hfile. It is therefore important to understand this behavior, and any impact it may have, before we get a fix here.
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307521#comment-16307521 ] Mike Drob commented on HBASE-19369: --- When hadoop is configured with Erasure Coding (EC - new in 3.0 and maybe 2.9 I think) then our WALs don't work because EC doesn't support hsync or hflush. To get around this, we can still request from the NN that the file we create use replication instead of EC but we can only do so using the builder API, which doesn't exist in the older versions that we support. Running on hadoop 2.8, we can call {{create}} and {{createNonRecursive}} and the files will be normal (replicated) files. Running on hadoop 3.0, we can call those same methods and the files will be either replicated or erasure coded depending on the policy configured. Running on hadoop 3.0, we can use new builder API to make sure that the files are replicated. > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
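A hedged sketch of the builder call described in the comment above, assuming Hadoop 3.0+ and a DistributedFileSystem handle (the helper method name is illustrative; on the older Hadoop versions HBase supports this builder API does not exist, which is why the patch cannot call it unconditionally):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch, Hadoop 3.0+ only: ask the NameNode to create the WAL file as
// a replicated file even when the parent directory carries an erasure
// coding policy (EC output streams do not support hflush/hsync, which
// the WAL requires for durability).
static FSDataOutputStream createReplicatedWal(DistributedFileSystem dfs,
    Path walPath) throws IOException {
  return dfs.createFile(walPath)
      .replicate()   // request replication instead of erasure coding
      .recursive()   // create missing parent directories, like createNonRecursive's opposite
      .build();
}
```

On Hadoop 2.8, plain create/createNonRecursive already yields replicated files, so the builder is only needed where an EC policy could otherwise apply; the caller would have to probe for the builder's availability at runtime.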
[jira] [Commented] (HBASE-19681) Online snapshot creation failing with missing store file
[ https://issues.apache.org/jira/browse/HBASE-19681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307526#comment-16307526 ] Ted Yu commented on HBASE-19681: Can you upload the region server log so that we know more about f76d8827c29244b99bf9344982956523 ? If possible, please upgrade to 1.4.0, which has related fixes such as: HBASE-19468 FNFE during scans and flushes
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307528#comment-16307528 ] Ted Yu commented on HBASE-19369: {code} + String dfsClassName = "org.apache.hadoop.hdfs.DistributedFileSystem"; + String builderClassName = dfsClassName + ".HdfsDataOutputStreamBuilder"; {code} Shouldn't '$' be used to form the class name for HdfsDataOutputStreamBuilder ? {code} +builderClass = Class.forName(builderClassName); + } catch (ClassNotFoundException e) { +LOG.info("{} not available, will not use builder API for file creation.", builderClassName); {code} Suggest changing the log level above to DEBUG. > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
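[Editor's note] The review comment above is correct: {{Class.forName}} expects a class's *binary* name, in which nested classes are separated from their enclosing class by {{$}}, not {{.}}. This is easy to demonstrate with a nested class from the JDK itself, {{java.util.Map.Entry}}:

```java
// Class.forName resolves nested classes by binary name: '$' works, '.' does not.
public class NestedClassName {
    /** Returns true when the given binary name can be loaded. */
    static boolean loadable(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(loadable("java.util.Map$Entry")); // true
        System.out.println(loadable("java.util.Map.Entry")); // false
    }
}
```

So the patch's `dfsClassName + ".HdfsDataOutputStreamBuilder"` would never load; it needs `"$"` as the separator.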
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307623#comment-16307623 ] Duo Zhang commented on HBASE-19369: --- {quote} or using FSDataOutputStreamBuilder#replicate() API to create 3x replication files in an erasure-coded directory. {quote} OK, got it. So the problem here is that, if we want to write WAL under an EC directory, we need to use the newly introduced builder API? > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19685) Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors
Chia-Ping Tsai created HBASE-19685: -- Summary: Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors Key: HBASE-19685 URL: https://issues.apache.org/jira/browse/HBASE-19685 Project: HBase Issue Type: Bug Components: test Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 2.0.0-beta-2 {code} java.lang.AssertionError at org.apache.hadoop.hbase.regionserver.TestFSErrorsExposed.testFullSystemBubblesFSErrors(TestFSErrorsExposed.java:221) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307625#comment-16307625 ] Duo Zhang commented on HBASE-19369: --- Then the approach here is OK, but maybe a better solution is to revisit our hadoop-compat modules? Maybe we need to introduce a hbase-hadoop3-compat module? For async dfs output, the reason I use reflection is that I need to use lots of IA.Private stuffs... > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307602#comment-16307602 ] Mike Drob commented on HBASE-19369: --- I'm not sure of the underlying technical reasons, all I have are the EC docs. http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html#Limitations {quote} Certain HDFS file write operations, i.e., hflush, hsync and append, are not supported on erasure coded files due to substantial technical challenges. * append() on an erasure coded file will throw IOException. * hflush() and hsync() on DFSStripedOutputStream are no-op. Thus calling hflush() or hsync() on an erasure coded file can not guarantee data being persistent. A client can use StreamCapabilities API to query whether a OutputStream supports hflush() and hsync(). If the client desires data persistence via hflush() and hsync(), the current remedy is creating such files as regular 3x replication files in a non-erasure-coded directory, or using FSDataOutputStreamBuilder#replicate() API to create 3x replication files in an erasure-coded directory. {quote} > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. 
We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307604#comment-16307604 ] Duo Zhang commented on HBASE-19369: --- {quote} hflush() and hsync() on DFSStripedOutputStream are no-op. Thus calling hflush() or hsync() on an erasure coded file can not guarantee data being persistent. {quote} This is enough to say that WAL can not use it. We need to make sure the data is persistent, then we can return success to user. > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HBASE-19588) Additional jar dependencies needed for mapreduce PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-19588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai reassigned HBASE-19588: -- Assignee: Albert Chu > Additional jar dependencies needed for mapreduce PerformanceEvaluation > -- > > Key: HBASE-19588 > URL: https://issues.apache.org/jira/browse/HBASE-19588 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.4.0 >Reporter: Albert Chu >Assignee: Albert Chu >Priority: Minor > Attachments: HBASE-19588.branch-1.4.patch > > > I have a unit test that runs a simple PerformanceEvaluation test to make sure > things are basically working > {noformat} > bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=5 > sequentialWrite 1 > {noformat} > This test runs against Hadoop 2.7.0 and works against all past versions > 0.99.0 and up. It broke with 1.4.0 with the following error. > {noformat} > 2017-12-21 13:49:40,974 INFO [main] mapreduce.Job: Task Id : > attempt_1513892752187_0002_m_04_2, Status : FAILED > Error: java.io.IOException: java.lang.reflect.InvocationTargetException > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:297) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:250) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) > ... 12 more > Caused by: java.lang.RuntimeException: Could not create interface > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop > compatibility jar on the classpath? > at > org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:75) > at > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeper.(MetricsZooKeeper.java:38) > at > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.(RecoverableZooKeeper.java:130) > at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:143) > at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:181) > at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:155) > at > org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.(ZooKeeperKeepAliveConnection.java:43) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1737) > at > org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:945) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:721) > ... 
17 more > Caused by: java.util.ServiceConfigurationError: > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be > instantiated > at java.util.ServiceLoader.fail(ServiceLoader.java:224) > at java.util.ServiceLoader.access$100(ServiceLoader.java:181) > at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377) > at java.util.ServiceLoader$1.next(ServiceLoader.java:445) > at > org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(Comp
[jira] [Comment Edited] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307602#comment-16307602 ] Mike Drob edited comment on HBASE-19369 at 1/2/18 2:09 AM: --- I'm not sure of the underlying technical reasons, all I have are the EC docs. http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html#Limitations {quote} Certain HDFS file write operations, i.e., hflush, hsync and append, are not supported on erasure coded files due to substantial technical challenges. * append() on an erasure coded file will throw IOException. * hflush() and hsync() on DFSStripedOutputStream are no-op. Thus calling hflush() or hsync() on an erasure coded file can not guarantee data being persistent. A client can use StreamCapabilities API to query whether a OutputStream supports hflush() and hsync(). If the client desires data persistence via hflush() and hsync(), the current remedy is creating such files as regular 3x replication files in a non-erasure-coded directory, or using FSDataOutputStreamBuilder#replicate() API to create 3x replication files in an erasure-coded directory. {quote} was (Author: mdrob): I'm not sure of the underlying technical reasons, all I have are the EC docs. http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html#Limitations {quote} Certain HDFS file write operations, i.e., hflush, hsync and append, are not supported on erasure coded files due to substantial technical challenges. * append() on an erasure coded file will throw IOException. * hflush() and hsync() on DFSStripedOutputStream are no-op. Thus calling hflush() or hsync() on an erasure coded file can not guarantee data being persistent. A client can use StreamCapabilities API to query whether a OutputStream supports hflush() and hsync(). 
If the client desires data persistence via hflush() and hsync(), the current remedy is creating such files as regular 3x replication files in a non-erasure-coded directory, or using FSDataOutputStreamBuilder#replicate() API to create 3x replication files in an erasure-coded directory. {quote} > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19679) Superusers Logging and Data Structures
[ https://issues.apache.org/jira/browse/HBASE-19679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307593#comment-16307593 ] Hudson commented on HBASE-19679: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4327 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4327/]) HBASE-19679 Superusers Logging and Data Structures (BELUGA BEHR) (tedyu: rev 6708d544782b4e919908ddfdf1a34d02848e9388) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/security/Superusers.java > Superusers Logging and Data Structures > -- > > Key: HBASE-19679 > URL: https://issues.apache.org/jira/browse/HBASE-19679 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19679.1.patch > > > * Use Sets instead of List for search efficiency reasons > * Improve logging -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19679) Superusers.java Logging and Data Structures
[ https://issues.apache.org/jira/browse/HBASE-19679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19679: --- Resolution: Duplicate Status: Resolved (was: Patch Available) Dupe of HBASE-19678 > Superusers.java Logging and Data Structures > --- > > Key: HBASE-19679 > URL: https://issues.apache.org/jira/browse/HBASE-19679 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19679.1.patch > > > * Use Sets instead of List for search efficiency reasons > * Improve logging -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19682) Use Collections.emptyList() For Empty List Values
[ https://issues.apache.org/jira/browse/HBASE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307576#comment-16307576 ] Hadoop QA commented on HBASE-19682: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 7m 30s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s{color} | {color:red} hbase-client: The patch generated 1 new + 188 unchanged - 1 fixed = 189 total (was 189) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s{color} | {color:red} hbase-server: The patch generated 2 new + 154 unchanged - 1 fixed = 156 total (was 155) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 38s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s{color} | {color:green} hbase-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 50s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 48s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s{color} | {color:green} hbase-thrift in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 55s{color} | {color:green} hbase-backup in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 16s{color} | {color:green} hbase-rest in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color
[jira] [Updated] (HBASE-19682) Use Collections.emptyList() For Empty List Values
[ https://issues.apache.org/jira/browse/HBASE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19682: Status: Patch Available (was: Open) > Use Collections.emptyList() For Empty List Values > - > > Key: HBASE-19682 > URL: https://issues.apache.org/jira/browse/HBASE-19682 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19682.1.patch > > > Use {{Collections.emptyList()}} for returning an empty list instead of > {{return new ArrayList<>()}}. The default constructor creates a buffer of > size 10 for _ArrayList_; therefore, returning this static value saves some > memory and GC pressure and avoids allocating a new internal > buffer for each instantiation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
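[Editor's note] The point of the change above is that {{Collections.emptyList()}} returns one shared, immutable instance rather than allocating a fresh {{ArrayList}} per call. A minimal demonstration:

```java
import java.util.Collections;
import java.util.List;

// Collections.emptyList() hands back the same shared immutable instance
// every time, so no per-call backing array is ever allocated.
public class EmptyListDemo {
    public static void main(String[] args) {
        List<String> a = Collections.emptyList();
        List<String> b = Collections.emptyList();
        System.out.println(a == b);   // same shared instance
        System.out.println(a.size()); // 0
    }
}
```

Note the trade-off: the returned list is immutable, so this substitution is only safe on code paths where callers never mutate the returned list.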
[jira] [Created] (HBASE-19683) Remove Superfluous Methods From String Class
BELUGA BEHR created HBASE-19683: --- Summary: Remove Superfluous Methods From String Class Key: HBASE-19683 URL: https://issues.apache.org/jira/browse/HBASE-19683 Project: HBase Issue Type: Improvement Components: hbase Affects Versions: 3.0.0 Reporter: BELUGA BEHR Priority: Trivial * Remove isEmpty method * Remove repeat Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
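[Editor's note] The suggested replacements are commons-lang3's {{StringUtils.isEmpty}} and {{StringUtils.repeat}}. As a sketch of their semantics using only the JDK (the {{isEmpty}} helper below mirrors the null-safe commons-lang3 behavior; {{String#repeat}} requires Java 11):

```java
// JDK-only stand-ins for the commons-lang3 helpers referenced above.
// StringUtils.isEmpty is null-safe, unlike a plain s.isEmpty() call.
public class StringHelpers {
    /** Mirrors StringUtils.isEmpty: true for null or zero-length input. */
    static boolean isEmpty(CharSequence s) {
        return s == null || s.length() == 0;
    }

    public static void main(String[] args) {
        System.out.println(isEmpty(null));  // true
        System.out.println(isEmpty(""));    // true
        System.out.println(isEmpty("x"));   // false
        System.out.println("ab".repeat(3)); // ababab
    }
}
```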
[jira] [Reopened] (HBASE-19678) HBase Admin security capabilities should be represented as a Set
[ https://issues.apache.org/jira/browse/HBASE-19678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR reopened HBASE-19678: - > HBase Admin security capabilities should be represented as a Set > > > Key: HBASE-19678 > URL: https://issues.apache.org/jira/browse/HBASE-19678 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19678.1.patch > > > {code:title=org.apache.hadoop.hbase.client.Admin} > /** >* Return the set of supported security capabilities. >* @throws IOException >* @throws UnsupportedOperationException >*/ > List getSecurityCapabilities() throws IOException; > {code} > The comment says a "set" but it returns a List. A Set would be the most > appropriate data structure here, an immutable one perhaps, because the code > that interacts with it looks up information using the _contains_ method which > would be served well by a Set. Please change this interface to return a Set. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
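[Editor's note] The rationale above is that callers probe the capabilities with {{contains}}, which a Set serves in O(1) versus O(n) for a List. A sketch of the proposed shape — the enum below is a stand-in for HBase's actual {{SecurityCapability}} type, and the returned values are illustrative only:

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

// Why a Set suits getSecurityCapabilities(): callers mostly ask
// contains(...), and an immutable Set makes that lookup cheap and safe.
public class CapabilitiesDemo {
    enum Capability { AUTHENTICATION, AUTHORIZATION, CELL_VISIBILITY }

    /** Immutable set, as the JIRA suggests; contents here are hypothetical. */
    static Set<Capability> getSecurityCapabilities() {
        return Collections.unmodifiableSet(
            EnumSet.of(Capability.AUTHENTICATION, Capability.AUTHORIZATION));
    }

    public static void main(String[] args) {
        Set<Capability> caps = getSecurityCapabilities();
        System.out.println(caps.contains(Capability.AUTHORIZATION));   // true
        System.out.println(caps.contains(Capability.CELL_VISIBILITY)); // false
    }
}
```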
[jira] [Reopened] (HBASE-19679) Superusers Logging and Data Structures
[ https://issues.apache.org/jira/browse/HBASE-19679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reopened HBASE-19679: > Superusers Logging and Data Structures > -- > > Key: HBASE-19679 > URL: https://issues.apache.org/jira/browse/HBASE-19679 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19679.1.patch > > > * Use Sets instead of List for search efficiency reasons > * Improve logging -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307573#comment-16307573 ] Hadoop QA commented on HBASE-19683: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 44s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hbase-common: The patch generated 0 new + 10 unchanged - 1 fixed = 10 total (was 11) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s{color} | {color:red} hbase-client: The patch generated 1 new + 10 unchanged - 1 fixed = 11 total (was 11) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 48s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 8s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s{color} | {color:green} hbase-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 43s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19683 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904138/HBASE-19683.1.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux c1d58eb4e6fd 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /ho
[jira] [Updated] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19683: Attachment: HBASE-19683.1.patch > Remove Superfluous Methods From String Class > > > Key: HBASE-19683 > URL: https://issues.apache.org/jira/browse/HBASE-19683 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19683.1.patch > > > * Remove isEmpty method > * Remove repeat > Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19683: Status: Patch Available (was: Open) > Remove Superfluous Methods From String Class > > > Key: HBASE-19683 > URL: https://issues.apache.org/jira/browse/HBASE-19683 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19683.1.patch, HBASE-19683.2.patch > > > * Remove isEmpty method > * Remove repeat > Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (HBASE-19679) Superusers Logging and Data Structures
[ https://issues.apache.org/jira/browse/HBASE-19679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HBASE-19679. Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0-beta-2 Committed patch from HBASE-19678 under this JIRA. > Superusers Logging and Data Structures > -- > > Key: HBASE-19679 > URL: https://issues.apache.org/jira/browse/HBASE-19679 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19679.1.patch > > > * Use Sets instead of List for search efficiency reasons > * Improve logging -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19675) Miscellaneous HStore Class Improvements
[ https://issues.apache.org/jira/browse/HBASE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307565#comment-16307565 ] BELUGA BEHR commented on HBASE-19675: - [~chia7712] [HBASE-19675] > Miscellaneous HStore Class Improvements > --- > > Key: HBASE-19675 > URL: https://issues.apache.org/jira/browse/HBASE-19675 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19675.1.patch, HBASE-19675.2.patch > > > * Remove logging code guards in favor of slf4j parameters > * Use {{CollectionsUtils.isEmpty()}} consistently > * Small check-style fixes > * Remove flow control logic from {{trace}} statement {code} if > (LOG.isTraceEnabled()) { > LOG.trace("No compacted files to archive"); > return; > }{code} > * Replace two calls to the same getter with a single call, so that the value can't > change between calls {code} if (getCompactedFiles() != null) { > for (HStoreFile file : getCompactedFiles()) { > name2File.put(file.getFileInfo().getActiveFileName(), file); > } > }{code} > * Make 'inputFiles' a Set for fast calls to the {{contains}} method > {code}//some of the input files might already be deleted > List<HStoreFile> inputStoreFiles = new > ArrayList<>(compactionInputs.size()); > for (HStoreFile sf : this.getStorefiles()) { > if (inputFiles.contains(sf.getPath().getName())) { > inputStoreFiles.add(sf); > } > }{code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
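Two of the patterns from this patch can be sketched in isolation. The class, method, and field names below are hypothetical stand-ins, not the actual HStore code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class HStorePatterns {
    // Stands in for the field behind getCompactedFiles(); in HStore the
    // value may be swapped concurrently, which is why calling the getter
    // twice is risky.
    private final List<String> compactedFiles;

    public HStorePatterns(List<String> compactedFiles) {
        this.compactedFiles = compactedFiles;
    }

    List<String> getCompactedFiles() {
        return compactedFiles;
    }

    // Call the getter once and reuse the snapshot, so the null-check and
    // the loop cannot observe two different values.
    Map<String, String> indexByName() {
        Map<String, String> name2File = new HashMap<>();
        List<String> snapshot = getCompactedFiles();
        if (snapshot != null) {
            for (String file : snapshot) {
                name2File.put(file, file);
            }
        }
        return name2File;
    }

    // A Set makes each contains() call O(1) instead of a List's linear scan.
    static List<String> selectInputs(Set<String> inputFiles, List<String> storeFiles) {
        List<String> selected = new ArrayList<>(inputFiles.size());
        for (String sf : storeFiles) {
            if (inputFiles.contains(sf)) { // some inputs may already be deleted
                selected.add(sf);
            }
        }
        return selected;
    }
}
```

The slf4j change from the same patch (replacing `if (LOG.isTraceEnabled())` guards with parameterized `LOG.trace("... {}", arg)` calls) is omitted here since it needs the slf4j dependency.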
[jira] [Updated] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19683: Status: Patch Available (was: Open) > Remove Superfluous Methods From String Class > > > Key: HBASE-19683 > URL: https://issues.apache.org/jira/browse/HBASE-19683 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19683.1.patch > > > * Remove isEmpty method > * Remove repeat > Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19679) Superusers Logging and Data Structures
[ https://issues.apache.org/jira/browse/HBASE-19679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19679: --- Summary: Superusers Logging and Data Structures (was: Superusers.java Logging and Data Structures) > Superusers Logging and Data Structures > -- > > Key: HBASE-19679 > URL: https://issues.apache.org/jira/browse/HBASE-19679 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19679.1.patch > > > * Use Sets instead of List for search efficiency reasons > * Improve logging -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19675) Miscellaneous HStore Class Improvements
[ https://issues.apache.org/jira/browse/HBASE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307566#comment-16307566 ] BELUGA BEHR edited comment on HBASE-19675 at 1/1/18 10:11 PM: -- Please consider accepting this patch as-is while I work on [HBASE-19682] was (Author: belugabehr): Please consider accepting this patch while I work on [HBASE-19682] > Miscellaneous HStore Class Improvements > --- > > Key: HBASE-19675 > URL: https://issues.apache.org/jira/browse/HBASE-19675 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19675.1.patch, HBASE-19675.2.patch > > > * Remove logging code guards in favor of slf4j parameters > * Use {{CollectionsUtils.isEmpty()}} consistently > * Small check-style fixes > * Remove flow control logic from {{trace}} statement {code} if > (LOG.isTraceEnabled()) { > LOG.trace("No compacted files to archive"); > return; > }{code} > * Replace two calls to the same getter with a single call, so that the value can't > change between calls {code} if (getCompactedFiles() != null) { > for (HStoreFile file : getCompactedFiles()) { > name2File.put(file.getFileInfo().getActiveFileName(), file); > } > }{code} > * Make 'inputFiles' a Set for fast calls to the {{contains}} method > {code}//some of the input files might already be deleted > List<HStoreFile> inputStoreFiles = new > ArrayList<>(compactionInputs.size()); > for (HStoreFile sf : this.getStorefiles()) { > if (inputFiles.contains(sf.getPath().getName())) { > inputStoreFiles.add(sf); > } > }{code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19678) HBase Admin security capabilities should be represented as a Set
[ https://issues.apache.org/jira/browse/HBASE-19678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307564#comment-16307564 ] BELUGA BEHR commented on HBASE-19678: - As mentioned in [HBASE-19679], this is actually still open. I just posted my patch for a different issue under this ticket. > HBase Admin security capabilities should be represented as a Set > > > Key: HBASE-19678 > URL: https://issues.apache.org/jira/browse/HBASE-19678 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19678.1.patch > > > {code:title=org.apache.hadoop.hbase.client.Admin} > /** >* Return the set of supported security capabilities. >* @throws IOException >* @throws UnsupportedOperationException >*/ > List getSecurityCapabilities() throws IOException; > {code} > The comment says a "set" but it returns a List. A Set would be the most > appropriate data structure here, an immutable one perhaps, because the code > that interacts with it looks up information using the _contains_ method which > would be served well by a Set. Please change this interface to return a Set. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19678) HBase Admin security capabilities should be represented as a Set
[ https://issues.apache.org/jira/browse/HBASE-19678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307564#comment-16307564 ] BELUGA BEHR edited comment on HBASE-19678 at 1/1/18 10:05 PM: -- As mentioned in [HBASE-19679], this is actually still open. I accidentally posted my patch for a different issue under this ticket. was (Author: belugabehr): As mentioned in [HBASE-19679], this is actually still open. I just posted my patch for a different issue under this ticket. > HBase Admin security capabilities should be represented as a Set > > > Key: HBASE-19678 > URL: https://issues.apache.org/jira/browse/HBASE-19678 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19678.1.patch > > > {code:title=org.apache.hadoop.hbase.client.Admin} > /** >* Return the set of supported security capabilities. >* @throws IOException >* @throws UnsupportedOperationException >*/ > List getSecurityCapabilities() throws IOException; > {code} > The comment says a "set" but it returns a List. A Set would be the most > appropriate data structure here, an immutable one perhaps, because the code > that interacts with it looks up information using the _contains_ method which > would be served well by a Set. Please change this interface to return a Set. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
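The change requested in HBASE-19678 can be sketched as follows. The enum below is a hypothetical stand-in for HBase's SecurityCapability, and this is not the actual Admin interface:

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

public class SecurityCapabilitiesSketch {
    // Hypothetical stand-in for the real SecurityCapability enum.
    enum SecurityCapability {
        SIMPLE_AUTHENTICATION, SECURE_AUTHENTICATION, AUTHORIZATION
    }

    // Returning an immutable Set matches the javadoc's wording ("the set of
    // supported security capabilities") and serves callers that probe it
    // with contains() in O(1) rather than a List's linear scan.
    static Set<SecurityCapability> getSecurityCapabilities() {
        return Collections.unmodifiableSet(
            EnumSet.of(SecurityCapability.SECURE_AUTHENTICATION,
                       SecurityCapability.AUTHORIZATION));
    }
}
```

Making the returned Set immutable also hardens the API: a caller can no longer accidentally mutate the server-reported capability list.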
[jira] [Commented] (HBASE-19675) Miscellaneous HStore Class Improvements
[ https://issues.apache.org/jira/browse/HBASE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307566#comment-16307566 ] BELUGA BEHR commented on HBASE-19675: - Please consider accepting this patch while I work on [HBASE-19682] > Miscellaneous HStore Class Improvements > --- > > Key: HBASE-19675 > URL: https://issues.apache.org/jira/browse/HBASE-19675 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19675.1.patch, HBASE-19675.2.patch > > > * Remove logging code guards in favor of slf4j parameters > * Use {{CollectionsUtils.isEmpty()}} consistently > * Small check-style fixes > * Remove flow control logic from {{trace}} statement {code} if > (LOG.isTraceEnabled()) { > LOG.trace("No compacted files to archive"); > return; > }{code} > * Replace two calls to the same getter with a single call, so that the value can't > change between calls {code} if (getCompactedFiles() != null) { > for (HStoreFile file : getCompactedFiles()) { > name2File.put(file.getFileInfo().getActiveFileName(), file); > } > }{code} > * Make 'inputFiles' a Set for fast calls to the {{contains}} method > {code}//some of the input files might already be deleted > List<HStoreFile> inputStoreFiles = new > ArrayList<>(compactionInputs.size()); > for (HStoreFile sf : this.getStorefiles()) { > if (inputFiles.contains(sf.getPath().getName())) { > inputStoreFiles.add(sf); > } > }{code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19679) Superusers.java Logging and Data Structures
[ https://issues.apache.org/jira/browse/HBASE-19679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307555#comment-16307555 ] BELUGA BEHR commented on HBASE-19679: - OK, so I made a mistake here. I posted the same patch under two different tickets. The ticket [HBASE-19678] should be re-opened, as that ticket points out a larger structural issue. This ticket should be closed because the patch that was submitted as part of [HBASE-19678] is a duplicate of the one provided here and was already applied. > Superusers.java Logging and Data Structures > --- > > Key: HBASE-19679 > URL: https://issues.apache.org/jira/browse/HBASE-19679 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19679.1.patch > > > * Use Sets instead of List for search efficiency reasons > * Improve logging -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19678) HBase Admin security capabilities should be represented as a Set
[ https://issues.apache.org/jira/browse/HBASE-19678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307592#comment-16307592 ] Hudson commented on HBASE-19678: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4327 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4327/]) HBASE-19678 HBase Admin security capabilities should be represented as a (tedyu: rev 73ab51e9460f369abcaf52fa85258781f8a9a30e) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/security/Superusers.java > HBase Admin security capabilities should be represented as a Set > > > Key: HBASE-19678 > URL: https://issues.apache.org/jira/browse/HBASE-19678 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19678.1.patch > > > {code:title=org.apache.hadoop.hbase.client.Admin} > /** >* Return the set of supported security capabilities. >* @throws IOException >* @throws UnsupportedOperationException >*/ > List getSecurityCapabilities() throws IOException; > {code} > The comment says a "set" but it returns a List. A Set would be the most > appropriate data structure here, an immutable one perhaps, because the code > that interacts with it looks up information using the _contains_ method which > would be served well by a Set. Please change this interface to return a Set. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19684) BlockCacheKey toString Performance
[ https://issues.apache.org/jira/browse/HBASE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307586#comment-16307586 ] Duo Zhang commented on HBASE-19684: --- Would it be faster just to use hfileName + "_" + offset? The '+' operator compiles to a StringBuilder, so it will not create multiple intermediate Strings. > BlockCacheKey toString Performance > -- > > Key: HBASE-19684 > URL: https://issues.apache.org/jira/browse/HBASE-19684 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19684.1.patch > > > {code:title=BlockCacheKey.java} > @Override > public String toString() { > return String.format("%s_%d", hfileName, offset); > } > {code} > I found through benchmarking that the following code is 10x faster. > {code:title=BlockCacheKey.java} > @Override > public String toString() { > return hfileName.concat("_").concat(Long.toString(offset)); > } > {code} > Normally it wouldn't matter for a _toString()_ method, but this one comes into > play because {{MemcachedBlockCache}} uses it. > {code:title=MemcachedBlockCache.java} > @Override > public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf) { > if (buf instanceof HFileBlock) { > client.add(cacheKey.toString(), MAX_SIZE, (HFileBlock) buf, tc); > } else { > if (LOG.isDebugEnabled()) { > LOG.debug("MemcachedBlockCache can not cache Cacheable's of type " > + buf.getClass().toString()); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
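The three variants under discussion (the original String.format, the patch's concat chain, and the '+' concatenation suggested in the comment) can be compared side by side. This is a standalone sketch whose fields mirror BlockCacheKey's, not the HBase class itself:

```java
public class BlockCacheKeySketch {
    private final String hfileName;
    private final long offset;

    public BlockCacheKeySketch(String hfileName, long offset) {
        this.hfileName = hfileName;
        this.offset = offset;
    }

    // Original: String.format re-parses the "%s_%d" pattern on every call.
    public String toStringFormat() {
        return String.format("%s_%d", hfileName, offset);
    }

    // Patch variant: explicit concat calls, no pattern parsing, but it
    // creates an intermediate String per concat step.
    public String toStringConcat() {
        return hfileName.concat("_").concat(Long.toString(offset));
    }

    // Suggested variant: '+' compiles to a single StringBuilder append chain.
    public String toStringPlus() {
        return hfileName + "_" + offset;
    }
}
```

All three produce identical output; only their allocation and parsing costs differ, which matters here because MemcachedBlockCache calls toString() on the hot cache path.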
[jira] [Updated] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19683: Status: Open (was: Patch Available) > Remove Superfluous Methods From String Class > > > Key: HBASE-19683 > URL: https://issues.apache.org/jira/browse/HBASE-19683 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19683.1.patch, HBASE-19683.2.patch > > > * Remove isEmpty method > * Remove repeat > Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19682) Use Collections.emptyList() For Empty List Values
BELUGA BEHR created HBASE-19682: --- Summary: Use Collections.emptyList() For Empty List Values Key: HBASE-19682 URL: https://issues.apache.org/jira/browse/HBASE-19682 Project: HBase Issue Type: Improvement Components: hbase Affects Versions: 3.0.0 Reporter: BELUGA BEHR Priority: Minor Use {{Collections.emptyList()}} for returning an empty list instead of {{return new ArrayList<>()}}. The default constructor creates a buffer of size 10 for _ArrayList_; therefore, returning this static value saves some memory and GC pressure, and saves the time of allocating a new internal buffer for each instantiation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19682) Use Collections.emptyList() For Empty List Values
[ https://issues.apache.org/jira/browse/HBASE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19682: Attachment: HBASE-19682.1.patch > Use Collections.emptyList() For Empty List Values > - > > Key: HBASE-19682 > URL: https://issues.apache.org/jira/browse/HBASE-19682 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19682.1.patch > > > Use {{Collections.emptyList()}} for returning an empty list instead of > {{return new ArrayList<>()}}. The default constructor creates a buffer of > size 10 for _ArrayList_; therefore, returning this static value saves some > memory and GC pressure, and saves the time of allocating a new internal > buffer for each instantiation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
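A minimal illustration of the proposed pattern; `splitCsv` is a hypothetical method, not HBase code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EmptyListSketch {
    // Returning the shared immutable instance avoids allocating a fresh
    // ArrayList (and its backing buffer) for every empty result.
    static List<String> splitCsv(String line) {
        if (line == null || line.isEmpty()) {
            return Collections.emptyList(); // shared, allocation-free
        }
        List<String> parts = new ArrayList<>();
        for (String p : line.split(",")) {
            parts.add(p.trim());
        }
        return parts;
    }
}
```

One caveat worth noting: the list returned by `Collections.emptyList()` is immutable, so this substitution only suits return values the caller treats as read-only.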
[jira] [Created] (HBASE-19684) BlockCacheKey toString Performance
BELUGA BEHR created HBASE-19684: --- Summary: BlockCacheKey toString Performance Key: HBASE-19684 URL: https://issues.apache.org/jira/browse/HBASE-19684 Project: HBase Issue Type: Improvement Components: hbase Affects Versions: 3.0.0 Reporter: BELUGA BEHR Priority: Trivial {code:title=BlockCacheKey.java} @Override public String toString() { return String.format("%s_%d", hfileName, offset); } {code} I found through benchmarking that the following code is 10x faster. {code:title=BlockCacheKey.java} @Override public String toString() { return hfileName.concat("_").concat(Long.toString(offset)); } {code} Normally it wouldn't matter for a _toString()_ method, but this one comes into play because {{MemcachedBlockCache}} uses it. {code:title=MemcachedBlockCache.java} @Override public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf) { if (buf instanceof HFileBlock) { client.add(cacheKey.toString(), MAX_SIZE, (HFileBlock) buf, tc); } else { if (LOG.isDebugEnabled()) { LOG.debug("MemcachedBlockCache can not cache Cacheable's of type " + buf.getClass().toString()); } } } {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19633) Clean up the replication queues in the postPeerModification stage when removing a peer
[ https://issues.apache.org/jira/browse/HBASE-19633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307594#comment-16307594 ] Duo Zhang commented on HBASE-19633: --- Let me do a rebase and then commit. There are lots of big changes on master already... > Clean up the replication queues in the postPeerModification stage when > removing a peer > -- > > Key: HBASE-19633 > URL: https://issues.apache.org/jira/browse/HBASE-19633 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HBASE-19633-HBASE-19397-v1.patch, > HBASE-19633-HBASE-19397-v2.patch, HBASE-19633-HBASE-19397-v3.patch, > HBASE-19633-HBASE-19397.patch > > > In the previous implementation, we could not always cleanly remove all the > replication queues when removing a peer, since the removal work is done by the RSes, > and if an RS crashes then some of its queues may be left there forever. That's why > we need to check whether there are already some queues for a newly created peer, > since we may reuse the peer id, which causes problems. > With the new procedure-based replication peer modification, I think we can do > it cleanly. After the RefreshPeerProcedures are done on all RSes, we can make > sure that no RS will create a queue for this peer again; then we can iterate > over all the queues for all RSes and do another round of cleanup. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19369) HBase Should use Builder Pattern to Create Log Files while using WAL on Erasure Coding
[ https://issues.apache.org/jira/browse/HBASE-19369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307588#comment-16307588 ] Duo Zhang commented on HBASE-19369: --- {quote} EC doesn't support hsync or hflush. {quote} Does this mean EC will write data directly to the DNs even if you do not call hsync or hflush? I guess not... I think the implementation is that we buffer data locally, calculate the EC blocks when the buffer is full, and then write the different blocks to different DNs. Then it cannot be used by the WAL... > HBase Should use Builder Pattern to Create Log Files while using WAL on > Erasure Coding > -- > > Key: HBASE-19369 > URL: https://issues.apache.org/jira/browse/HBASE-19369 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Alex Leblang >Assignee: Alex Leblang > Attachments: HBASE-19369.master.001.patch, > HBASE-19369.master.002.patch, HBASE-19369.master.003.patch, > HBASE-19369.master.004.patch, HBASE-19369.v5.patch, HBASE-19369.v6.patch, > HBASE-19369.v7.patch, HBASE-19369.v8.patch > > > Right now an HBase instance using the WAL won't function properly in an > Erasure Coded environment. We should change the following line to use the > hdfs.DistributedFileSystem builder pattern: > https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java#L92 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19633) Clean up the replication queues in the postPeerModification stage when removing a peer
[ https://issues.apache.org/jira/browse/HBASE-19633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19633: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HBASE-19397 Status: Resolved (was: Patch Available) Rebased and pushed to branch HBASE-19397. Thanks [~zghaobac] for reviewing. > Clean up the replication queues in the postPeerModification stage when > removing a peer > -- > > Key: HBASE-19633 > URL: https://issues.apache.org/jira/browse/HBASE-19633 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: HBASE-19397 > > Attachments: HBASE-19633-HBASE-19397-v1.patch, > HBASE-19633-HBASE-19397-v2.patch, HBASE-19633-HBASE-19397-v3.patch, > HBASE-19633-HBASE-19397.patch > > > In the previous implementation, we could not always cleanly remove all the > replication queues when removing a peer, since the removal work is done by the RSes, > and if an RS crashes then some of its queues may be left there forever. That's why > we need to check whether there are already some queues for a newly created peer, > since we may reuse the peer id, which causes problems. > With the new procedure-based replication peer modification, I think we can do > it cleanly. After the RefreshPeerProcedures are done on all RSes, we can make > sure that no RS will create a queue for this peer again; then we can iterate > over all the queues for all RSes and do another round of cleanup. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307579#comment-16307579 ] Hadoop QA commented on HBASE-19683: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 28s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} hbase-common: The patch generated 0 new + 10 unchanged - 1 fixed = 10 total (was 11) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} hbase-client: The patch generated 0 new + 10 unchanged - 1 fixed = 10 total (was 11) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 41s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s{color} | {color:green} hbase-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 47s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19683 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904140/HBASE-19683.2.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux b5e38c63c6a8 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personalit
[jira] [Updated] (HBASE-19684) BlockCacheKey toString Performance
[ https://issues.apache.org/jira/browse/HBASE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19684: Attachment: HBASE-19684.1.patch > BlockCacheKey toString Performance > -- > > Key: HBASE-19684 > URL: https://issues.apache.org/jira/browse/HBASE-19684 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19684.1.patch > > > {code:title=BlockCacheKey.java} > @Override > public String toString() { > return String.format("%s_%d", hfileName, offset); > } > {code} > I found through benchmarking that the following code is 10x faster. > {code:title=BlockCacheKey.java} > @Override > public String toString() { > return hfileName.concat("_").concat(Long.toString(offset)); > } > {code} > Normally it wouldn't matter for a _toString()_ method, but this one comes into > play because {{MemcachedBlockCache}} uses it. > {code:title=MemcachedBlockCache.java} > @Override > public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf) { > if (buf instanceof HFileBlock) { > client.add(cacheKey.toString(), MAX_SIZE, (HFileBlock) buf, tc); > } else { > if (LOG.isDebugEnabled()) { > LOG.debug("MemcachedBlockCache can not cache Cacheable's of type " > + buf.getClass().toString()); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
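For reference, a minimal self-contained sketch of the two variants (the class name BlockCacheKeyDemo is illustrative, not HBase's actual BlockCacheKey): plain concatenation skips the format-string parsing that String.format pays on every call, which matters once MemcachedBlockCache invokes toString() for every cached block.

```java
// Illustrative comparison of the two toString() variants discussed in
// HBASE-19684. BlockCacheKeyDemo is a hypothetical stand-in class.
public class BlockCacheKeyDemo {
    private final String hfileName;
    private final long offset;

    public BlockCacheKeyDemo(String hfileName, long offset) {
        this.hfileName = hfileName;
        this.offset = offset;
    }

    // Original: String.format parses the format string on every call.
    public String toStringFormat() {
        return String.format("%s_%d", hfileName, offset);
    }

    // Proposed: plain concatenation avoids the formatter entirely.
    public String toStringConcat() {
        return hfileName.concat("_").concat(Long.toString(offset));
    }

    public static void main(String[] args) {
        BlockCacheKeyDemo key = new BlockCacheKeyDemo("abc123", 42L);
        // Both variants must produce the identical cache key string.
        if (!key.toStringFormat().equals(key.toStringConcat())) {
            throw new AssertionError("variants disagree");
        }
        System.out.println(key.toStringConcat()); // abc123_42
    }
}
```

Since both variants yield the same key string, swapping the implementation is behavior-preserving for memcached lookups.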
[jira] [Commented] (HBASE-19588) Additional jar dependencies needed for mapreduce PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-19588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307537#comment-16307537 ] stack commented on HBASE-19588: --- +1 thanks [~chu11]. Will commit. [~apurtell] FYI. > Additional jar dependencies needed for mapreduce PerformanceEvaluation > -- > > Key: HBASE-19588 > URL: https://issues.apache.org/jira/browse/HBASE-19588 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.4.0 >Reporter: Albert Chu >Priority: Minor > Attachments: HBASE-19588.branch-1.4.patch > > > I have a unit test that runs a simple PerformanceEvaluation test to make sure > things are basically working > {noformat} > bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=5 > sequentialWrite 1 > {noformat} > This test runs against Hadoop 2.7.0 and works against all past versions > 0.99.0 and up. It broke with 1.4.0 with the following error. > {noformat} > 2017-12-21 13:49:40,974 INFO [main] mapreduce.Job: Task Id : > attempt_1513892752187_0002_m_04_2, Status : FAILED > Error: java.io.IOException: java.lang.reflect.InvocationTargetException > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:297) > at > org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:250) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) > ... 12 more > Caused by: java.lang.RuntimeException: Could not create interface > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop > compatibility jar on the classpath? > at > org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:75) > at > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeper.(MetricsZooKeeper.java:38) > at > org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.(RecoverableZooKeeper.java:130) > at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:143) > at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:181) > at > org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:155) > at > org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.(ZooKeeperKeepAliveConnection.java:43) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1737) > at > org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:945) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:721) > ... 17 more > Caused by: java.util.ServiceConfigurationError: > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider > org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be > instantiated > at java.util.ServiceLoader.fail(ServiceLoader.java:224) > at java.util.ServiceLoader.access$100(ServiceLoader.java:181) > at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377) > at java.util.ServiceLoader$1.next(ServiceLoader.java:445) > at > org.apache.hadoop.hbase.CompatibilitySinglet
[jira] [Updated] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HBASE-19683: Attachment: HBASE-19683.2.patch > Remove Superfluous Methods From String Class > > > Key: HBASE-19683 > URL: https://issues.apache.org/jira/browse/HBASE-19683 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19683.1.patch, HBASE-19683.2.patch > > > * Remove isEmpty method > * Remove repeat > Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19675) Miscellaneous HStore Class Improvements
[ https://issues.apache.org/jira/browse/HBASE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307565#comment-16307565 ] BELUGA BEHR edited comment on HBASE-19675 at 1/1/18 10:09 PM: -- [~chia7712] [HBASE-19682] was (Author: belugabehr): [~chia7712] [HBASE-19675] > Miscellaneous HStore Class Improvements > --- > > Key: HBASE-19675 > URL: https://issues.apache.org/jira/browse/HBASE-19675 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19675.1.patch, HBASE-19675.2.patch > > > * Remove logging code guards in favor of slf4j parameters > * Use {{CollectionsUtils.isEmpty()}} consistently > * Small check-style fixes > * Remove flow control logic from {{trace}} statement {code} if > (LOG.isTraceEnabled()) { > LOG.trace("No compacted files to archive"); > return; > }{code} > * Replace two calls to the same getter to ensure that the value doesn't > change between calls {code} if (getCompactedFiles() != null) { > for (HStoreFile file : getCompactedFiles()) { > name2File.put(file.getFileInfo().getActiveFileName(), file); > } > }{code} > * Make 'inputFiles' a Set for fast calls to {{contains}} method instead > {code}//some of the input files might already be deleted > List inputStoreFiles = new > ArrayList<>(compactionInputs.size()); > for (HStoreFile sf : this.getStorefiles()) { > if (inputFiles.contains(sf.getPath().getName())) { > inputStoreFiles.add(sf); > } > }{code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
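Two of the cleanups listed above can be sketched as follows. HStoreCleanupSketch and its members are hypothetical stand-ins for HStore's actual fields: the point is to call the getter once and reuse the result, and to copy the input file names into a HashSet before the membership loop.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of two HBASE-19675 cleanups; names follow the
// issue text but the surrounding class is hypothetical.
public class HStoreCleanupSketch {
    private List<String> compactedFiles = Arrays.asList("f1", "f2");

    List<String> getCompactedFiles() { return compactedFiles; }

    Map<String, String> buildNameMap() {
        Map<String, String> name2File = new HashMap<>();
        // Call the getter once so the value cannot change between the
        // null check and the iteration.
        List<String> files = getCompactedFiles();
        if (files != null) {
            for (String f : files) {
                name2File.put(f, f);
            }
        }
        return name2File;
    }

    static List<String> filterInputs(Collection<String> storeFiles,
                                     Collection<String> inputFileNames) {
        // A HashSet makes each contains() O(1) instead of O(n) on a List.
        Set<String> inputFiles = new HashSet<>(inputFileNames);
        List<String> inputStoreFiles = new ArrayList<>();
        for (String sf : storeFiles) {
            if (inputFiles.contains(sf)) {
                inputStoreFiles.add(sf);
            }
        }
        return inputStoreFiles;
    }
}
```

The Set conversion only pays off when the membership check runs more than a handful of times, which is the case when iterating over all store files.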
[jira] [Updated] (HBASE-19623) Create replication endpoint asynchronously when adding a replication source
[ https://issues.apache.org/jira/browse/HBASE-19623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19623: -- Attachment: HBASE-19623-HBASE-19397-v1.patch > Create replication endpoint asynchronously when adding a replication source > --- > > Key: HBASE-19623 > URL: https://issues.apache.org/jira/browse/HBASE-19623 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Duo Zhang > Attachments: HBASE-19623-HBASE-19397-v1.patch, > HBASE-19623-HBASE-19397.patch > > > As discussed in HBASE-19617, after the replication procedure replaces > the zookeeper notification, the addPeer operation may be blocked because > the RegionServer will create a connection to the peer cluster synchronously. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HBASE-19683) Remove Superfluous Methods From String Class
[ https://issues.apache.org/jira/browse/HBASE-19683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR reassigned HBASE-19683: --- Assignee: BELUGA BEHR > Remove Superfluous Methods From String Class > > > Key: HBASE-19683 > URL: https://issues.apache.org/jira/browse/HBASE-19683 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HBASE-19683.1.patch > > > * Remove isEmpty method > * Remove repeat > Use the Apache Commons implementations instead. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19654) ReplicationLogCleaner should not delete MasterProcedureWALs
[ https://issues.apache.org/jira/browse/HBASE-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307631#comment-16307631 ] Reid Chan commented on HBASE-19654: --- Yea, it is misleading, but {{ReplicationLogCleaner}} should not return true for an unrelated log. Adding a pattern match at the beginning would be better. > ReplicationLogCleaner should not delete MasterProcedureWALs > --- > > Key: HBASE-19654 > URL: https://issues.apache.org/jira/browse/HBASE-19654 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Peter Somogyi >Assignee: Reid Chan > Fix For: 2.0.0-beta-2 > > > The pv2 logs are deleted by ReplicationLogCleaner. It does not check if > TimeToLiveProcedureWALCleaner needs to keep the files. > {noformat} > 2017-12-27 19:59:02,261 DEBUG [ForkJoinPool-1-worker-17] > cleaner.CleanerChore: CleanerTask 391 starts cleaning dirs and files under > hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs and itself. > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0001.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0002.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0003.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0004.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0005.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0006.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0007.log 
> 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0008.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0009.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0010.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0011.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0012.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0013.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0014.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0015.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0016.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0017.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0018.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0019.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0020.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, 
deleting: > pv2-0021.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0022.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0023.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCl
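The "pattern match at the beginning" suggestion could look roughly like the sketch below. LogNameFilter, isProcedureWal, and mayDelete are illustrative names, not the actual HBase code: the idea is simply that a replication cleaner should bail out early for proc-v2 WAL names rather than declare unrelated files deletable.

```java
import java.util.regex.Pattern;

// Hedged sketch of an early name filter for a replication log cleaner.
// The pv2 naming convention is taken from the log output above; the
// class and method names are hypothetical.
public class LogNameFilter {
    // MasterProcedureWAL files in the log above look like pv2-0001.log.
    private static final Pattern PROC_WAL = Pattern.compile("pv2-\\d+\\.log");

    static boolean isProcedureWal(String fileName) {
        return PROC_WAL.matcher(fileName).matches();
    }

    // A replication cleaner should never claim unrelated logs are deletable.
    static boolean mayDelete(String fileName, boolean foundInZk) {
        if (isProcedureWal(fileName)) {
            return false; // leave proc-v2 WALs to their own cleaner's TTL logic
        }
        // Replication WALs are deletable only when no peer still tracks
        // them in ZooKeeper.
        return !foundInZk;
    }
}
```

With such a guard, TimeToLiveProcedureWALCleaner alone decides the fate of pv2 files, regardless of what the replication tracking state in ZK says.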
[jira] [Commented] (HBASE-19650) ExpiredMobFileCleaner has wrong logic about TTL check
[ https://issues.apache.org/jira/browse/HBASE-19650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307632#comment-16307632 ] Jingcheng Du commented on HBASE-19650: -- Hi [~javaman_chen], would you mind providing a new patch? And please don't change the old test case if it isn't wrong. If necessary you can add a simple test that addresses your case. Thanks. > ExpiredMobFileCleaner has wrong logic about TTL check > - > > Key: HBASE-19650 > URL: https://issues.apache.org/jira/browse/HBASE-19650 > Project: HBase > Issue Type: Bug > Components: mob >Reporter: chenxu >Assignee: chenxu > Attachments: HBASE-19650-master-v1.patch > > > If today is 2017-12-28 00:00:01, and TTL is set to 86400, when > MobUtils.cleanExpiredMobFiles executes, expireDate will be 1514304000749, but > fileDate is 151430400. So the fileDate is before the expireDate, and mob files > generated on 2017-12-27 will all be deleted. But in fact, we only want to > delete files from before 2017-12-27. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
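A sketch of the day-boundary fix implied by the description, assuming the intent is to truncate the expire date down to midnight before comparing it against per-day MOB file dates; MobTtlSketch and expireBoundary are hypothetical names, not the actual MobUtils code.

```java
import java.util.Calendar;

// Illustrative day-boundary computation for a TTL check. Because MOB
// files carry a per-day date, the expire cutoff must be truncated to
// the start of its day; otherwise a cutoff a few milliseconds past
// midnight deletes the entire previous day's files.
public class MobTtlSketch {
    static long expireBoundary(long nowMillis, long ttlSeconds) {
        long raw = nowMillis - ttlSeconds * 1000L;
        Calendar cal = Calendar.getInstance();
        cal.setTimeInMillis(raw);
        // Truncate to midnight so files dated exactly on the boundary
        // day are kept, matching "delete files from before that day".
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        return cal.getTimeInMillis();
    }
}
```

A file dated at the midnight of the boundary day then compares as not-less-than the cutoff and survives the sweep.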
[jira] [Commented] (HBASE-19654) ReplicationLogCleaner should not delete MasterProcedureWALs
[ https://issues.apache.org/jira/browse/HBASE-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307643#comment-16307643 ] Ted Yu commented on HBASE-19654: What if the proc WAL files are stored under a directory other than oldWALs ? It seems cleaning would be simpler to handle that way. > ReplicationLogCleaner should not delete MasterProcedureWALs > --- > > Key: HBASE-19654 > URL: https://issues.apache.org/jira/browse/HBASE-19654 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Peter Somogyi >Assignee: Reid Chan > Fix For: 2.0.0-beta-2 > > > The pv2 logs are deleted by ReplicationLogCleaner. It does not check if > TimeToLiveProcedureWALCleaner needs to keep the files. > {noformat} > 2017-12-27 19:59:02,261 DEBUG [ForkJoinPool-1-worker-17] > cleaner.CleanerChore: CleanerTask 391 starts cleaning dirs and files under > hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs and itself. > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0001.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0002.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0003.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0004.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0005.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0006.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0007.log > 2017-12-27 19:59:02,279 
DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0008.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0009.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0010.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0011.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0012.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0013.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0014.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0015.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0016.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0017.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0018.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0019.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0020.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0021.log > 
2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0022.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0023.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log
[jira] [Created] (HBASE-19686) Introduce a IdLock which can lock a String to protect the peer refresh operation at RS side
Duo Zhang created HBASE-19686: - Summary: Introduce a IdLock which can lock a String to protect the peer refresh operation at RS side Key: HBASE-19686 URL: https://issues.apache.org/jira/browse/HBASE-19686 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang Assignee: Duo Zhang -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19686) Introduce a IdLock which can lock a String to protect the peer refresh operation at RS side
[ https://issues.apache.org/jira/browse/HBASE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307666#comment-16307666 ] Duo Zhang commented on HBASE-19686: --- Oh we already have a KeyLocker class. We can use it directly... > Introduce a IdLock which can lock a String to protect the peer refresh > operation at RS side > --- > > Key: HBASE-19686 > URL: https://issues.apache.org/jira/browse/HBASE-19686 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
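A minimal illustration of KeyLocker-style per-key locking: operations on different peer ids proceed in parallel, while operations on the same peer id serialize. PerKeyLock is a simplified stand-in, not HBase's KeyLocker, which additionally reclaims unused locks rather than retaining one ReentrantLock per key forever.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Simplified per-key locking, sketching the idea behind replacing a
// single ReentrantLock with a KeyLocker keyed by peer id.
public class PerKeyLock {
    private final ConcurrentHashMap<String, ReentrantLock> locks =
        new ConcurrentHashMap<>();

    // computeIfAbsent guarantees all threads see the same lock per key.
    public ReentrantLock acquire(String key) {
        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        return lock;
    }

    public void release(String key) {
        ReentrantLock lock = locks.get(key);
        if (lock != null) {
            lock.unlock();
        }
    }
}
```

With a per-peer lock, a slow refresh of one peer (e.g. creating a connection to a remote cluster) no longer blocks procedure handling for every other peer.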
[jira] [Updated] (HBASE-19686) Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl
[ https://issues.apache.org/jira/browse/HBASE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19686: -- Summary: Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl (was: Introduce a IdLock which can lock a String to protect the peer refresh operation at RS side) > Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl > -- > > Key: HBASE-19686 > URL: https://issues.apache.org/jira/browse/HBASE-19686 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19686) Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl
[ https://issues.apache.org/jira/browse/HBASE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19686: -- Status: Patch Available (was: Open) > Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl > -- > > Key: HBASE-19686 > URL: https://issues.apache.org/jira/browse/HBASE-19686 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HBASE-19686-HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19682) Use Collections.emptyList() For Empty List Values
[ https://issues.apache.org/jira/browse/HBASE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307671#comment-16307671 ] Chia-Ping Tsai edited comment on HBASE-19682 at 1/2/18 4:03 AM: Please check the following classes: WALSplitter#1680 Canary#1243 BaseLoadBalancer#1555 HStore#523 was (Author: chia7712): WALSplitter#1680 Canary#1243 BaseLoadBalancer#1555 HStore#523 > Use Collections.emptyList() For Empty List Values > - > > Key: HBASE-19682 > URL: https://issues.apache.org/jira/browse/HBASE-19682 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19682.1.patch > > > Use {{Collections.emptyList()}} for returning an empty list instead of > {{return new ArrayList<>()}}. The default constructor creates a buffer of > size 10 for _ArrayList_; therefore, returning this static value saves some > memory and GC pressure and avoids allocating a new internal > buffer for each instantiation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19686) Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl
[ https://issues.apache.org/jira/browse/HBASE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19686: -- Attachment: HBASE-19686-HBASE-19397.patch > Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl > -- > > Key: HBASE-19686 > URL: https://issues.apache.org/jira/browse/HBASE-19686 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HBASE-19686-HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19682) Use Collections.emptyList() For Empty List Values
[ https://issues.apache.org/jira/browse/HBASE-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307671#comment-16307671 ] Chia-Ping Tsai commented on HBASE-19682: WALSplitter#1680 Canary#1243 BaseLoadBalancer#1555 HStore#523 > Use Collections.emptyList() For Empty List Values > - > > Key: HBASE-19682 > URL: https://issues.apache.org/jira/browse/HBASE-19682 > Project: HBase > Issue Type: Improvement > Components: hbase >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Priority: Minor > Attachments: HBASE-19682.1.patch > > > Use {{Collections.emptyList()}} for returning an empty list instead of > {{return new ArrayList<>()}}. The default constructor creates a buffer of > size 10 for _ArrayList_; therefore, returning this static value saves some > memory and GC pressure and avoids allocating a new internal > buffer for each instantiation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
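The pattern described in the issue, as a small self-contained sketch (EmptyListDemo and findMatches are illustrative names). One caveat worth noting: Collections.emptyList() returns an immutable shared instance, so it is only safe on paths where callers never mutate the returned list.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative use of Collections.emptyList() for "no results" paths
// instead of allocating a fresh ArrayList on every call.
public class EmptyListDemo {
    static List<String> findMatches(List<String> input, String needle) {
        if (input == null || input.isEmpty()) {
            // No allocation: every caller shares the same immutable instance.
            return Collections.emptyList();
        }
        List<String> out = new ArrayList<>();
        for (String s : input) {
            if (s.contains(needle)) {
                out.add(s);
            }
        }
        return out;
    }
}
```

On hot paths that usually return nothing, this removes one short-lived object per call, which is where the GC-pressure argument in the description comes from.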
[jira] [Commented] (HBASE-19671) Fix TestMultiParallel#testActiveThreadsCount
[ https://issues.apache.org/jira/browse/HBASE-19671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307672#comment-16307672 ] Ted Yu commented on HBASE-19671: lgtm > Fix TestMultiParallel#testActiveThreadsCount > > > Key: HBASE-19671 > URL: https://issues.apache.org/jira/browse/HBASE-19671 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-19671.v0.patch > > > {code} > java.lang.AssertionError: expected:<4> but was:<5> > at > org.apache.hadoop.hbase.client.TestMultiParallel.testActiveThreadsCount(TestMultiParallel.java:168) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19685) Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors
[ https://issues.apache.org/jira/browse/HBASE-19685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19685: --- Status: Patch Available (was: Open) > Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors > - > > Key: HBASE-19685 > URL: https://issues.apache.org/jira/browse/HBASE-19685 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19685.v0.patch > > > {code} > java.lang.AssertionError > at > org.apache.hadoop.hbase.regionserver.TestFSErrorsExposed.testFullSystemBubblesFSErrors(TestFSErrorsExposed.java:221) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19685) Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors
[ https://issues.apache.org/jira/browse/HBASE-19685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19685: --- Attachment: HBASE-19685.v0.patch The timeout is too low to get the correct exception from the server. The fix is to increase the timeout from 60s to 90s. Looped the test case with the new setting 20 times; all pass. > Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors > - > > Key: HBASE-19685 > URL: https://issues.apache.org/jira/browse/HBASE-19685 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19685.v0.patch > > > {code} > java.lang.AssertionError > at > org.apache.hadoop.hbase.regionserver.TestFSErrorsExposed.testFullSystemBubblesFSErrors(TestFSErrorsExposed.java:221) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19654) ReplicationLogCleaner should not delete MasterProcedureWALs
[ https://issues.apache.org/jira/browse/HBASE-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307678#comment-16307678 ] Reid Chan commented on HBASE-19654: --- It is much simpler, but all old logs are gathered under oldWALs; I'm not sure they should be treated specially. Or we can go deeper (in {{CleanerChore}}) to solve it without changing any code in {{ReplicationLogCleaner}}, so that no other unknown {{***Cleaner}} misbehaves either. > ReplicationLogCleaner should not delete MasterProcedureWALs > --- > > Key: HBASE-19654 > URL: https://issues.apache.org/jira/browse/HBASE-19654 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Peter Somogyi >Assignee: Reid Chan > Fix For: 2.0.0-beta-2 > > > The pv2 logs are deleted by ReplicationLogCleaner. It does not check if > TimeToLiveProcedureWALCleaner needs to keep the files. > {noformat} > 2017-12-27 19:59:02,261 DEBUG [ForkJoinPool-1-worker-17] > cleaner.CleanerChore: CleanerTask 391 starts cleaning dirs and files under > hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs and itself. 
> 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0001.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0002.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0003.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0004.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0005.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0006.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0007.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0008.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0009.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0010.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0011.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0012.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0013.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, 
deleting: > pv2-0014.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0015.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0016.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0017.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0018.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0019.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0020.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0021.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0022.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log
[jira] [Commented] (HBASE-19623) Create replication endpoint asynchronously when adding a replication source
[ https://issues.apache.org/jira/browse/HBASE-19623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307697#comment-16307697 ] Hadoop QA commented on HBASE-19623: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} HBASE-19397 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 35s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 52s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} HBASE-19397 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} hbase-replication: The patch generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s{color} | {color:red} hbase-server: The patch generated 1 new + 38 unchanged - 4 fixed = 39 total (was 42) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 38s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 13s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s{color} | {color:red} hbase-server generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hbase-replication in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 42s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19623 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904143/HBASE-19623-HBASE-19397-v1.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 8bdb8c377130 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommi
[jira] [Commented] (HBASE-19623) Create replication endpoint asynchronously when adding a replication source
[ https://issues.apache.org/jira/browse/HBASE-19623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307700#comment-16307700 ] Duo Zhang commented on HBASE-19623: --- Will fix the checkstyle and javadoc issues when committing. > Create replication endpoint asynchronously when adding a replication source > --- > > Key: HBASE-19623 > URL: https://issues.apache.org/jira/browse/HBASE-19623 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Duo Zhang > Attachments: HBASE-19623-HBASE-19397-v1.patch, > HBASE-19623-HBASE-19397.patch > > > As discussed in HBASE-19617, after the replication procedure replaces > the zookeeper notification, the addPeer operation may be blocked because > the RegionServer will create a connection to the peer cluster synchronously. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19623) Create replication endpoint asynchronously when adding a replication source
[ https://issues.apache.org/jira/browse/HBASE-19623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19623: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HBASE-19397 Status: Resolved (was: Patch Available) Pushed to branch HBASE-19397. Thanks [~zghaobac] for reviewing. > Create replication endpoint asynchronously when adding a replication source > --- > > Key: HBASE-19623 > URL: https://issues.apache.org/jira/browse/HBASE-19623 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Duo Zhang > Fix For: HBASE-19397 > > Attachments: HBASE-19623-HBASE-19397-v1.patch, > HBASE-19623-HBASE-19397.patch > > > As discussed in HBASE-19617, after the replication procedure replaces > the zookeeper notification, the addPeer operation may be blocked because > the RegionServer will create a connection to the peer cluster synchronously. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
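For context, the idea in the issue above, taking the peer-cluster connection off the critical path of addPeer, can be sketched roughly as follows. This is a hedged illustration, not the committed patch; the names `connectToPeer` and `initEndpoint` are invented for the example and are not the actual HBase API.

```java
import java.util.concurrent.CompletableFuture;

// Rough sketch (not the actual patch): endpoint creation is handed off to a
// background task, so the addPeer procedure is no longer blocked while the
// RegionServer connects to the peer cluster. connectToPeer and initEndpoint
// are hypothetical names used only for illustration.
public class AsyncEndpointInit {
    // Simulated slow peer-cluster connection setup.
    static String connectToPeer(String peerId) {
        return "endpoint-" + peerId;
    }

    // Returns immediately; callers that actually need the endpoint wait on
    // the future, while addPeer itself can proceed right away.
    static CompletableFuture<String> initEndpoint(String peerId) {
        return CompletableFuture.supplyAsync(() -> connectToPeer(peerId));
    }

    public static void main(String[] args) {
        CompletableFuture<String> f = initEndpoint("peer1");
        System.out.println(f.join()); // endpoint-peer1
    }
}
```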
[jira] [Commented] (HBASE-19632) Too many MasterProcWals state log
[ https://issues.apache.org/jira/browse/HBASE-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307704#comment-16307704 ] jackylau commented on HBASE-19632: -- @Danny, do you know how to resolve it? > Too many MasterProcWals state log > - > > Key: HBASE-19632 > URL: https://issues.apache.org/jira/browse/HBASE-19632 > Project: HBase > Issue Type: Bug > Components: proc-v2 >Affects Versions: 1.2.0 >Reporter: jackylau > Fix For: 1.2.0 > > Attachments: hbase-root-master-keping001.log.10, > state-2299.log > > Original Estimate: 336h > Remaining Estimate: 336h > > There are too many MasterProcWALs state logs, whose total size was about 3.5T across 3 > nodes when we found it. > I have some logs, and I think it is a problem in WALProcedureStore. > The hbase code WALProcedureStore.initOldLogs() finds that the max id file is 2298 > and the MasterProcWALs directory contains only that one file at the time, but when it starts > rollwrite, the file 2299 is also created. I don't know why. Maybe some > other place also calls the function, but I found nothing. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
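The observation above, that only state-2298.log existed yet 2299 was created at roll time, is the expected behavior of a max-id scan followed by a roll. A hedged sketch of that mechanism (`parseId` and `nextLogId` are illustrative names, not the real WALProcedureStore code):

```java
import java.util.Arrays;

// Illustrative sketch (assumed names, not the actual WALProcedureStore API):
// a startup scan finds the maximum id among existing files, and the first
// roll then creates maxId + 1. With only state-2298.log present, the next
// roll creates 2299, matching the report above.
public class ProcWalIds {
    // Extract the numeric id from a name like "state-2298.log".
    static long parseId(String fileName) {
        int dash = fileName.indexOf('-');
        int dot = fileName.lastIndexOf('.');
        return Long.parseLong(fileName.substring(dash + 1, dot));
    }

    // The id the next roll will use: one past the largest existing id.
    static long nextLogId(String[] fileNames) {
        return Arrays.stream(fileNames).mapToLong(ProcWalIds::parseId).max().orElse(0) + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextLogId(new String[] {"state-2298.log"})); // 2299
    }
}
```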
[jira] [Commented] (HBASE-19661) Replace ReplicationStateZKBase with ZKReplicationStorageBase
[ https://issues.apache.org/jira/browse/HBASE-19661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307705#comment-16307705 ] Duo Zhang commented on HBASE-19661: --- I think we can split this issue into several small pieces. For example, remove the usage of ReplicationStateZKBase for ReplicationTrackerImpl, remove the usage of ReplicationStateZKBase for ZNodeCleaner, etc. What do you think [~openinx]? This would make us move faster. I think you can focus on the ReplicationTracker in this issue first to make the patch smaller and easier to get ready for committing. Thanks. > Replace ReplicationStateZKBase with ZKReplicationStorageBase > > > Key: HBASE-19661 > URL: https://issues.apache.org/jira/browse/HBASE-19661 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: HBASE-19397 > > Attachments: HBASE-19661.v1.HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19687) Reimplement ReplicationZKNodeCleaner to remove the usage of ReplicationStateZKBase
Duo Zhang created HBASE-19687: - Summary: Reimplement ReplicationZKNodeCleaner to remove the usage of ReplicationStateZKBase Key: HBASE-19687 URL: https://issues.apache.org/jira/browse/HBASE-19687 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang Assignee: Duo Zhang -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19687) Reimplement ReplicationZKNodeCleaner to remove the usage of ReplicationStateZKBase
[ https://issues.apache.org/jira/browse/HBASE-19687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307706#comment-16307706 ] Duo Zhang commented on HBASE-19687: --- Or we can just remove the cleaner chore since HBASE-19633 has already been committed? Maybe we can move this to the hbck tool. > Reimplement ReplicationZKNodeCleaner to remove the usage of > ReplicationStateZKBase > -- > > Key: HBASE-19687 > URL: https://issues.apache.org/jira/browse/HBASE-19687 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19661) Replace ReplicationStateZKBase with ZKReplicationStorageBase
[ https://issues.apache.org/jira/browse/HBASE-19661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307716#comment-16307716 ] Zheng Hu commented on HBASE-19661: -- OK, it seems you've created HBASE-19687 for the ZNodeCleaner, so let me focus on this issue. > Replace ReplicationStateZKBase with ZKReplicationStorageBase > > > Key: HBASE-19661 > URL: https://issues.apache.org/jira/browse/HBASE-19661 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: HBASE-19397 > > Attachments: HBASE-19661.v1.HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Attachment: HBASE-18619-branch-2-v2.patch > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-18619-branch-2.patch, HBASE-19358-branch-1-v2.patch, > HBASE-19358-branch-1-v3.patch, HBASE-19358-branch-1.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch > > > The way we split logs now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12902997/split-logic-old.jpg! > The problem is that the OutputSink will write the recovered edits during splitting > the log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode needs to handle too many streams. > Thus I came up with a new way to split logs. > !https://issues.apache.org/jira/secure/attachment/12902998/split-logic-new.jpg! > We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (closing the writer > after finishing). Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, which starts a certain number of threads to write all > buffers to files. Thus it will not create more HDFS streams than the > *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. 
> The biggest benefit is that we can control the number of streams we create during > log splitting: > it will not exceed *_hbase.regionserver.wal.max.splitters * > hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was > *_hbase.regionserver.wal.max.splitters * the number of regions the hlog > contains_*. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
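The buffering scheme described in the issue above can be sketched minimally as follows. This is a hypothetical illustration with invented class and field names, not the patch itself: edits are cached per region, and once the cached total exceeds the heap cap the largest buffer is flushed and its writer closed, so the number of simultaneously open streams is bounded by the writer pool rather than by the region count.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the buffering idea (names are invented): cache
// recovered edits per region; when the cached total exceeds maxHeapUsage,
// flush the largest buffer to a file (simulated here by recording the
// region) and release its memory.
public class BufferedSplitSketch {
    final Map<String, List<String>> buffers = new HashMap<>();
    final List<String> flushedRegions = new ArrayList<>();
    final int maxHeapUsage;
    int cached = 0; // bytes currently held in memory (string lengths here)

    BufferedSplitSketch(int maxHeapUsage) { this.maxHeapUsage = maxHeapUsage; }

    void append(String region, String edit) {
        buffers.computeIfAbsent(region, r -> new ArrayList<>()).add(edit);
        cached += edit.length();
        while (cached > maxHeapUsage) {
            flushLargest();
        }
    }

    // Pick the largest buffer, "write" it out, and close its writer.
    void flushLargest() {
        String largest = null;
        int size = -1;
        for (Map.Entry<String, List<String>> e : buffers.entrySet()) {
            int s = e.getValue().stream().mapToInt(String::length).sum();
            if (s > size) { size = s; largest = e.getKey(); }
        }
        if (largest == null) return;
        cached -= size;
        buffers.remove(largest);
        flushedRegions.add(largest);
    }
}
```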
[jira] [Commented] (HBASE-19686) Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl
[ https://issues.apache.org/jira/browse/HBASE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307730#comment-16307730 ] Zheng Hu commented on HBASE-19686: -- +1 > Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl > -- > > Key: HBASE-19686 > URL: https://issues.apache.org/jira/browse/HBASE-19686 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HBASE-19686-HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
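The motivation for the change above can be illustrated with a simplified per-key lock sketch. This is not the actual org.apache.hadoop.hbase.util.KeyLocker implementation, just the core idea: operations on different peers take different locks, so modifying one peer no longer serializes behind an unrelated operation on another, which a single shared ReentrantLock would force.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Simplified per-key locking in the spirit of KeyLocker (illustration only;
// the real class has additional bookkeeping, e.g. releasing unused locks).
public class PerPeerLocks {
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Acquire (and return) the lock dedicated to this peer id.
    ReentrantLock acquireLock(String peerId) {
        ReentrantLock lock = locks.computeIfAbsent(peerId, id -> new ReentrantLock());
        lock.lock();
        return lock;
    }

    public static void main(String[] args) {
        PerPeerLocks l = new PerPeerLocks();
        ReentrantLock a = l.acquireLock("peer-a");
        // A concurrent caller could still take "peer-b" here without waiting.
        ReentrantLock b = l.acquireLock("peer-b");
        System.out.println(a != b); // true: distinct locks per peer
        b.unlock();
        a.unlock();
    }
}
```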
[jira] [Commented] (HBASE-19654) ReplicationLogCleaner should not delete MasterProcedureWALs
[ https://issues.apache.org/jira/browse/HBASE-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307737#comment-16307737 ] Reid Chan commented on HBASE-19654: --- Or, as [~tedyu] said, apply this not only to {{ReplicationLogCleaner}} but also to the others; then oldWALs's structure should become the following: {code} /oldWALs /replicationLog /masterProc /backupLog ... {code} each cleaner just cleans its own dir. > ReplicationLogCleaner should not delete MasterProcedureWALs > --- > > Key: HBASE-19654 > URL: https://issues.apache.org/jira/browse/HBASE-19654 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Peter Somogyi >Assignee: Reid Chan > Fix For: 2.0.0-beta-2 > > > The pv2 logs are deleted by ReplicationLogCleaner. It does not check if > TimeToLiveProcedureWALCleaner needs to keep the files. > {noformat} > 2017-12-27 19:59:02,261 DEBUG [ForkJoinPool-1-worker-17] > cleaner.CleanerChore: CleanerTask 391 starts cleaning dirs and files under > hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs and itself. 
> 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0001.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0002.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0003.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0004.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0005.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0006.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0007.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0008.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0009.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0010.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0011.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0012.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0013.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, 
deleting: > pv2-0014.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0015.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0016.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0017.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0018.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0019.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0020.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0021.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0022.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0
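The per-directory proposal in the comment above could be sketched like this (paths and names are illustrative assumptions, not the implemented layout): each cleaner only ever considers files under its own subdirectory of oldWALs, so the replication cleaner can never see, let alone delete, pv2 files.

```java
// Illustrative sketch of the scoped-cleaner idea (paths are assumptions):
// a cleaner bound to /oldWALs/replicationLog treats anything outside that
// subtree as non-deletable, so the pv2 procedure WALs under
// /oldWALs/masterProc are simply never candidates.
public class ScopedCleaner {
    final String ownDir; // e.g. "/oldWALs/replicationLog"

    ScopedCleaner(String ownDir) { this.ownDir = ownDir; }

    // Only files under this cleaner's own directory are deletion candidates.
    boolean isDeletable(String path) {
        return path.startsWith(ownDir + "/");
    }

    public static void main(String[] args) {
        ScopedCleaner replication = new ScopedCleaner("/oldWALs/replicationLog");
        System.out.println(replication.isDeletable("/oldWALs/masterProc/pv2-0001.log")); // false
    }
}
```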
[jira] [Commented] (HBASE-19661) Replace ReplicationStateZKBase with ZKReplicationStorageBase
[ https://issues.apache.org/jira/browse/HBASE-19661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307736#comment-16307736 ] Hadoop QA commented on HBASE-19661: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HBASE-19661 does not apply to HBASE-19397. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-19661 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904159/HBASE-19661.v2.HBASE-19397.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10836/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated. > Replace ReplicationStateZKBase with ZKReplicationStorageBase > > > Key: HBASE-19661 > URL: https://issues.apache.org/jira/browse/HBASE-19661 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: HBASE-19397 > > Attachments: HBASE-19661.v1.HBASE-19397.patch, > HBASE-19661.v2.HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307744#comment-16307744 ] Hadoop QA commented on HBASE-19358: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 45s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} hbase-server: The patch generated 0 new + 48 unchanged - 3 fixed = 48 total (was 51) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 30s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 17m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 12s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 59m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestMemstoreLABWithoutPool | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db | | JIRA Issue | HBASE-19358 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904154/HBASE-18619-branch-2-v2.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 9ba55774581d 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-2 / 3f1cfc8f08 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/10835/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10835/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10835/console | | Powered by | Ap
[jira] [Comment Edited] (HBASE-19654) ReplicationLogCleaner should not delete MasterProcedureWALs
[ https://issues.apache.org/jira/browse/HBASE-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307737#comment-16307737 ] Reid Chan edited comment on HBASE-19654 at 1/2/18 7:11 AM: --- Or, as [~tedyu] said, apply this not only to {{ReplicationLogCleaner}} but also to the others; then oldWALs's structure should become the following: {code} /oldWALs /replicationLog /masterProc /backupLog ... {code} each cleaner just cleans its own dir. was (Author: reidchan): Or as [~tedyu] said, but not only {{ReplicationLogCleaner}}, but also others, then oldWALs's structure should become following: {{code}} /oldWALs /replicationLog /masterProc /backupLog ... {{code}} each cleaner just clean its own dir. > ReplicationLogCleaner should not delete MasterProcedureWALs > --- > > Key: HBASE-19654 > URL: https://issues.apache.org/jira/browse/HBASE-19654 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Peter Somogyi >Assignee: Reid Chan > Fix For: 2.0.0-beta-2 > > > The pv2 logs are deleted by ReplicationLogCleaner. It does not check if > TimeToLiveProcedureWALCleaner needs to keep the files. > {noformat} > 2017-12-27 19:59:02,261 DEBUG [ForkJoinPool-1-worker-17] > cleaner.CleanerChore: CleanerTask 391 starts cleaning dirs and files under > hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs and itself. 
> 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0001.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0002.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0003.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0004.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0005.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0006.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0007.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0008.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0009.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0010.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0011.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0012.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0013.log > 2017-12-27 19:59:02,279 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, 
deleting: > pv2-0014.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0015.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0016.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0017.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0018.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0019.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-0020.log > 2017-12-27 19:59:02,280 DEBUG [ForkJoinPool-1-worker-17] > master.ReplicationLogCleaner: Didn't find this log in ZK, deleting: > pv2-00
[jira] [Commented] (HBASE-19686) Use KeyLocker instead of ReentrantLock in PeerProcedureHandlerImpl
[ https://issues.apache.org/jira/browse/HBASE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307738#comment-16307738 ] Hadoop QA commented on HBASE-19686: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HBASE-19397 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 6s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} HBASE-19397 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 33s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} HBASE-19397 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 1s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 22m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 49s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}182m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.replication.TestReplicationStatus | | | hadoop.hbase.replication.TestReplicationSource | | | hadoop.hbase.replication.TestReplicationDisableInactivePeer | | | hadoop.hbase.replication.TestMasterReplication | | | hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleWAL | | | hadoop.hbase.replication.TestNamespaceReplication | | | hadoop.hbase.client.replication.TestReplicationAdminWithClusters | | | hadoop.hbase.client.replication.TestReplicationAdminWithTwoDifferentZKClusters | | | hadoop.hbase.client.replication.TestReplicationAdminUsingProcedure | | | hadoop.hbase.replication.multiwal.TestReplicationEndpointWithMultipleAsyncWAL | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19686 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904145/HBASE-19686-HBASE-19397.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a273c69375c0 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2
[jira] [Commented] (HBASE-19685) Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors
[ https://issues.apache.org/jira/browse/HBASE-19685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307722#comment-16307722 ] Hadoop QA commented on HBASE-19685: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 42s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 38s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 48s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}132m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19685 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904146/HBASE-19685.v0.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4547e712f2f2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 6708d54478 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/10834/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10834/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10834/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated. > Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors > ---
[jira] [Updated] (HBASE-19661) Replace ReplicationStateZKBase with ZKReplicationStorageBase
[ https://issues.apache.org/jira/browse/HBASE-19661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-19661: - Attachment: HBASE-19661.v2.HBASE-19397.patch > Replace ReplicationStateZKBase with ZKReplicationStorageBase > > > Key: HBASE-19661 > URL: https://issues.apache.org/jira/browse/HBASE-19661 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: HBASE-19397 > > Attachments: HBASE-19661.v1.HBASE-19397.patch, > HBASE-19661.v2.HBASE-19397.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Attachment: (was: HBASE-19358-v8.patch) > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-18619-branch-2.patch, HBASE-19358-branch-1-v2.patch, > HBASE-19358-branch-1-v3.patch, HBASE-19358-branch-1.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch > > > The way we split the log now is shown in the following figure: > !https://issues.apache.org/jira/secure/attachment/12902997/split-logic-old.jpg! > The problem is that the OutputSink writes the recovered edits during log > splitting, which means it creates one WriterAndPath for each region and retains > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time, and is then > prone to failure since each datanode needs to handle too many streams. > Thus I came up with a new way to split the log. > !https://issues.apache.org/jira/secure/attachment/12902998/split-logic-new.jpg! > We try to cache all the recovered edits, but if the total exceeds MaxHeapUsage, > we pick the largest EntryBuffer and write it to a file (closing the writer > when finished). Then, after we have read all entries into memory, we start a > writeAndCloseThreadPool, which uses a certain number of threads to write all > buffers to files. Thus it will not create more HDFS streams than the > *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. 
> The biggest benefit is that we can control the number of streams we create during > log splitting: > it will not exceed *_hbase.regionserver.wal.max.splitters * > hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was > *_hbase.regionserver.wal.max.splitters * the number of regions the hlog > contains_*. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
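As a rough illustration, the flow described above (cache recovered edits per region, evict the largest buffer when a heap cap is exceeded, then drain the remaining buffers with a fixed-size writer pool) might look like the following sketch. All class, field, and constant names here are hypothetical stand-ins for the actual patch's internals, and an in-memory map replaces the real HDFS writers:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the bounded-writer split strategy: buffers are kept
// per region, the largest buffer is flushed (open -> write -> close) whenever
// total cached size passes a cap, and the leftovers are drained by a pool of
// WRITER_THREADS threads, bounding the number of simultaneously open streams.
public class BoundedLogSplitter {
    static final long MAX_HEAP_USAGE = 64; // tiny cap for the demo; stands in for the real heap limit
    static final int WRITER_THREADS = 3;   // stands in for hbase.regionserver.hlog.splitlog.writer.threads

    final Map<String, List<String>> buffers = new HashMap<>();
    final Map<String, List<String>> written = new HashMap<>(); // in-memory stand-in for HDFS files
    long heapUsage = 0;

    // Cache one recovered edit; if the cap is exceeded, flush the largest buffer.
    synchronized void append(String region, String edit) {
        buffers.computeIfAbsent(region, r -> new ArrayList<>()).add(edit);
        heapUsage += edit.length();
        if (heapUsage > MAX_HEAP_USAGE && !buffers.isEmpty()) {
            String largest = null;
            for (String r : buffers.keySet()) {
                if (largest == null || buffers.get(r).size() > buffers.get(largest).size()) {
                    largest = r;
                }
            }
            flush(largest);
        }
    }

    // "Open" a writer, dump the region's buffer, "close" the writer immediately.
    synchronized void flush(String region) {
        List<String> buf = buffers.remove(region);
        if (buf != null) {
            heapUsage -= buf.stream().mapToInt(String::length).sum();
            written.computeIfAbsent(region, r -> new ArrayList<>()).addAll(buf);
        }
    }

    // After all entries are read, drain remaining buffers with a fixed-size pool,
    // so at most WRITER_THREADS streams are open concurrently.
    void writeAndClose() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(WRITER_THREADS);
        for (String region : new ArrayList<>(buffers.keySet())) {
            pool.submit(() -> flush(region));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        BoundedLogSplitter splitter = new BoundedLogSplitter();
        for (int i = 0; i < 20; i++) {
            splitter.append("region-" + (i % 5), "edit-" + i + " padding-padding");
        }
        splitter.writeAndClose();
        int total = splitter.written.values().stream().mapToInt(List::size).sum();
        System.out.println("regions=" + splitter.written.size() + " edits=" + total);
        // prints: regions=5 edits=20
    }
}
```

The key property the sketch preserves is that a writer is closed as soon as its buffer is flushed, rather than held open per region for the whole split, which is what bounds the stream count.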
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Attachment: HBASE-19358-v8.patch > Improve the stability of splitting log when do fail over > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19687) Move the logic in ReplicationZKNodeCleaner to ReplicationChecker and remove ReplicationZKNodeCleanerChore
[ https://issues.apache.org/jira/browse/HBASE-19687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19687: -- Summary: Move the logic in ReplicationZKNodeCleaner to ReplicationChecker and remove ReplicationZKNodeCleanerChore (was: Reimplement ReplicationZKNodeCleaner to remove the usage of ReplicationStateZKBase) > Move the logic in ReplicationZKNodeCleaner to ReplicationChecker and remove > ReplicationZKNodeCleanerChore > - > > Key: HBASE-19687 > URL: https://issues.apache.org/jira/browse/HBASE-19687 > Project: HBase > Issue Type: Sub-task > Components: proc-v2, Replication >Reporter: Duo Zhang >Assignee: Duo Zhang > -- This message was sent by Atlassian JIRA (v6.4.14#64029)