[jira] [Created] (HBASE-15755) SnapshotDescriptionUtils does not have any Interface audience marked
ramkrishna.s.vasudevan created HBASE-15755:
--

 Summary: SnapshotDescriptionUtils does not have any Interface audience marked
 Key: HBASE-15755
 URL: https://issues.apache.org/jira/browse/HBASE-15755
 Project: HBase
 Issue Type: Bug
 Reporter: ramkrishna.s.vasudevan
 Assignee: ramkrishna.s.vasudevan

SnapshotDescriptionUtils does not have an InterfaceAudience (IA) annotation. Should it be Private or Public?

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
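A hedged sketch of the likely fix: utility classes such as SnapshotDescriptionUtils are normally marked audience-Private unless they are part of the client-facing API. The annotation below is a local stand-in declared so the snippet compiles on its own; in the real codebase the marker comes from HBase's classification package, and the class name here is only illustrative.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AudienceExample {
  // Stand-in for the real InterfaceAudience.Private annotation, declared
  // locally so this sketch compiles without HBase on the classpath.
  @Retention(RetentionPolicy.RUNTIME)
  @interface InterfaceAudiencePrivate {}

  // Utility classes are typically audience-Private unless they belong to
  // the public client API.
  @InterfaceAudiencePrivate
  static final class SnapshotDescriptionUtilsSketch {}

  /** True when the audience marker is present on the class. */
  public static boolean isMarked() {
    return SnapshotDescriptionUtilsSketch.class
        .isAnnotationPresent(InterfaceAudiencePrivate.class);
  }

  public static void main(String[] args) {
    if (!isMarked()) {
      throw new AssertionError("audience annotation missing");
    }
    System.out.println("marked=" + isMarked());
  }
}
```

The annotation only documents intent (Private classes carry no compatibility promise); tooling such as audience checkers can then flag accidental public exposure.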
[jira] [Commented] (HBASE-15714) We are calling checkRow() twice in doMiniBatchMutation()
[ https://issues.apache.org/jira/browse/HBASE-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268196#comment-15268196 ]

Hadoop QA commented on HBASE-15714:
---
(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 8m 14s | branch-1 passed |
| +1 | compile | 0m 48s | branch-1 passed with JDK v1.8.0 |
| +1 | compile | 0m 37s | branch-1 passed with JDK v1.7.0_79 |
| +1 | checkstyle | 0m 21s | branch-1 passed |
| +1 | mvneclipse | 0m 24s | branch-1 passed |
| +1 | findbugs | 2m 11s | branch-1 passed |
| +1 | javadoc | 0m 38s | branch-1 passed with JDK v1.8.0 |
| +1 | javadoc | 0m 36s | branch-1 passed with JDK v1.7.0_79 |
| +1 | mvninstall | 0m 47s | the patch passed |
| +1 | compile | 0m 40s | the patch passed with JDK v1.8.0 |
| +1 | javac | 0m 40s | the patch passed |
| +1 | compile | 0m 37s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 0m 37s | the patch passed |
| +1 | checkstyle | 0m 11s | the patch passed |
| +1 | mvneclipse | 0m 16s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 4m 30s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. |
| +1 | findbugs | 2m 31s | the patch passed |
| +1 | javadoc | 0m 31s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 0m 37s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 118m 40s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 34s | Patch does not generate ASF License warnings. |
| | | 144m 17s | |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801881/HBASE-15714-branch-1.patch |
| JIRA Issue | HBASE-15714 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux proserpina.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh |
| git revision | branch-1 / 7e0e860 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/1719/testReport/ |
| mo
[jira] [Commented] (HBASE-15742) Reduce allocation of objects in metrics
[ https://issues.apache.org/jira/browse/HBASE-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268151#comment-15268151 ]

Hadoop QA commented on HBASE-15742:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 2m 49s | master passed |
| +1 | compile | 0m 20s | master passed with JDK v1.8.0 |
| +1 | compile | 0m 11s | master passed with JDK v1.7.0_79 |
| +1 | checkstyle | 0m 17s | master passed |
| +1 | mvneclipse | 0m 10s | master passed |
| +1 | findbugs | 0m 26s | master passed |
| +1 | javadoc | 0m 20s | master passed with JDK v1.8.0 |
| +1 | javadoc | 0m 10s | master passed with JDK v1.7.0_79 |
| +1 | mvninstall | 0m 13s | the patch passed |
| +1 | compile | 0m 21s | the patch passed with JDK v1.8.0 |
| +1 | javac | 0m 21s | the patch passed |
| +1 | compile | 0m 12s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 0m 12s | the patch passed |
| +1 | checkstyle | 0m 17s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 8m 22s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. |
| +1 | findbugs | 0m 39s | the patch passed |
| +1 | javadoc | 0m 21s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 0m 11s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 0m 20s | hbase-hadoop2-compat in the patch passed. |
| +1 | asflicense | 0m 8s | Patch does not generate ASF License warnings. |
| | | 16m 17s | |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801901/HBASE-15742-v4.patch |
| JIRA Issue | HBASE-15742 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh |
| git revision | master / c06a976 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbu
[jira] [Commented] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268150#comment-15268150 ]

Hadoop QA commented on HBASE-15752:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 7m 6s | master passed |
| +1 | compile | 2m 5s | master passed with JDK v1.8.0 |
| +1 | compile | 1m 18s | master passed with JDK v1.7.0_79 |
| +1 | checkstyle | 6m 45s | master passed |
| +1 | mvneclipse | 0m 30s | master passed |
| +1 | findbugs | 5m 6s | master passed |
| +1 | javadoc | 1m 30s | master passed with JDK v1.8.0 |
| +1 | javadoc | 1m 15s | master passed with JDK v1.7.0_79 |
| +1 | mvninstall | 1m 15s | the patch passed |
| +1 | compile | 1m 46s | the patch passed with JDK v1.8.0 |
| +1 | javac | 1m 46s | the patch passed |
| +1 | compile | 1m 3s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 1m 3s | the patch passed |
| +1 | checkstyle | 6m 57s | the patch passed |
| +1 | mvneclipse | 0m 32s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 19m 39s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. |
| +1 | findbugs | 4m 37s | the patch passed |
| +1 | javadoc | 1m 21s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 1m 11s | the patch passed with JDK v1.7.0_79 |
| -1 | unit | 213m 2s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 24s | Patch does not generate ASF License warnings. |
| | | 278m 20s | |

|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.security.access.TestNamespaceCommands |
| | org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot |
| | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801850/15752.v1.patch |
| JIRA Issue | HBASE-15752 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux asf911.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib
[jira] [Commented] (HBASE-15609) Remove PB references from Result, DoubleColumnInterpreter and any such public facing class for 2.0
[ https://issues.apache.org/jira/browse/HBASE-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268145#comment-15268145 ]

ramkrishna.s.vasudevan commented on HBASE-15609:
---
Ping for reviews!

> Remove PB references from Result, DoubleColumnInterpreter and any such public
> facing class for 2.0
> --
>
> Key: HBASE-15609
> URL: https://issues.apache.org/jira/browse/HBASE-15609
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 2.0.0
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15609.patch, HBASE-15609.patch, HBASE-15609_1.patch
>
> This is a sub-task for HBASE-15174.
[jira] [Commented] (HBASE-15738) Ensure artifacts in project dist area include required md5 file
[ https://issues.apache.org/jira/browse/HBASE-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268144#comment-15268144 ]

Hadoop QA commented on HBASE-15738:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 8m 14s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. |
| -1 | asflicense | 0m 15s | Patch generated 16 ASF License warnings. |
| | | 8m 38s | |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801902/md5_and_sha.patch |
| JIRA Issue | HBASE-15738 |
| Optional Tests | asflicense |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh |
| git revision | master / c06a976 |
| asflicense | https://builds.apache.org/job/PreCommit-HBASE-Build/1722/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1722/console |
| Powered by | Apache Yetus 0.2.1 http://yetus.apache.org |

This message was automatically generated.
> Ensure artifacts in project dist area include required md5 file
> ---
>
> Key: HBASE-15738
> URL: https://issues.apache.org/jira/browse/HBASE-15738
> Project: HBase
> Issue Type: Bug
> Components: build, community
> Reporter: Sean Busbey
> Assignee: Nick Dimiduk
> Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.1.5, 1.2.2, 0.98.20
>
> Attachments: HBASE-15738.v00.patch, md5_and_sha.patch
>
> From the 0.98.19RC0 thread:
> [~busbey]
> {quote}
> [1]: ASF policy requires that each file hosted in the project dist
> space have a file with _just_ the MD5 sum in a file named after the
> original with ".md5" as a suffix. (Having an additional file with all
> the checksums is a good practice, IMO.) I brought this up in our last
> round of RCs as well. I don't want to hold up this vote, but I plan to
> start voting -1 on future RCs that don't include md5 files.
> relevant policy:
> http://www.apache.org/dev/release-distribution.html#sigs-and-sums
> {quote}
> [~apurtell]
> {quote}
> Our release documentation (https://hbase.apache.org/book.html#releasing)
> says we should generate sums like so:
> for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
> The make_rc.sh script also encodes the same. Let's fix.
> {quote}
[jira] [Updated] (HBASE-15607) Remove PB references from Admin for 2.0
[ https://issues.apache.org/jira/browse/HBASE-15607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-15607:
---
Resolution: Fixed
Status: Resolved (was: Patch Available)

Pushed to master. Pushed the deprecation patch to branch-1 and branch-1.3. Thanks for the reviews.

> Remove PB references from Admin for 2.0
> ---
>
> Key: HBASE-15607
> URL: https://issues.apache.org/jira/browse/HBASE-15607
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 2.0.0
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-15607.patch, HBASE-15607_1.patch, HBASE-15607_2.patch, HBASE-15607_3.patch, HBASE-15607_3.patch, HBASE-15607_4.patch, HBASE-15607_4.patch, HBASE-15607_branch-1.patch
>
> This is a sub-task for HBASE-15174.
[jira] [Commented] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268116#comment-15268116 ]

Clara Xiong commented on HBASE-15337:
---
[~enis] I was able to build using your command. Uploaded a new patch addressing your comments.

> Document FIFO and date tiered compaction in the book
> ---
>
> Key: HBASE-15337
> URL: https://issues.apache.org/jira/browse/HBASE-15337
> Project: HBase
> Issue Type: Sub-task
> Components: documentation
> Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15337-v1.patch, HBASE-15337-v2.patch, HBASE-15337-v3.patch, HBASE-15337.patch
>
> We have two new compaction algorithms, FIFO and date-tiered, intended for time
> series data. We should document how to use them in the book.
[jira] [Updated] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Clara Xiong updated HBASE-15337:
---
Attachment: HBASE-15337-v3.patch

> Document FIFO and date tiered compaction in the book
> ---
>
> Key: HBASE-15337
> URL: https://issues.apache.org/jira/browse/HBASE-15337
> Project: HBase
> Issue Type: Sub-task
> Components: documentation
> Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15337-v1.patch, HBASE-15337-v2.patch, HBASE-15337-v3.patch, HBASE-15337.patch
>
> We have two new compaction algorithms, FIFO and date-tiered, intended for time
> series data. We should document how to use them in the book.
[jira] [Commented] (HBASE-15741) TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268110#comment-15268110 ]

Hadoop QA commented on HBASE-15741:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| 0 | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 5m 49s | master passed |
| +1 | compile | 3m 3s | master passed with JDK v1.8.0 |
| +1 | compile | 2m 13s | master passed with JDK v1.7.0_79 |
| +1 | checkstyle | 6m 27s | master passed |
| +1 | mvneclipse | 0m 49s | master passed |
| +1 | hbaseprotoc | 0m 42s | the patch passed |
| +1 | findbugs | 5m 42s | master passed |
| +1 | javadoc | 2m 30s | master passed with JDK v1.8.0 |
| +1 | javadoc | 2m 0s | master passed with JDK v1.7.0_79 |
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 8s | the patch passed |
| +1 | compile | 3m 6s | the patch passed with JDK v1.8.0 |
| +1 | cc | 3m 6s | the patch passed |
| +1 | javac | 3m 6s | the patch passed |
| +1 | compile | 1m 44s | the patch passed with JDK v1.7.0_79 |
| +1 | cc | 1m 44s | the patch passed |
| +1 | javac | 1m 44s | the patch passed |
| -1 | checkstyle | 3m 10s | hbase-client: patch generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) |
| -1 | checkstyle | 2m 28s | hbase-server: patch generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) |
| +1 | mvneclipse | 0m 51s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 18m 51s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. |
| +1 | hbaseprotoc | 0m 47s | the patch passed |
| +1 | findbugs | 7m 3s | the patch passed |
| +1 | javadoc | 2m 20s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 1m 57s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 1m 48s | hbase-client in the patch passed. |
| -1 | unit | 146m 49s | hbase-server in the patch failed
[jira] [Updated] (HBASE-15738) Ensure artifacts in project dist area include required md5 file
[ https://issues.apache.org/jira/browse/HBASE-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nick Dimiduk updated HBASE-15738:
---
Attachment: md5_and_sha.patch

I went through all the mds files and extracted the MD5 and SHA512 lines into their own files. This is the change I've staged. Care to spot-check? I also noticed that the md5 files in 1.2.1 are in a different format than these lines. Probably generated with a different tool (openssl vs gpg, perhaps?).

> Ensure artifacts in project dist area include required md5 file
> ---
>
> Key: HBASE-15738
> URL: https://issues.apache.org/jira/browse/HBASE-15738
> Project: HBase
> Issue Type: Bug
> Components: build, community
> Reporter: Sean Busbey
> Assignee: Nick Dimiduk
> Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.1.5, 1.2.2, 0.98.20
>
> Attachments: HBASE-15738.v00.patch, md5_and_sha.patch
>
> From the 0.98.19RC0 thread:
> [~busbey]
> {quote}
> [1]: ASF policy requires that each file hosted in the project dist
> space have a file with _just_ the MD5 sum in a file named after the
> original with ".md5" as a suffix. (Having an additional file with all
> the checksums is a good practice, IMO.) I brought this up in our last
> round of RCs as well. I don't want to hold up this vote, but I plan to
> start voting -1 on future RCs that don't include md5 files.
> relevant policy:
> http://www.apache.org/dev/release-distribution.html#sigs-and-sums
> {quote}
> [~apurtell]
> {quote}
> Our release documentation (https://hbase.apache.org/book.html#releasing)
> says we should generate sums like so:
> for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
> The make_rc.sh script also encodes the same. Let's fix.
> {quote}
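A hedged sketch of the kind of change being discussed (this is not the project's make_rc.sh): produce a standalone `.md5` (and `.sha512`) file per artifact, each holding just the bare digest as the ASF policy quoted above requires, alongside the combined `.mds` file the book's gpg loop already generates. The artifact name below is a placeholder, and GNU `md5sum`/`sha512sum` are assumed to be available.

```shell
set -e
tmp="$(mktemp -d)"
cd "$tmp"
printf 'fake artifact\n' > hbase-X.Y.Z-bin.tar.gz   # placeholder artifact for the demo

for i in *.tar.gz; do
  # One file per artifact holding only the hex digest, named "<artifact>.md5"
  # (some tools also append the file name; the policy asks for just the sum).
  md5sum "$i"    | awk '{print $1}' > "$i.md5"
  sha512sum "$i" | awk '{print $1}' > "$i.sha512"
done

# Sanity check: each digest file contains a single bare hex digest.
grep -Eq '^[0-9a-f]{32}$'  hbase-X.Y.Z-bin.tar.gz.md5
grep -Eq '^[0-9a-f]{128}$' hbase-X.Y.Z-bin.tar.gz.sha512
echo "checksums written"
```

Equivalently, one could post-process existing `.mds` files by grepping out the MD5/SHA512 lines, which appears to be what the attached patch did by hand.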
[jira] [Updated] (HBASE-15742) Reduce allocation of objects in metrics
[ https://issues.apache.org/jira/browse/HBASE-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phil Yang updated HBASE-15742:
---
Attachment: HBASE-15742-v4.patch

Set Interns final.

> Reduce allocation of objects in metrics
> ---
>
> Key: HBASE-15742
> URL: https://issues.apache.org/jira/browse/HBASE-15742
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.3.0, 1.2.1, 1.0.3, 1.1.4, 0.98.19
> Reporter: Phil Yang
> Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6
>
> Attachments: HBASE-15742-v1.patch, HBASE-15742-v2.patch, HBASE-15742-v3.patch, HBASE-15742-v4.patch
>
> We use JMX and o.a.h.metrics2 to collect metrics on regions, tables, region
> servers, and the cluster. We use MetricsInfo to describe each metric, and we
> cache MetricsInfo objects in Interns because they never change. However,
> Interns has static limits on the number of cached objects: we can cache only
> 2010 metrics, but we have dozens of metrics per region, plus RS-level metrics
> on each RS, and all region metrics are also saved in master. So each server
> has thousands of metrics, and most of them cannot be cached. When we collect
> metrics over JMX we therefore create many objects that could be avoided. This
> increases GC pressure, and because JMX has some caching logic of its own, the
> objects cannot be freed immediately, which increases the pressure further.
> Interns lives in the Hadoop project, and I think its implementation is not a
> good fit for HBase: we cannot know in advance how many MetricsInfo objects we
> will have (it depends on the number of regions), and we cannot make the cache
> unbounded because we should drop objects whose region was split, moved, or
> dropped. I think we can use Guava's cache with expireAfterAccess, which is
> simple and convenient. So we can add a new Interns class in the HBase project
> first and push it upstream later.
> Moreover, MutableHistogram#snapshot creates the same Strings every time; we
> can create them only the first time.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
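The expire-after-access interning idea proposed in HBASE-15742 can be sketched as below. The ticket proposes Guava's `CacheBuilder.expireAfterAccess`; this stand-alone version uses only the JDK so the mechanism is visible, and the class and key names are illustrative, not HBase's actual API.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Interns immutable values, evicting entries not accessed within a TTL. */
public class ExpiringInterner<K, V> {
  private static final class Entry<V> {
    final V value;
    volatile long lastAccessNanos;
    Entry(V value, long now) { this.value = value; this.lastAccessNanos = now; }
  }

  private final ConcurrentHashMap<K, Entry<V>> cache = new ConcurrentHashMap<>();
  private final long ttlNanos;

  public ExpiringInterner(long ttlNanos) { this.ttlNanos = ttlNanos; }

  /** Return the cached value for key, creating it with the loader on a miss. */
  public V intern(K key, Function<K, V> loader) {
    long now = System.nanoTime();
    Entry<V> e = cache.computeIfAbsent(key, k -> new Entry<>(loader.apply(k), now));
    e.lastAccessNanos = now;  // refresh on every access, like expireAfterAccess
    return e.value;
  }

  /** Drop entries not accessed within the TTL (e.g. a region that was split, moved, or dropped). */
  public void evictStale() {
    long cutoff = System.nanoTime() - ttlNanos;
    for (Iterator<Map.Entry<K, Entry<V>>> it = cache.entrySet().iterator(); it.hasNext(); ) {
      if (it.next().getValue().lastAccessNanos < cutoff) it.remove();
    }
  }

  public int size() { return cache.size(); }

  public static void main(String[] args) {
    // Negative TTL makes everything immediately stale, just to demo eviction.
    ExpiringInterner<String, String> interner = new ExpiringInterner<>(-1_000_000L);
    String a = interner.intern("regionFoo", k -> new String("info:" + k));
    String b = interner.intern("regionFoo", k -> new String("info:" + k));
    if (a != b) throw new AssertionError("expected the cached instance back");
    interner.evictStale();
    if (interner.size() != 0) throw new AssertionError("expected eviction");
    System.out.println("ok");
  }
}
```

Unlike the fixed-size table in Hadoop's Interns, this bounds memory by recency of use rather than by a hard count, which matches a population of metrics descriptors that grows and shrinks with the set of live regions.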
[jira] [Commented] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268100#comment-15268100 ]

Hadoop QA commented on HBASE-15752:
---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 2m 59s | master passed |
| +1 | compile | 0m 38s | master passed with JDK v1.8.0 |
| +1 | compile | 0m 32s | master passed with JDK v1.7.0_79 |
| +1 | checkstyle | 4m 17s | master passed |
| +1 | mvneclipse | 0m 16s | master passed |
| +1 | findbugs | 1m 53s | master passed |
| +1 | javadoc | 0m 29s | master passed with JDK v1.8.0 |
| +1 | javadoc | 0m 33s | master passed with JDK v1.7.0_79 |
| +1 | mvninstall | 0m 42s | the patch passed |
| +1 | compile | 0m 44s | the patch passed with JDK v1.8.0 |
| +1 | javac | 0m 44s | the patch passed |
| +1 | compile | 0m 33s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 0m 33s | the patch passed |
| +1 | checkstyle | 4m 20s | the patch passed |
| +1 | mvneclipse | 0m 15s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 8m 48s | Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. |
| +1 | findbugs | 2m 13s | the patch passed |
| +1 | javadoc | 0m 27s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 0m 32s | the patch passed with JDK v1.7.0_79 |
| -1 | unit | 117m 28s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 15s | Patch does not generate ASF License warnings. |
| | | 148m 21s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.access.TestNamespaceCommands |
| | hadoop.hbase.security.access.TestAccessController3 |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801872/15752.v2.patch |
| JIRA Issue | HBASE-15752 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh |
| git revision | master / d77972f |
| Default
[jira] [Commented] (HBASE-15281) Allow the FileSystem inside HFileSystem to be wrapped
[ https://issues.apache.org/jira/browse/HBASE-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268097#comment-15268097 ] Hudson commented on HBASE-15281: FAILURE: Integrated in HBase-Trunk_matrix #886 (See [https://builds.apache.org/job/HBase-Trunk_matrix/886/]) HBASE-15281 Allow the FileSystem inside HFileSystem to be wrapped (antonov: rev bbc7b903350379b3aa50b9d105ff5d43cc166134) * hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java > Allow the FileSystem inside HFileSystem to be wrapped > - > > Key: HBASE-15281 > URL: https://issues.apache.org/jira/browse/HBASE-15281 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration, hbase >Affects Versions: 2.0.0, 1.2.0 >Reporter: Rajesh Nishtala >Assignee: Rajesh Nishtala > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15281-v1.patch > > > It would be very useful for us to be able to wrap the filesystems > encapsulated by HFileSystem with other FilterFileSystems. This allows for > more detailed logging of the operations to the DFS. Internally, the data > logged from this method has allowed us to show application engineers where > their schemas are inefficient and induce too much IO. This patch will just > allow the filesystem to be pluggable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
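The idea in HBASE-15281 is plain delegation: a FilterFileSystem forwards every call to the wrapped FileSystem, so a logging wrapper can observe each operation before passing it through. A minimal stand-alone sketch of that pattern follows; `SimpleFs` and `LoggingFs` are hypothetical stand-ins for Hadoop's `FileSystem`/`FilterFileSystem` so the example runs without the Hadoop jars, not the actual patch code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for org.apache.hadoop.fs.FileSystem, reduced to one
// operation so the sketch is self-contained.
interface SimpleFs {
    byte[] read(String path);
}

// Analogue of a FilterFileSystem: forwards every call to the wrapped
// instance while recording which paths were touched.
class LoggingFs implements SimpleFs {
    private final SimpleFs delegate;
    final List<String> reads = new ArrayList<>();

    LoggingFs(SimpleFs delegate) {
        this.delegate = delegate;
    }

    @Override
    public byte[] read(String path) {
        reads.add(path);            // observe the operation...
        return delegate.read(path); // ...then delegate unchanged
    }
}
```

In the actual change, HFileSystem simply accepts such a wrapper in place of the raw filesystem, which is what "allow the filesystem to be pluggable" refers to.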
[jira] [Commented] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them
[ https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268096#comment-15268096 ] Hudson commented on HBASE-15703: FAILURE: Integrated in HBase-Trunk_matrix #886 (See [https://builds.apache.org/job/HBase-Trunk_matrix/886/]) HBASE-15703 Deadline scheduler needs to return to the client info about (antonov: rev 58c4c3d1748378960446af7f70f00c481c24b9f7) * hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/AdaptiveLifoCoDelCallQueue.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CallRunner.java * hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java > Deadline scheduler needs to return to the client info about skipped calls, > not just drop them > - > > Key: HBASE-15703 > URL: https://issues.apache.org/jira/browse/HBASE-15703 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-15703-branch-1.3.v1.patch, > HBASE-15703-branch-1.3.v2.patch > > > In AdaptiveLifoCoDelCallQueue we drop the calls when we think we're > overloaded; we should instead return CallDroppedException to the client or > something. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
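The shape of the HBASE-15703 fix is that when the LIFO/CoDel queue sheds a call under overload, the server completes that call with an exception the client can see (and retry on) instead of discarding it silently. A simplified, self-contained sketch of that behaviour; the `Call` and `BoundedCallQueue` types here are illustrative models, not the actual AdaptiveLifoCoDelCallQueue code, though `CallDroppedException` matches the class name the patch adds.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The exception the client receives for a shed call (per the patch's intent).
class CallDroppedException extends Exception {
    CallDroppedException() {
        super("Call dropped by server: overloaded");
    }
}

// Minimal model of an RPC call: it either completes or fails with an error.
class Call {
    Exception error; // what the client would observe

    void fail(Exception e) {
        this.error = e;
    }
}

// On overload, shed the oldest call, but tell its client why.
class BoundedCallQueue {
    private final Deque<Call> queue = new ArrayDeque<>();
    private final int capacity;

    BoundedCallQueue(int capacity) {
        this.capacity = capacity;
    }

    void offer(Call c) {
        if (queue.size() >= capacity) {
            Call shed = queue.pollFirst();
            shed.fail(new CallDroppedException()); // not a silent drop
        }
        queue.addLast(c);
    }
}
```

The commit also touches ClientExceptionsUtil and PreemptiveFastFailInterceptor on the client side, so the new exception is classified as retryable rather than fatal.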
[jira] [Commented] (HBASE-15742) Reduce allocation of objects in metrics
[ https://issues.apache.org/jira/browse/HBASE-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268094#comment-15268094 ] Hadoop QA commented on HBASE-15742: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 19s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} master passed with JDK 
v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s {color} | {color:red} hbase-hadoop2-compat: patch generated 2 new + 31 unchanged - 0 fixed = 33 total (was 31) {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s {color} | {color:green} hbase-hadoop2-compat in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 7s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 20s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801888/HBASE-15742-v3.patch | | JIRA Issue | HBASE-15742 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh | | git revision | master / d77972f | | Default Java | 1.7.0_79 | | Multi-JDK versions | /home/jenkins/tools/java/jdk
[jira] [Work started] (HBASE-15705) Add on meta cache.
[ https://issues.apache.org/jira/browse/HBASE-15705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-15705 started by Mikhail Antonov. --- > Add on meta cache. > -- > > Key: HBASE-15705 > URL: https://issues.apache.org/jira/browse/HBASE-15705 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Mikhail Antonov > > We need to cache this stuff, and it needs to be fast. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15742) Reduce allocation of objects in metrics
[ https://issues.apache.org/jira/browse/HBASE-15742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Yang updated HBASE-15742: -- Attachment: HBASE-15742-v3.patch Not really sure about the correct order of imports; hope this time is right... > Reduce allocation of objects in metrics > --- > > Key: HBASE-15742 > URL: https://issues.apache.org/jira/browse/HBASE-15742 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.0, 1.2.1, 1.0.3, 1.1.4, 0.98.19 >Reporter: Phil Yang >Assignee: Phil Yang > Fix For: 2.0.0, 1.3.0, 1.0.4, 1.4.0, 1.2.2, 0.98.20, 1.1.6 > > Attachments: HBASE-15742-v1.patch, HBASE-15742-v2.patch, > HBASE-15742-v3.patch > > > We use JMX and o.a.h.metrics2 to collect metrics on regions, tables, region > servers and the cluster. We use MetricsInfo to show the information of metrics, > and we use Interns to cache MetricsInfo objects because they won't change. > However, Interns has some static limits on the maximum number of cached > objects. We can only cache 2010 metrics, but we have dozens of metrics for > one region, some RS-level metrics in each RS, and all metrics for all regions > are saved in the master. So each server will have thousands of metrics, and > we cannot cache most of them. When we collect metrics by JMX, we create many > objects which could be avoided. This increases the pressure on GC, and JMX > has some caching logic, so the objects cannot be removed immediately, which > increases the pressure further. > Interns is in the Hadoop project, and I think its implementation is not > suitable for HBase, because we cannot know how many MetricsInfo objects we > have; it depends on the number of regions. And we cannot make the cache > unlimited, because we should remove the objects whose region is split, moved, > or dropped. I think we can use Guava's cache with expireAfterAccess, which is > very simple and convenient. So we can add a new Interns class in the HBase > project first, and contribute it upstream later. 
> Moreover, in MutableHistogram#snapshot we create the same Strings each > time; we could create them only the first time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
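The proposal above is an HBase-local Interns replacement backed by Guava's CacheBuilder with expireAfterAccess, so MetricsInfo objects for split, moved, or dropped regions age out instead of hitting Hadoop's hard cap of 2010 entries. Guava is not needed to illustrate the shape; the following stand-alone sketch models expire-after-access with an explicit last-access timestamp (class and method names are hypothetical, and the real patch would delegate this bookkeeping to Guava's cache):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an interner with expire-after-access semantics, standing in for
// Guava's CacheBuilder.newBuilder().expireAfterAccess(...) proposed here.
class ExpiringInterner<T> {
    private static final class Entry<T> {
        final T value;
        volatile long lastAccess;
        Entry(T value, long now) {
            this.value = value;
            this.lastAccess = now;
        }
    }

    private final Map<T, Entry<T>> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    ExpiringInterner(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Return the canonical cached instance, refreshing its access time.
    T intern(T candidate, long now) {
        Entry<T> e = cache.computeIfAbsent(candidate, k -> new Entry<>(k, now));
        e.lastAccess = now;
        return e.value;
    }

    // Drop entries not touched within the TTL (Guava does this lazily).
    void evictExpired(long now) {
        cache.values().removeIf(e -> now - e.lastAccess > ttlMillis);
    }

    int size() {
        return cache.size();
    }
}
```

Unlike a fixed-size table, nothing here depends on knowing the number of regions in advance, which is the property the description argues Hadoop's Interns lacks.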
[jira] [Commented] (HBASE-15754) Add testcase for AES encryption
[ https://issues.apache.org/jira/browse/HBASE-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268032#comment-15268032 ] Duo Zhang commented on HBASE-15754: --- [~ghelmling] FYI. > Add testcase for AES encryption > --- > > Key: HBASE-15754 > URL: https://issues.apache.org/jira/browse/HBASE-15754 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-15754.patch > > > As discussed in mailing list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15754) Add testcase for AES encryption
[ https://issues.apache.org/jira/browse/HBASE-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-15754: -- Fix Version/s: 2.0.0 Affects Version/s: 2.0.0 Status: Patch Available (was: Open) > Add testcase for AES encryption > --- > > Key: HBASE-15754 > URL: https://issues.apache.org/jira/browse/HBASE-15754 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-15754.patch > > > As discussed in mailing list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15754) Add testcase for AES encryption
[ https://issues.apache.org/jira/browse/HBASE-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-15754: -- Component/s: wal > Add testcase for AES encryption > --- > > Key: HBASE-15754 > URL: https://issues.apache.org/jira/browse/HBASE-15754 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-15754.patch > > > As discussed in mailing list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15754) Add testcase for AES encryption
[ https://issues.apache.org/jira/browse/HBASE-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-15754: -- Attachment: HBASE-15754.patch We do have some bugs here... > Add testcase for AES encryption > --- > > Key: HBASE-15754 > URL: https://issues.apache.org/jira/browse/HBASE-15754 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HBASE-15754.patch > > > As discussed in mailing list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15754) Add testcase for AES encryption
Duo Zhang created HBASE-15754: - Summary: Add testcase for AES encryption Key: HBASE-15754 URL: https://issues.apache.org/jira/browse/HBASE-15754 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang Assignee: Duo Zhang As discussed in mailing list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268017#comment-15268017 ] Hudson commented on HBASE-15720: SUCCESS: Integrated in HBase-1.2 #615 (See [https://builds.apache.org/job/HBase-1.2/615/]) HBASE-15720 Print row locks at the debug dump page; addendum (chenheng: rev bf13941592ab0c947ce76cf4c353696414fc1067) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSDumpServlet.java > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them
[ https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268012#comment-15268012 ] Hudson commented on HBASE-15703: SUCCESS: Integrated in HBase-1.3-IT #644 (See [https://builds.apache.org/job/HBase-1.3-IT/644/]) HBASE-15703 Deadline scheduler needs to return to the client info about (antonov: rev 319ea27bd8f9c701c9ed5d2d94f880cdfc23dfe5) * hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/AdaptiveLifoCoDelCallQueue.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java * hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CallRunner.java > Deadline scheduler needs to return to the client info about skipped calls, > not just drop them > - > > Key: HBASE-15703 > URL: https://issues.apache.org/jira/browse/HBASE-15703 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-15703-branch-1.3.v1.patch, > HBASE-15703-branch-1.3.v2.patch > > > In AdaptiveLifoCoDelCallQueue we drop the calls when we think we're > overloaded; we should instead return CallDroppedException to the client or > something. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15281) Allow the FileSystem inside HFileSystem to be wrapped
[ https://issues.apache.org/jira/browse/HBASE-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268013#comment-15268013 ] Hudson commented on HBASE-15281: SUCCESS: Integrated in HBase-1.3-IT #644 (See [https://builds.apache.org/job/HBase-1.3-IT/644/]) HBASE-15281 Allow the FileSystem inside HFileSystem to be wrapped (antonov: rev 65eed7c54dad56b8269f27dcb9942731e9d7d85d) * hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java > Allow the FileSystem inside HFileSystem to be wrapped > - > > Key: HBASE-15281 > URL: https://issues.apache.org/jira/browse/HBASE-15281 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration, hbase >Affects Versions: 2.0.0, 1.2.0 >Reporter: Rajesh Nishtala >Assignee: Rajesh Nishtala > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15281-v1.patch > > > It would be very useful for us to be able to wrap the filesystems > encapsulated by HFileSystem with other FilterFileSystems. This allows for > more detailed logging of the operations to the DFS. Internally, the data > logged from this method has allowed us to show application engineers where > their schemas are inefficient and induce too much IO. This patch will just > allow the filesystem to be pluggable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them
[ https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268010#comment-15268010 ] Hudson commented on HBASE-15703: SUCCESS: Integrated in HBase-1.3 #680 (See [https://builds.apache.org/job/HBase-1.3/680/]) HBASE-15703 Deadline scheduler needs to return to the client info about (antonov: rev 319ea27bd8f9c701c9ed5d2d94f880cdfc23dfe5) * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/AdaptiveLifoCoDelCallQueue.java * hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java * hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CallRunner.java > Deadline scheduler needs to return to the client info about skipped calls, > not just drop them > - > > Key: HBASE-15703 > URL: https://issues.apache.org/jira/browse/HBASE-15703 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-15703-branch-1.3.v1.patch, > HBASE-15703-branch-1.3.v2.patch > > > In AdaptiveLifoCoDelCallQueue we drop the calls when we think we're > overloaded; we should instead return CallDroppedException to the client or > something. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15281) Allow the FileSystem inside HFileSystem to be wrapped
[ https://issues.apache.org/jira/browse/HBASE-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15268011#comment-15268011 ] Hudson commented on HBASE-15281: SUCCESS: Integrated in HBase-1.3 #680 (See [https://builds.apache.org/job/HBase-1.3/680/]) HBASE-15281 Allow the FileSystem inside HFileSystem to be wrapped (antonov: rev 65eed7c54dad56b8269f27dcb9942731e9d7d85d) * hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java > Allow the FileSystem inside HFileSystem to be wrapped > - > > Key: HBASE-15281 > URL: https://issues.apache.org/jira/browse/HBASE-15281 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration, hbase >Affects Versions: 2.0.0, 1.2.0 >Reporter: Rajesh Nishtala >Assignee: Rajesh Nishtala > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15281-v1.patch > > > It would be very useful for us to be able to wrap the filesystems > encapsulated by HFileSystem with other FilterFileSystems. This allows for > more detailed logging of the operations to the DFS. Internally, the data > logged from this method has allowed us to show application engineers where > their schemas are inefficient and induce too much IO. This patch will just > allow the filesystem to be pluggable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15281) Allow the FileSystem inside HFileSystem to be wrapped
[ https://issues.apache.org/jira/browse/HBASE-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267987#comment-15267987 ] Hudson commented on HBASE-15281: FAILURE: Integrated in HBase-1.4 #128 (See [https://builds.apache.org/job/HBase-1.4/128/]) HBASE-15281 Allow the FileSystem inside HFileSystem to be wrapped (antonov: rev 36d634d353c2193d87d60f6525647360d8a27379) * hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java > Allow the FileSystem inside HFileSystem to be wrapped > - > > Key: HBASE-15281 > URL: https://issues.apache.org/jira/browse/HBASE-15281 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration, hbase >Affects Versions: 2.0.0, 1.2.0 >Reporter: Rajesh Nishtala >Assignee: Rajesh Nishtala > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15281-v1.patch > > > It would be very useful for us to be able to wrap the filesystems > encapsulated by HFileSystem with other FilterFileSystems. This allows for > more detailed logging of the operations to the DFS. Internally, the data > logged from this method has allowed us to show application engineers where > their schemas are inefficient and induce too much IO. This patch will just > allow the filesystem to be pluggable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them
[ https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267986#comment-15267986 ] Hudson commented on HBASE-15703: FAILURE: Integrated in HBase-1.4 #128 (See [https://builds.apache.org/job/HBase-1.4/128/]) HBASE-15703 Deadline scheduler needs to return to the client info about (antonov: rev 7e0e86072aa2f372184d017aa18555fafa4bd459) * hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ClientExceptionsUtil.java * hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CallRunner.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallDroppedException.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/AdaptiveLifoCoDelCallQueue.java * hbase-client/src/main/java/org/apache/hadoop/hbase/CallQueueTooBigException.java > Deadline scheduler needs to return to the client info about skipped calls, > not just drop them > - > > Key: HBASE-15703 > URL: https://issues.apache.org/jira/browse/HBASE-15703 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-15703-branch-1.3.v1.patch, > HBASE-15703-branch-1.3.v2.patch > > > In AdaptiveLifoCoDelCallQueue we drop the calls when we think we're > overloaded; we should instead return CallDroppedException to the client or > something. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Description: [~cartershanklin] reported the following when he tried out back / restore feature in a Phoenix enabled deployment: {code} 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot get log reader at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) Caused by: java.lang.UnsupportedOperationException: Unable to find org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) at org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) ... 12 more Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) {code} This was due to the IndexedWALEditCodec (specified thru hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop classpath. WALPlayer should handle this situation better by adding the jar for IndexedWALEditCodec class to mapreduce job dependency. Although this was found during testing of backup / restore, the error may occur in other places where WALPlayer needs custom WAL codec for the replay. was: [~cartershanklin] reported the following when he tried out back / restore feature in a Phoenix enabled deployment: {code} 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1462215011294_0001_m_00_0 - exited : java.io. 
IOException: Cannot get log reader at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) Caused by: java.lang.UnsupportedOperationException: Unable to find org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) at org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) at org.apache.hadoop.hbase.regionserver.wal.Pr
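The direction of the HBASE-15752 fix is to resolve the configured hbase.regionserver.wal.codec class up front and ship the jar containing it with the MapReduce job (in HBase this is the job TableMapReduceUtil.addDependencyJars performs), so the failure surfaces at submit time rather than as a mid-job ClassNotFoundException. The class lookup itself is plain Java; the following self-contained sketch shows resolving a codec class and locating its jar, with `CodecJarLocator` being an illustrative name, not a class from the patch:

```java
import java.net.URL;
import java.security.CodeSource;

class CodecJarLocator {
    // Resolve a configured codec class and report where its jar lives, so
    // the job can add that jar to its classpath before any task runs.
    // Throws ClassNotFoundException at submit time if the codec is absent.
    static URL locate(String codecClassName) throws ClassNotFoundException {
        Class<?> codec = Class.forName(codecClassName);
        CodeSource src = codec.getProtectionDomain().getCodeSource();
        // CodeSource can be null for classes loaded by the bootstrap loader.
        return src == null ? null : src.getLocation();
    }
}
```

WALPlayer would then pass the resolved class to the dependency-jar mechanism so the IndexedWALEditCodec jar travels with the map tasks, which is the behavior the attached patch aims for.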
[jira] [Updated] (HBASE-15714) We are calling checkRow() twice in doMiniBatchMutation()
[ https://issues.apache.org/jira/browse/HBASE-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heng Chen updated HBASE-15714: -- Attachment: HBASE-15714-branch-1.patch > We are calling checkRow() twice in doMiniBatchMutation() > > > Key: HBASE-15714 > URL: https://issues.apache.org/jira/browse/HBASE-15714 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-15714-branch-1.patch, HBASE-15714.patch, > HBASE-15714_v1.patch, HBASE-15714_v2.patch > > > In {{multi()}} -> {{doMiniBatchMutation()}} code path, we end up calling > {{checkRow()}} twice, once from {{checkBatchOp()}} and once from > {{getRowLock()}}. > See [~anoop.hbase]'s comments at > https://issues.apache.org/jira/browse/HBASE-15600?focusedCommentId=15257636&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15257636. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
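The redundancy HBASE-15714 removes can be shown with a toy sketch (illustrative only, not HRegion's actual code): row validation used to run once in the batch pre-check and again inside lock acquisition, and the fix is to let the lock path skip re-validation when the caller has already checked.

```java
// Toy illustration of the HBASE-15714 redundancy: checkRow() ran once in
// checkBatchOp() and again in getRowLock(). Names here are stand-ins for
// the real HRegion methods; the point is the "validate once" shape.
public class RowCheckOnce {

    static void checkRow(byte[] row) {
        if (row == null || row.length == 0) {
            throw new IllegalArgumentException("row out of range");
        }
    }

    // Returns true when this call performed the validation itself, so the
    // duplicate work is observable.
    static boolean getRowLock(byte[] row, boolean alreadyChecked) {
        boolean validatedHere = false;
        if (!alreadyChecked) {
            checkRow(row);          // only validate when no caller has yet
            validatedHere = true;
        }
        // ...acquire the actual row lock here...
        return validatedHere;
    }

    public static void main(String[] args) {
        byte[] row = {1};
        checkRow(row);                        // batch pre-check, once
        boolean dup = getRowLock(row, true);  // lock path skips the re-check
        System.out.println("re-validated in lock path: " + dup);  // false
    }
}
```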
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Hadoop Flags: Reviewed Fix Version/s: 1.4.0 1.3.0 2.0.0 > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 2.0.0, 1.3.0, 1.4.0 > > Attachments: 15752.v1.patch, 15752.v2.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > 
at > org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
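The remedy described above boils down to resolving the class named by hbase.regionserver.wal.codec and shipping its jar with the job, rather than surfacing a raw ClassNotFoundException deep inside the mapper. A minimal stdlib-only sketch of the resolution step (the class name WalCodecResolver and the java.lang.* stand-in codec names are illustrative, not HBase's actual WALCellCodec.create code):

```java
import java.util.Map;

// Illustrative sketch: resolve the WAL codec class named by the
// hbase.regionserver.wal.codec key, falling back to a default, and turn a
// ClassNotFoundException into an actionable error naming the missing class
// -- the situation WALPlayer hit when Phoenix's IndexedWALEditCodec jar was
// absent from the job's classpath.
public class WalCodecResolver {
    public static final String CODEC_KEY = "hbase.regionserver.wal.codec";

    public static Class<?> resolveCodec(Map<String, String> conf,
                                        String defaultCodecClass) {
        String name = conf.getOrDefault(CODEC_KEY, defaultCodecClass);
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(
                "WAL codec class " + name + " is not on the classpath; "
                + "ship its jar with the job as a dependency jar", e);
        }
    }

    public static void main(String[] args) {
        // java.lang.String stands in for the default codec class here.
        Class<?> c = resolveCodec(Map.of(), "java.lang.String");
        System.out.println("resolved codec: " + c.getName());
    }
}
```

In the real fix the resolved class is then handed to the job's dependency-jar machinery so the containing jar travels with the MapReduce job.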
[jira] [Commented] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267970#comment-15267970 ] Hadoop QA commented on HBASE-15337: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 12s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 22s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 37s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 16s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 182m 2s {color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 222m 27s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801811/HBASE-15337-v2.patch | | JIRA Issue | HBASE-15337 | | Optional Tests | asflicense javac javadoc unit | | uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh | | git revision | master / 58c4c3d | | Default Java | 1.7.0_79 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/1714/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/1714/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/1714/console | | Powered by | Apache Yetus 0.2.1 http://yetus.apache.org | This message was automatically generated. 
> Document FIFO and date tiered compaction in the book > > > Key: HBASE-15337 > URL: https://issues.apache.org/jira/browse/HBASE-15337 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Enis Soztutar > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15337-v1.patch, HBASE-15337-v2.patch, > HBASE-15337.patch > > > We have two new compaction algorithms FIFO and Date tiered that are for time > series data. We should document how to use them in the book. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15714) We are calling checkRow() twice in doMiniBatchMutation()
[ https://issues.apache.org/jira/browse/HBASE-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267957#comment-15267957 ] Heng Chen commented on HBASE-15714: --- Pushed to master. Let me upload the patch for branch-1 later. > We are calling checkRow() twice in doMiniBatchMutation() > > > Key: HBASE-15714 > URL: https://issues.apache.org/jira/browse/HBASE-15714 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-15714.patch, HBASE-15714_v1.patch, > HBASE-15714_v2.patch > > > In {{multi()}} -> {{doMiniBatchMutation()}} code path, we end up calling > {{checkRow()}} twice, once from {{checkBatchOp()}} and once from > {{getRowLock()}}. > See [~anoop.hbase]'s comments at > https://issues.apache.org/jira/browse/HBASE-15600?focusedCommentId=15257636&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15257636. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: hbaseAdmin tableExists create catalogTracker for every call
BTW, you can use d...@hbase.apache.org rather than issues@. The latter is more for emails from jira. On Mon, May 2, 2016 at 7:11 PM, Enis Söztutar wrote: > Thanks for reporting. > > In master and branch-1, this part of the code is very different and no > longer has the problem. > > Did you check the latest 0.98 code base? It may not be worth fixing at > this point. > > Enis > > On Mon, May 2, 2016 at 6:21 AM, WangYQ wrote: > >> the code : >> >> private synchronized CatalogTracker getCatalogTracker() >> throws ZooKeeperConnectionException, IOException { >> CatalogTracker ct = null; >> try { >> ct = new CatalogTracker(this.conf); >> ct.start(); >> } catch (InterruptedException e) { >> // Let it out as an IOE for now until we redo all so tolerate IEs >> Thread.currentThread().interrupt(); >> throw new IOException("Interrupted", e); >> } >> return ct; >> } >> >> >> I think we can make CatalogTracker be a object of HBaseAdmin class, can >> reduce many object create and destroy, reduce client to ZK >> >> >> >> >> >> >> >> At 2016-04-19 21:09:42, "WangYQ" wrote: >> >> in hbase 0.98.10, class "HBaseAdmin " >> line 303, method "tableExists", will create a catalogTracker for >> every call >> >> >> we can let a HBaseAdmin object use one CatalogTracker object, to reduce >> the object create, connect zk and so on >> >> >> >> >> >> >> >> >> >> > >
Re: hbaseAdmin tableExists create catalogTracker for every call
Thanks for reporting. In master and branch-1, this part of the code is very different and no longer has the problem. Did you check the latest 0.98 code base? It may not be worth fixing at this point. Enis On Mon, May 2, 2016 at 6:21 AM, WangYQ wrote: > the code : > > private synchronized CatalogTracker getCatalogTracker() > throws ZooKeeperConnectionException, IOException { > CatalogTracker ct = null; > try { > ct = new CatalogTracker(this.conf); > ct.start(); > } catch (InterruptedException e) { > // Let it out as an IOE for now until we redo all so tolerate IEs > Thread.currentThread().interrupt(); > throw new IOException("Interrupted", e); > } > return ct; > } > > > I think we can make CatalogTracker be a object of HBaseAdmin class, can > reduce many object create and destroy, reduce client to ZK > > > > > > > > At 2016-04-19 21:09:42, "WangYQ" wrote: > > in hbase 0.98.10, class "HBaseAdmin " > line 303, method "tableExists", will create a catalogTracker for > every call > > > we can let a HBaseAdmin object use one CatalogTracker object, to reduce > the object create, connect zk and so on > > > > > > > > > >
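The suggestion in the thread above is a plain lazy-caching pattern: construct the tracker once per admin instance instead of on every tableExists() call. A minimal sketch (class names are stand-ins, not the 0.98 HBaseAdmin/CatalogTracker API):

```java
// Illustrative sketch of reusing one tracker per admin instance rather than
// creating, starting and discarding one per tableExists() call. Tracker is
// a hypothetical stand-in for CatalogTracker; its constructor represents
// the expensive work (ZK connection etc.).
public class CachedTrackerAdmin {

    static class Tracker {
        static int constructions = 0;      // observable construction cost
        Tracker() { constructions++; }
        boolean tableExists(String table) { return !table.isEmpty(); }
    }

    private Tracker tracker;               // shared across calls

    private synchronized Tracker getTracker() {
        if (tracker == null) {
            tracker = new Tracker();       // created once, lazily
        }
        return tracker;
    }

    public boolean tableExists(String table) {
        return getTracker().tableExists(table);
    }

    public static void main(String[] args) {
        CachedTrackerAdmin admin = new CachedTrackerAdmin();
        admin.tableExists("t1");
        admin.tableExists("t2");
        // one tracker serves both calls
        System.out.println("trackers created: " + Tracker.constructions);
    }
}
```

As noted in the reply, master and branch-1 already restructured this code path; the sketch only shows the shape of the 0.98-era suggestion.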
[jira] [Commented] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267933#comment-15267933 ] Enis Soztutar commented on HBASE-15752: --- +1. > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch, 15752.v2.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Attachment: 15752.v2.patch Patch v2 addresses Enis' comment > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch, 15752.v2.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267927#comment-15267927 ] Enis Soztutar commented on HBASE-15337: --- bq. Reference book to use mvn clean site -DskipTests but it would not build on my box Thanks Clara. Created HBASE-15753 for this. I was able to get a build going with: {code} mvn site -DskipTests -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true {code} Looks pretty good overall. Some small items. These bullet points are not rendered as such: {code} +*random gets without a limited time range +*frequent deletes and updates {code} We can make this read: {code}a high number, e.g. 60{code} Add one more space here: {code} +You also need t {code} This is not needed with the online-config change: {code} If the table already exists, disable the table. {code} This should be updated: {code} + See <> for more information. {code} Add a link to your nice design doc at the end? > Document FIFO and date tiered compaction in the book > > > Key: HBASE-15337 > URL: https://issues.apache.org/jira/browse/HBASE-15337 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Enis Soztutar > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15337-v1.patch, HBASE-15337-v2.patch, > HBASE-15337.patch > > > We have two new compaction algorithms, FIFO and Date tiered, that are for time > series data. We should document how to use them in the book. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15753) Site does not build with the instructions in the book
Enis Soztutar created HBASE-15753: - Summary: Site does not build with the instructions in the book Key: HBASE-15753 URL: https://issues.apache.org/jira/browse/HBASE-15753 Project: HBase Issue Type: Bug Reporter: Enis Soztutar Originally reported by [~clarax98007] in HBASE-15337. Instructions in the book say to run: {code} mvn site -DskipTests {code} But it fails with javadoc related errors. Seems that we are using this in the jenkins job that [~misty] had setup (https://builds.apache.org/job/hbase_generate_website/): {code}mvn site -DskipTests -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true {code} We should either fix the javadoc, or update the instructions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14920) Compacting Memstore
[ https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267896#comment-15267896 ] Hadoop QA commented on HBASE-14920: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 1s {color} | {color:blue} rubocop was not available. {color} | | {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 1s {color} | {color:blue} Ruby-lint was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 10 new or modified test files. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 8s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 20s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 6s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s {color} | {color:red} branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 12s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s {color} | {color:green} the patch passed with JDK 
v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 37s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 16s {color} | {color:red} hbase-common: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s {color} | {color:red} hbase-client: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 0s {color} | {color:red} hbase-server: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s {color} | {color:red} hbase-shell: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s {color} | {color:red} hbase-it: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 4s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 15s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s {color} | {color:red} patch/hbase-it no findbugs output file (hbase-it/target/findbu
[jira] [Commented] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267859#comment-15267859 ] Enis Soztutar commented on HBASE-15752: --- Thanks Ted. This is the correct approach. However, there is no need for {{getNonDefaultWALCellCodecClass}}. We can just add the jar of WALCodec to the dependency jars regardless of whether it is default or not. > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. 
IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 
12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15658) RegionServerCallable / RpcRetryingCaller clear meta cache on retries
[ https://issues.apache.org/jira/browse/HBASE-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling updated HBASE-15658: -- Resolution: Fixed Status: Resolved (was: Patch Available) This has been committed to 1.3.0+. [~busbey] if you want this in 1.2, let me know and I will do a backport JIRA. It should apply without any changes. > RegionServerCallable / RpcRetryingCaller clear meta cache on retries > > > Key: HBASE-15658 > URL: https://issues.apache.org/jira/browse/HBASE-15658 > Project: HBase > Issue Type: Sub-task > Components: Client >Affects Versions: 1.2.1 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Critical > Fix For: 2.0.0, 1.3.0, 1.4.0 > > Attachments: hbase-15658.001.patch, hbase-15658.002.patch, > hbase-15658.branch-1.3.001.patch > > > When RpcRetryingCaller.callWithRetries() attempts a retry, it calls > RetryingCallable.prepare(tries != 0). For RegionServerCallable (and probably > others), this will wind up calling > RegionLocator.getRegionLocation(reload=true), which will drop the meta cache > for the given region and always go back to meta. > This is kind of silly, since in the case of exceptions, we already call > RetryingCallable.throwable(), which goes to great pains to only refresh the > meta cache when necessary. Since we are already doing this on failure, I > don't really understand why we are doing duplicate work to refresh the meta > cache on prepare() at all. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
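The idea in HBASE-15658 above can be reduced to a small sketch (names are illustrative, not the real RpcRetryingCaller/RegionLocator API): keep the cached region location across retries and invalidate it only from the failure path, instead of unconditionally reloading on every retry's prepare().

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy sketch of "refresh the meta cache only on failure": the retry loop
// never drops the cached location up front; only a failed attempt evicts
// it, so the next attempt goes back to "meta". The lookups counter makes
// the simulated meta scans observable.
public class RetryWithCacheOnFailure {

    static int lookups = 0;  // counts cache misses, i.e. trips to "meta"
    static final Map<String, String> LOCATION_CACHE = new HashMap<>();

    static String locate(String region) {
        String loc = LOCATION_CACHE.get(region);
        if (loc == null) {
            lookups++;                       // simulated meta scan
            loc = "server-for-" + region;
            LOCATION_CACHE.put(region, loc);
        }
        return loc;
    }

    // Assumes retries >= 1; a sketch, so no handling of retries == 0.
    static <T> T callWithRetries(String region, int retries, Supplier<T> call) {
        RuntimeException last = null;
        for (int tries = 0; tries < retries; tries++) {
            locate(region);                  // prepare(): cache NOT dropped
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                LOCATION_CACHE.remove(region);  // refresh only on failure
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        System.out.println(callWithRetries("demo", 2, () -> "value"));
    }
}
```

This mirrors the observation in the issue: the throwable() path already refreshes the cache carefully on failure, so dropping it again in prepare() is duplicate work.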
[jira] [Resolved] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heng Chen resolved HBASE-15720. --- Resolution: Fixed > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267778#comment-15267778 ] Mikhail Antonov commented on HBASE-15720: - thanks for the fix! this one could be resolved again? > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267774#comment-15267774 ] Mikhail Antonov commented on HBASE-15691: - [~apurtell] Looked more at the patch, - as [~busbey] noted, there are two unsynchronized methods accessing bucket lists - findAndRemoveCompletelyFreeBucket() and freeBlock(). Since this is a backport, should we also synchronize them? If so, that would then need to go to other branches? - seems like the optimization in HBASE-14624 would be easily incorporable into this patch, just changing field types? Let me know if I can help with reviews here or otherwise... > Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to > branch-1 > - > > Key: HBASE-15691 > URL: https://issues.apache.org/jira/browse/HBASE-15691 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.3.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell > Fix For: 1.3.0, 1.2.2 > > Attachments: HBASE-15691-branch-1.patch > > > HBASE-10205 was committed to trunk and 0.98 branches only. To preserve > continuity we should commit it to branch-1. The change requires more than > nontrivial fixups so I will attach a backport of the change from trunk to > current branch-1 here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
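The synchronization question raised above can be sketched with a toy model. The method names echo BucketAllocator's, but this stand-in is not the real class: when the mutating and iterating methods synchronize on the same monitor, an iteration can never observe a concurrent structural modification, which is the ConcurrentModificationException HBASE-10205 addressed:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of the race fixed by HBASE-10205 -- names are illustrative only.
class ToyBucketAllocator {
    private final List<Integer> bucketUsedCounts = new ArrayList<>();

    synchronized void addBucket(int usedCount) { bucketUsedCounts.add(usedCount); }

    // Mutates the list; must hold the same lock as any iterating method,
    // otherwise a concurrent iteration can hit ConcurrentModificationException.
    synchronized void freeBlock(int usedCount) {
        bucketUsedCounts.remove(Integer.valueOf(usedCount));
    }

    // Iterates the list; synchronized so freeBlock() can't mutate mid-scan.
    synchronized Integer findAndRemoveCompletelyFreeBucket() {
        for (Iterator<Integer> it = bucketUsedCounts.iterator(); it.hasNext(); ) {
            Integer used = it.next();
            if (used == 0) { it.remove(); return used; }
        }
        return null;
    }

    public static void main(String[] args) {
        ToyBucketAllocator a = new ToyBucketAllocator();
        a.addBucket(3);
        a.addBucket(0);
        a.freeBlock(3);
        System.out.println(a.findAndRemoveCompletelyFreeBucket()); // the free bucket
        System.out.println(a.findAndRemoveCompletelyFreeBucket()); // none left
    }
}
```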
[jira] [Commented] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267775#comment-15267775 ] Hadoop QA commented on HBASE-15741: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 32s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} master passed with 
JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 7s {color} | {color:red} hbase-client: patch generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 37s {color} | {color:red} hbase-server: patch generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 52s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 40s {color} | {color:red} hbase-server in the patch failed. {
[jira] [Commented] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267770#comment-15267770 ] Hudson commented on HBASE-15720: SUCCESS: Integrated in HBase-1.2-IT #496 (See [https://builds.apache.org/job/HBase-1.2-IT/496/]) HBASE-15720 Print row locks at the debug dump page; addendum (chenheng: rev bf13941592ab0c947ce76cf4c353696414fc1067) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSDumpServlet.java > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-15749) Shade guava dependency
[ https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-15749. --- Resolution: Not A Problem Resolving as not a problem. We got here because HBASE-15737 had StopWatch in hbase-server as a 'problem'. Attempts at eliciting why it was a problem got "Our codebase should be consistent w.r.t. the usage of stop watch" and "... reduce the chance of incompatibilities in case newer version of Guava is involved", and so on. HBASE-15737 seems to have been prompted by HBASE-14963 where suggestions of shaded client didn't carry because version concerned was older -- pre-shaded-client. HBASE-14963 included suggestion of shading guava that was repeated by me in HBASE-15737 when in this later context I should have talked up shaded modules instead. > Shade guava dependency > -- > > Key: HBASE-15749 > URL: https://issues.apache.org/jira/browse/HBASE-15749 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > HBase codebase uses Guava library extensively. > There have been JIRAs such as HBASE-14963 which tried to make compatibility > story around Guava better. > Long term fix, as suggested over in HBASE-14963, is to shade Guava dependency. > Future use of Guava in HBase would be more secure once shading is done. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
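For context on what "shade guava" would mean in practice, a typical maven-shade-plugin relocation looks like the fragment below. This is illustrative only, not a pom change proposed on this issue, and the shaded package name is an assumption:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- rewrite Guava packages so user code can bring its own Guava -->
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```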
[jira] [Commented] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267733#comment-15267733 ] Heng Chen commented on HBASE-15720: --- Fix compile error on branch-1.2 and branch-1.0, recheck branch-1.1 and 0.98, they look good. Sorry again for that. > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heng Chen updated HBASE-15720: -- Attachment: HBASE-15720-branch-1.0-addendum.patch > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.0-addendum.patch, HBASE-15720-branch-1.2-addendum.patch, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heng Chen updated HBASE-15720: -- Attachment: HBASE-15720-branch-1.2-addendum.patch I am really sorry for that. I will be more careful next time > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720-branch-1.2-addendum.patch, HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267700#comment-15267700 ] Mikhail Antonov commented on HBASE-15741: - +1 pending tests run > TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3 > --- > > Key: HBASE-15741 > URL: https://issues.apache.org/jira/browse/HBASE-15741 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.3.0 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Blocker > Fix For: 1.3.0 > > Attachments: HBASE-15741.001.patch, HBASE-15741.002.patch > > > Attempting to run a map reduce job with a 1.3 client on a secure cluster > running 1.2 is failing when making the coprocessor rpc to obtain a delegation > token: > {noformat} > Exception in thread "main" > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered > coprocessor service found for name hbase.pb.AuthenticationService in region > hbase:meta,,1 > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7741) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:137) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) > at > org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) > at > 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:490) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:209) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:162) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:285) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUt
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Attachment: 15752.v1.patch > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch > > > [~cartershanklin] reported the following when he tried out the backup / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io.IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
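A hedged sketch of the direction proposed above: resolve the configured codec class on the client side and locate the jar it came from, so the jar can be shipped with the job (in HBase proper this would go through TableMapReduceUtil.addDependencyJars; the class below is a stand-alone illustration, not WALPlayer's actual code):

```java
import java.io.File;
import java.security.CodeSource;

// Stand-alone sketch: locate the jar/classes dir a class was loaded from, the
// way WALPlayer could resolve hbase.regionserver.wal.codec before submission.
class CodecJarLocator {
    static String findContainingJar(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? null : new File(src.getLocation().getPath()).getAbsolutePath();
    }

    public static void main(String[] args) {
        // In WALPlayer this would come from conf.get("hbase.regionserver.wal.codec");
        // defaulting to this class keeps the demo self-contained.
        String codecClassName = System.getProperty("wal.codec", CodecJarLocator.class.getName());
        try {
            Class<?> codec = Class.forName(codecClassName);
            System.out.println("would add to job dependencies: " + findContainingJar(codec));
        } catch (ClassNotFoundException e) {
            // Fail fast at submission time instead of inside a map task
            System.out.println("codec not on client classpath: " + codecClassName);
        }
    }
}
```

Resolving the class up front also turns the opaque in-task ClassNotFoundException into a clear submission-time error.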
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Attachment: (was: 15752.v1.patch) > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling updated HBASE-15741: -- Attachment: HBASE-15741.002.patch Include license headers for CoprocessorRpcUtils and TestCoprocessorRpcUtils. > TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3 > --- > > Key: HBASE-15741 > URL: https://issues.apache.org/jira/browse/HBASE-15741 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.3.0 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Blocker > Fix For: 1.3.0 > > Attachments: HBASE-15741.001.patch, HBASE-15741.002.patch > > > Attempting to run a map reduce job with a 1.3 client on a secure cluster > running 1.2 is failing when making the coprocessor rpc to obtain a delegation > token: > {noformat} > Exception in thread "main" > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered > coprocessor service found for name hbase.pb.AuthenticationService in region > hbase:meta,,1 > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7741) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:137) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) > at > 
org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:490) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:209) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:162) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:285) > at > org.apache.hadoop
[jira] [Updated] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-15720: Affects Version/s: 1.2.1 > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267674#comment-15267674 ] Mikhail Antonov commented on HBASE-15720: - 1.3, branch-1 and master aren't affected (at least compilation-wise), just rebuilt those three. > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.2.1 >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them
[ https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-15703: Resolution: Fixed Hadoop Flags: Reviewed Release Note: With the previous deadline mode of RPC scheduling (the implementation in SimpleRpcScheduler, which is basically a FIFO except that long-running scans are de-prioritized) and with the FIFO-based RPC scheduler, clients get CallQueueTooBigException when the RPC call queue is full. With this patch, when the hbase.ipc.server.callqueue.type property is set to "codel" mode, clients will also get CallDroppedException, which means that the request was discarded by the server because it considers itself overloaded and starts to drop requests to avoid going down under the load. Clients will retry upon receiving this exception. It does not clear the MetaCache of region locations. Status: Resolved (was: Patch Available) Thanks [~eclark]. Pushed to master, branch-1, branch-1.3. > Deadline scheduler needs to return to the client info about skipped calls, > not just drop them > - > > Key: HBASE-15703 > URL: https://issues.apache.org/jira/browse/HBASE-15703 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-15703-branch-1.3.v1.patch, > HBASE-15703-branch-1.3.v2.patch > > > In AdaptiveLifoCodelCallQueue we drop calls when we think we're > overloaded; we should instead return CallDroppedException to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
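The behavioral change in the release note above, failing an overloaded call with an explicit exception the client can retry on, rather than silently dropping it, can be sketched as follows. This is not HBase's AdaptiveLifoCodelCallQueue code: the `CallDroppedException` class and the single-threshold overload test are simplifications (CoDel proper tracks the minimum queue sojourn time over an interval):

```java
// Hypothetical sketch of CoDel-style overload signaling: when a call has queued
// longer than the target delay, reject it with an explicit exception so the client
// retries, instead of dropping it silently. Names and logic are illustrative.
public class CodelSketch {
    public static class CallDroppedException extends RuntimeException {
        public CallDroppedException(String msg) { super(msg); }
    }

    private final long targetDelayMs; // queue time we consider acceptable

    public CodelSketch(long targetDelayMs) { this.targetDelayMs = targetDelayMs; }

    /** Called per dequeued request with how long it sat in the queue. */
    public void admit(long queueTimeMs) {
        if (queueTimeMs > targetDelayMs) {
            throw new CallDroppedException(
                "server overloaded, call dropped after " + queueTimeMs + "ms in queue");
        }
    }
}
```

The point of the explicit exception, per the release note, is that the client's retry logic can distinguish "server shed load" from other failures and, unlike some errors, need not invalidate its cached region locations.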
[jira] [Updated] (HBASE-15751) Fixed HBase compilation failure with Zookeeper 3.5 and bumped HBase to use zookeeper 3.5
[ https://issues.apache.org/jira/browse/HBASE-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufeng Jiang updated HBASE-15751: - Priority: Minor (was: Major) > Fixed HBase compilation failure with Zookeeper 3.5 and bumped HBase to use > zookeeper 3.5 > > > Key: HBASE-15751 > URL: https://issues.apache.org/jira/browse/HBASE-15751 > Project: HBase > Issue Type: Task > Components: Zookeeper >Affects Versions: master >Reporter: Yufeng Jiang >Priority: Minor > Fix For: master > > Attachments: HBASE-15751.patch > > > From zookeeper 3.5 and onwards, runFromConfig(QuorumPeerConfig config) method > throws AdminServerException. > HBase uses runFromConfig in HQuorumPeer.java and hence needs to throw this > exception as well. > I've created a patch to make HBase compatible with zookeeper-3.5.1-alpha. > However, since zookeeper 3.5+ does not have a stable version yet, I don't > think we should commit this patch. Instead, I suggest using this JIRA to > track this issue. Once zookeeper releases stable version of 3.5+, I could > create another patch to bump the zookeeper version in HBase trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267612#comment-15267612 ] Mikhail Antonov commented on HBASE-15741: - LGTM. CoprocessorRpcUtils needs license header. > TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3 > --- > > Key: HBASE-15741 > URL: https://issues.apache.org/jira/browse/HBASE-15741 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.3.0 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Blocker > Fix For: 1.3.0 > > Attachments: HBASE-15741.001.patch > > > Attempting to run a map reduce job with a 1.3 client on a secure cluster > running 1.2 is failing when making the coprocessor rpc to obtain a delegation > token: > {noformat} > Exception in thread "main" > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered > coprocessor service found for name hbase.pb.AuthenticationService in region > hbase:meta,,1 > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7741) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:137) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) > at > org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) > at > 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:490) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:209) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:162) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:285) > at > org.apache.hadoop.hbase.mapreduce.TableMapRedu
[jira] [Reopened] (HBASE-15720) Print row locks at the debug dump page
[ https://issues.apache.org/jira/browse/HBASE-15720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reopened HBASE-15720: - Reopening. This broke compilation at least on branch-1.2. Please fix ASAP, and in the future make sure the build passes (at least with {{-DskipTests}}) prior to pushing backports. > Print row locks at the debug dump page > -- > > Key: HBASE-15720 > URL: https://issues.apache.org/jira/browse/HBASE-15720 > Project: HBase > Issue Type: Improvement >Reporter: Enis Soztutar >Assignee: Heng Chen > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2, 0.98.20, 1.0.5 > > Attachments: 4742C21D-B9CE-4921-9B32-CC319488EC64.png, > HBASE-15720.patch > > > We had to debug cases where some handlers are holding row locks for an > extended time (and maybe leak) and other handlers are getting timeouts for > obtaining row locks. > We should add row lock information at the debug page at the RS UI to be able > to live-debug such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Status: Patch Available (was: Open) > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
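The failure mode above (a codec class named in configuration missing from the task classpath) can be surfaced much earlier by resolving the class before the job launches. The helper below is an illustrative sketch of that up-front check only; the actual HBASE-15752 fix goes further and ships the codec's jar as a job dependency:

```java
// Illustrative sketch: resolve the WAL codec class named in configuration before
// submitting the job, so a missing jar fails fast with a clear message instead of
// a ClassNotFoundException deep inside a map task. Method name and message are
// assumptions, not the actual WALPlayer change.
public class CodecCheck {
    public static Class<?> resolveCodec(String codecClassName) {
        try {
            return Class.forName(codecClassName);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(
                "WAL codec " + codecClassName + " is not on the classpath; "
                + "add its jar to the job's dependencies", e);
        }
    }
}
```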
[jira] [Updated] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
[ https://issues.apache.org/jira/browse/HBASE-15752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15752: --- Attachment: 15752.v1.patch > ClassNotFoundException is encountered when custom WAL codec is not found in > WALPlayer job > - > > Key: HBASE-15752 > URL: https://issues.apache.org/jira/browse/HBASE-15752 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: 15752.v1.patch > > > [~cartershanklin] reported the following when he tried out back / restore > feature in a Phoenix enabled deployment: > {code} > 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot > get log reader > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) > Caused by: java.lang.UnsupportedOperationException: Unable to find > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at > 
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) > at > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) > at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) > ... 12 more > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > {code} > This was due to the IndexedWALEditCodec (specified thru > hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop > classpath. > WALPlayer should handle this situation better by adding the jar for > IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15752) ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job
Ted Yu created HBASE-15752: -- Summary: ClassNotFoundException is encountered when custom WAL codec is not found in WALPlayer job Key: HBASE-15752 URL: https://issues.apache.org/jira/browse/HBASE-15752 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu [~cartershanklin] reported the following when he tried out back / restore feature in a Phoenix enabled deployment: {code} 2016-05-02 18:57:58,578 FATAL [IPC Server handler 2 on 38194] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1462215011294_0001_m_00_0 - exited : java.io. IOException: Cannot get log reader at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:344) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:266) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:254) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:403) at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:152) at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1708) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) Caused by: java.lang.UnsupportedOperationException: Unable to find org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec at org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:36) at org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:103) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.getCodec(ProtobufLogReader.java:282) at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initAfterCompression(ProtobufLogReader.java:292) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:82) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:149) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:301) ... 12 more Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) {code} This was due to the IndexedWALEditCodec (specified thru hbase.regionserver.wal.codec) used by Phoenix being absent in hadoop classpath. WALPlayer should handle this situation better by adding the jar for IndexedWALEditCodec class to mapreduce job dependency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15281) Allow the FileSystem inside HFileSystem to be wrapped
[ https://issues.apache.org/jira/browse/HBASE-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-15281: Resolution: Fixed Release Note: This patch adds a new configuration property, hbase.fs.wrapper. If provided, it should be the fully qualified class name of the class used as a pluggable wrapper for HFileSystem. This may be useful for specific debugging/tracing needs. Status: Resolved (was: Patch Available) > Allow the FileSystem inside HFileSystem to be wrapped > - > > Key: HBASE-15281 > URL: https://issues.apache.org/jira/browse/HBASE-15281 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration, hbase >Affects Versions: 2.0.0, 1.2.0 >Reporter: Rajesh Nishtala >Assignee: Rajesh Nishtala > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15281-v1.patch > > > It would be very useful for us to be able to wrap the filesystems > encapsulated by HFileSystem with other FilterFileSystems. This allows for > more detailed logging of the operations to the DFS. Internally, the data > logged from this method has allowed us to show application engineers where > their schemas are inefficient and induce too much IO. This patch will just > allow the filesystem to be pluggable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
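The wrapper mechanism described in the release note above follows a common reflective decorator pattern: a configuration property names a wrapper class, which is instantiated around the real instance via a constructor that takes the wrapped object. A self-contained sketch, where the `Fs` interface and `CountingFs` wrapper are stand-ins (not HBase's FileSystem/HFileSystem APIs):

```java
import java.lang.reflect.Constructor;

// Sketch of the hbase.fs.wrapper idea: instantiate a config-named wrapper class
// reflectively around the real instance. Fs/LocalFs/CountingFs are illustrative
// stand-ins for FileSystem and a tracing FilterFileSystem.
public class WrapperDemo {
    public interface Fs { byte[] read(String path); }

    public static final class LocalFs implements Fs {
        public byte[] read(String path) { return new byte[] {1, 2, 3}; }
    }

    /** Wrapper that counts reads; a real wrapper might log or trace each operation. */
    public static final class CountingFs implements Fs {
        final Fs delegate;
        public int reads = 0;
        public CountingFs(Fs delegate) { this.delegate = delegate; }
        public byte[] read(String path) { reads++; return delegate.read(path); }
    }

    /** Wrap via a fully qualified class name, as a config property would supply. */
    public static Fs wrap(Fs inner, String wrapperClassName) {
        try {
            Class<?> cls = Class.forName(wrapperClassName);
            Constructor<?> ctor = cls.getConstructor(Fs.class);
            return (Fs) ctor.newInstance(inner);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot load wrapper " + wrapperClassName, e);
        }
    }
}
```

The design keeps the base system oblivious to the wrapper: every operation flows through the decorator, which is where per-operation logging or IO accounting can live.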
[jira] [Commented] (HBASE-15281) Allow the FileSystem inside HFileSystem to be wrapped
[ https://issues.apache.org/jira/browse/HBASE-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267588#comment-15267588 ] Mikhail Antonov commented on HBASE-15281: - Pushed Rajesh's patch with slight changes to master, branch-1, branch-1.3. Thanks [~rajesh0042]! > Allow the FileSystem inside HFileSystem to be wrapped > - > > Key: HBASE-15281 > URL: https://issues.apache.org/jira/browse/HBASE-15281 > Project: HBase > Issue Type: New Feature > Components: Filesystem Integration, hbase >Affects Versions: 2.0.0, 1.2.0 >Reporter: Rajesh Nishtala >Assignee: Rajesh Nishtala > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15281-v1.patch > > > It would be very useful for us to be able to wrap the filesystems > encapsulated by HFileSystem with other FilterFileSystems. This allows for > more detailed logging of the operations to the DFS. Internally, the data > logged from this method has allowed us to show application engineers where > their schemas are inefficient and induce too much IO. This patch will just > allow the filesystem to be pluggable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15749) Shade guava dependency
[ https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267581#comment-15267581 ] Nick Dimiduk commented on HBASE-15749: -- We need to get to a place where we only ship a thoroughly shaded client. Don't make downstreamers adapt to our crazy hadoop dependencies. I've burned a lot of time on this problem up and down the stack over the last couple months; I'm convinced this is the only way forward. bq. I don't understand what this issue is proposing. Agreed, context would be helpful. I don't think we want to be in the business of partial shading here, full shading there. > Shade guava dependency > -- > > Key: HBASE-15749 > URL: https://issues.apache.org/jira/browse/HBASE-15749 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > HBase codebase uses Guava library extensively. > There have been JIRAs such as HBASE-14963 which tried to make compatibility > story around Guava better. > Long term fix, as suggested over in HBASE-14963, is to shade Guava dependency. > Future use of Guava in HBase would be more secure once shading is done. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
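Mechanically, shading (e.g. with the maven-shade-plugin) rewrites a dependency's package names at build time so the bundled copy cannot collide with whatever version Hadoop or the user's application brings. The renaming rule on class names can be shown in a few lines; the shaded prefix used here is illustrative, not necessarily what HBase would choose:

```java
// Illustrative rendering of a shade relocation rule on class names: the shaded
// prefix "org.apache.hadoop.hbase.shaded." is an assumption for this sketch.
public class RelocateSketch {
    public static String relocate(String className) {
        String from = "com.google.common.";
        String to = "org.apache.hadoop.hbase.shaded.com.google.common.";
        return className.startsWith(from)
            ? to + className.substring(from.length())
            : className;
    }
}
```

Because the relocated classes live under a distinct package, a downstream application can depend on any Guava version it likes without touching the copy HBase ships.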
[jira] [Updated] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling updated HBASE-15741: -- Status: Patch Available (was: Open) > TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3 > --- > > Key: HBASE-15741 > URL: https://issues.apache.org/jira/browse/HBASE-15741 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.3.0 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Blocker > Fix For: 1.3.0 > > Attachments: HBASE-15741.001.patch > > > Attempting to run a map reduce job with a 1.3 client on a secure cluster > running 1.2 is failing when making the coprocessor rpc to obtain a delegation > token: > {noformat} > Exception in thread "main" > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered > coprocessor service found for name hbase.pb.AuthenticationService in region > hbase:meta,,1 > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7741) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:137) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) > at > org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) > at > 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:490) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:209) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:162) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:285) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:86) > at >
[jira] [Updated] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling updated HBASE-15741: -- Attachment: HBASE-15741.001.patch Attaching a patch that makes coprocessor services in the HBase protobuf package (hbase.pb) register with unqualified names. This will keep consistent operation across the change in HBASE-14077. All other services continue to use fully-qualified names, which should still protect against collisions in all other cases. > TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3 > --- > > Key: HBASE-15741 > URL: https://issues.apache.org/jira/browse/HBASE-15741 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.3.0 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Blocker > Fix For: 1.3.0 > > Attachments: HBASE-15741.001.patch > > > Attempting to run a map reduce job with a 1.3 client on a secure cluster > running 1.2 is failing when making the coprocessor rpc to obtain a delegation > token: > {noformat} > Exception in thread "main" > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered > coprocessor service found for name hbase.pb.AuthenticationService in region > hbase:meta,,1 > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7741) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at 
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:137) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > 
at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) > at > org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:490) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:209) > at > org.apache.hadoop.hbase.map
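The naming rule Gary's patch describes, services in HBase's own protobuf package register under their unqualified name while all other coprocessor services keep fully qualified names, can be sketched as a small mapping function. The helper below is illustrative only, not the patch's actual code:

```java
// Sketch of the registration-name rule from HBASE-15741: strip the hbase.pb.
// package prefix so 1.2 servers and 1.3 clients agree on names like
// "AuthenticationService"; other services stay fully qualified to avoid collisions.
public class ServiceNameRule {
    public static String registrationName(String fullServiceName) {
        String hbasePkg = "hbase.pb.";
        if (fullServiceName.startsWith(hbasePkg)) {
            return fullServiceName.substring(hbasePkg.length());
        }
        return fullServiceName;
    }
}
```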
[jira] [Assigned] (HBASE-15741) TokenProvider coprocessor RPC incompatible between 1.2 and 1.3
[ https://issues.apache.org/jira/browse/HBASE-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Helmling reassigned HBASE-15741: - Assignee: Gary Helmling > TokenProvider coprocessor RPC incompatibile between 1.2 and 1.3 > --- > > Key: HBASE-15741 > URL: https://issues.apache.org/jira/browse/HBASE-15741 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 1.3.0 >Reporter: Gary Helmling >Assignee: Gary Helmling >Priority: Blocker > Fix For: 1.3.0 > > > Attempting to run a map reduce job with a 1.3 client on a secure cluster > running 1.2 is failing when making the coprocessor rpc to obtain a delegation > token: > {noformat} > Exception in thread "main" > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: > org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered > coprocessor service found for name hbase.pb.AuthenticationService in region > hbase:meta,,1 > at > org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7741) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at 
java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1631) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:137) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:108) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at > org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) > at > org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340) > at > org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) > at > org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:490) > at > 
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:209) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:162) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:285) > at > org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:86) > at > org.apache.hadoop.hbase.mapreduce.CellCounter.create
[jira] [Updated] (HBASE-15751) Fixed HBase compilation failure with Zookeeper 3.5 and bumped HBase to use zookeeper 3.5
[ https://issues.apache.org/jira/browse/HBASE-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufeng Jiang updated HBASE-15751: - Summary: Fixed HBase compilation failure with Zookeeper 3.5 and bumped HBase to use zookeeper 3.5 (was: Fixed HBase compilation failed with Zookeeper 3.5 and bump HBase to use zookeeper 3.5) > Fixed HBase compilation failure with Zookeeper 3.5 and bumped HBase to use > zookeeper 3.5 > > > Key: HBASE-15751 > URL: https://issues.apache.org/jira/browse/HBASE-15751 > Project: HBase > Issue Type: Task > Components: Zookeeper >Affects Versions: master >Reporter: Yufeng Jiang > Fix For: master > > Attachments: HBASE-15751.patch > > > From zookeeper 3.5 and onwards, runFromConfig(QuorumPeerConfig config) method > throws AdminServerException. > HBase uses runFromConfig in HQuorumPeer.java and hence needs to throw this > exception as well. > I've created a patch to make HBase compatible with zookeeper-3.5.1-alpha. > However, since zookeeper 3.5+ does not have a stable version yet, I don't > think we should commit this patch. Instead, I suggest using this JIRA to > track this issue. Once zookeeper releases stable version of 3.5+, I could > create another patch to bump the zookeeper version in HBase trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15751) Fixed HBase compilation failed with Zookeeper 3.5 and bump HBase to use zookeeper 3.5
[ https://issues.apache.org/jira/browse/HBASE-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufeng Jiang updated HBASE-15751: - Attachment: HBASE-15751.patch Attached patch for zookeeper 3.5.1-alpha for anyone to play with. > Fixed HBase compilation failed with Zookeeper 3.5 and bump HBase to use > zookeeper 3.5 > - > > Key: HBASE-15751 > URL: https://issues.apache.org/jira/browse/HBASE-15751 > Project: HBase > Issue Type: Task > Components: Zookeeper >Affects Versions: master >Reporter: Yufeng Jiang > Fix For: master > > Attachments: HBASE-15751.patch > > > From zookeeper 3.5 and onwards, runFromConfig(QuorumPeerConfig config) method > throws AdminServerException. > HBase uses runFromConfig in HQuorumPeer.java and hence needs to throw this > exception as well. > I've created a patch to make HBase compatible with zookeeper-3.5.1-alpha. > However, since zookeeper 3.5+ does not have a stable version yet, I don't > think we should commit this patch. Instead, I suggest using this JIRA to > track this issue. Once zookeeper releases stable version of 3.5+, I could > create another patch to bump the zookeeper version in HBase trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15751) Fixed HBase compilation failed with Zookeeper 3.5 and bump HBase to use zookeeper 3.5
Yufeng Jiang created HBASE-15751: Summary: Fixed HBase compilation failed with Zookeeper 3.5 and bump HBase to use zookeeper 3.5 Key: HBASE-15751 URL: https://issues.apache.org/jira/browse/HBASE-15751 Project: HBase Issue Type: Task Components: Zookeeper Affects Versions: master Reporter: Yufeng Jiang Fix For: master >From zookeeper 3.5 and onwards, runFromConfig(QuorumPeerConfig config) method >throws AdminServerException. HBase uses runFromConfig in HQuorumPeer.java and hence needs to throw this exception as well. I've created a patch to make HBase compatible with zookeeper-3.5.1-alpha. However, since zookeeper 3.5+ does not have a stable version yet, I don't think we should commit this patch. Instead, I suggest using this JIRA to track this issue. Once zookeeper releases stable version of 3.5+, I could create another patch to bump the zookeeper version in HBase trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
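The change the description calls for is small: ZooKeeper 3.5's runFromConfig(QuorumPeerConfig) declares AdminServerException, so the HBase-side launcher must propagate it. A hedged, illustrative sketch follows — the method body mirrors what an HQuorumPeer-style launcher typically does, and the exact signatures in the attached patch may differ:

```
// Sketch only: propagate ZooKeeper 3.5's AdminServerException from
// runFromConfig() up through the HBase-side launcher in HQuorumPeer.
private static void runZKServer(QuorumPeerConfig zkConfig)
    throws UnknownHostException, IOException, AdminServerException {
  if (zkConfig.isDistributed()) {
    new QuorumPeerMain().runFromConfig(zkConfig);          // quorum mode
  } else {
    ServerConfig serverConfig = new ServerConfig();
    serverConfig.readFrom(zkConfig);
    new ZooKeeperServerMain().runFromConfig(serverConfig); // standalone mode
  }
}
```

Callers of this method would likewise need to declare or handle AdminServerException, which is why the compilation failure surfaces only when building against 3.5+.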
[jira] [Commented] (HBASE-15749) Shade guava dependency
[ https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267532#comment-15267532 ] Enis Soztutar commented on HBASE-15749: --- Client-side, we already have shaded jars. I don't understand what this issue is proposing. Server-side, I don't think we should do shading for coprocessors. It is the coprocessor that has to adapt to us, not the other way around. If a coprocessor implementation wants to depend on a different guava, they can do their own shading. > Shade guava dependency > -- > > Key: HBASE-15749 > URL: https://issues.apache.org/jira/browse/HBASE-15749 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > HBase codebase uses Guava library extensively. > There have been JIRAs such as HBASE-14963 which tried to make compatibility > story around Guava better. > Long term fix, as suggested over in HBASE-14963, is to shade Guava dependency. > Future use of Guava in HBase would be more secure once shading is done. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15750) Add on meta deserialization
Elliott Clark created HBASE-15750: - Summary: Add on meta deserialization Key: HBASE-15750 URL: https://issues.apache.org/jira/browse/HBASE-15750 Project: HBase Issue Type: Sub-task Reporter: Elliott Clark -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-15748) Don't link in static libunwind.
[ https://issues.apache.org/jira/browse/HBASE-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark resolved HBASE-15748. --- Resolution: Fixed Assignee: Elliott Clark > Don't link in static libunwind. > --- > > Key: HBASE-15748 > URL: https://issues.apache.org/jira/browse/HBASE-15748 > Project: HBase > Issue Type: Sub-task >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HBASE-15748.patch > > > A static libunwind compiled with gcc prevents clang from catching exceptions. > So just add the dynamic one. :-/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15749) Shade guava dependency
[ https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267509#comment-15267509 ] Sean Busbey commented on HBASE-15749: - I presume this means all of our internal use. This would remove it from conflict with e.g. coprocessors or other server-side third party pluggable components. > Shade guava dependency > -- > > Key: HBASE-15749 > URL: https://issues.apache.org/jira/browse/HBASE-15749 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > HBase codebase uses Guava library extensively. > There have been JIRAs such as HBASE-14963 which tried to make compatibility > story around Guava better. > Long term fix, as suggested over in HBASE-14963, is to shade Guava dependency. > Future use of Guava in HBase would be more secure once shading is done. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14920) Compacting Memstore
[ https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267488#comment-15267488 ] Eshcar Hillel commented on HBASE-14920: --- New patch is available. Also available in RB https://reviews.apache.org/r/45080/. I changed the setting of the in-memory flush threshold according to what [~anoop.hbase] has suggested. Anoop also raised a concern that flushing the tail of the compaction pipeline is not enough. As I see it, a call to flush data to disk aims to reduce the memory held by the region, and this goal is achieved. Furthermore, in most cases the largest portion of the data resides in the tail segment of the pipeline, therefore almost all the data will be flushed to disk. Finally, this wouldn’t be the first case where you need to flush more than once in order to completely empty the memory - see HRegion#doClose() bq. After an in memory flush, we will reduce some heap overhead and will reduce that delta from memstore size. I can see a call to reduce the variable what we keep at RS level. Another we have at every HRegion level, do we update that also? After an in-memory compaction is completed, the memstore invokes RegionServicesForStores#addAndGetGlobalMemstoreSize(size), which then invokes HRegion#addAndGetGlobalMemstoreSize(size), which updates the region counter and takes care to update the RegionServer counter.
{code}
public long addAndGetGlobalMemstoreSize(long memStoreSize) {
  if (this.rsAccounting != null) {
    rsAccounting.addAndGetGlobalMemstoreSize(memStoreSize);
  }
  return this.memstoreSize.addAndGet(memStoreSize);
}
{code}
None of the counters (RS, region, segment) are new; all of them existed before this patch, so I fail to see the problem. bq. 
No need of a region lock in this case. The region lock is *only* held while flushing the active segment into the pipeline, and *not* during compaction:
{code}
void flushInMemory() throws IOException {
  // Phase I: Update the pipeline
  getRegionServices().blockUpdates();
  try {
    MutableSegment active = getActive();
    LOG.info("IN-MEMORY FLUSH: Pushing active segment into compaction pipeline, "
        + "and initiating compaction.");
    pushActiveToPipeline(active);
  } finally {
    getRegionServices().unblockUpdates();
  }
  ...
{code}
> Compacting Memstore > --- > > Key: HBASE-14920 > URL: https://issues.apache.org/jira/browse/HBASE-14920 > Project: HBase > Issue Type: Sub-task >Reporter: Eshcar Hillel >Assignee: Eshcar Hillel > Attachments: HBASE-14920-V01.patch, HBASE-14920-V02.patch, > HBASE-14920-V03.patch, HBASE-14920-V04.patch, HBASE-14920-V05.patch, > move.to.junit4.patch > > > Implementation of a new compacting memstore with non-optimized immutable > segment representation -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14920) Compacting Memstore
[ https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eshcar Hillel updated HBASE-14920: -- Attachment: HBASE-14920-V05.patch > Compacting Memstore > --- > > Key: HBASE-14920 > URL: https://issues.apache.org/jira/browse/HBASE-14920 > Project: HBase > Issue Type: Sub-task >Reporter: Eshcar Hillel >Assignee: Eshcar Hillel > Attachments: HBASE-14920-V01.patch, HBASE-14920-V02.patch, > HBASE-14920-V03.patch, HBASE-14920-V04.patch, HBASE-14920-V05.patch, > move.to.junit4.patch > > > Implementation of a new compacting memstore with non-optimized immutable > segment representation -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them
[ https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267445#comment-15267445 ] Elliott Clark commented on HBASE-15703: --- +1 lgtm > Deadline scheduler needs to return to the client info about skipped calls, > not just drop them > - > > Key: HBASE-15703 > URL: https://issues.apache.org/jira/browse/HBASE-15703 > Project: HBase > Issue Type: Bug > Components: IPC/RPC >Affects Versions: 1.3.0 >Reporter: Mikhail Antonov >Assignee: Mikhail Antonov >Priority: Critical > Fix For: 1.3.0 > > Attachments: HBASE-15703-branch-1.3.v1.patch, > HBASE-15703-branch-1.3.v2.patch > > > In AdaptiveLifoCodelCallQueue we drop the calls when we think we're > overloaded, we should instead return CallDroppedException to the client or > something. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15749) Shade guava dependency
[ https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267442#comment-15267442 ] Elliott Clark commented on HBASE-15749: --- Isn't that exactly what the hbase-shaded pom's do ? > Shade guava dependency > -- > > Key: HBASE-15749 > URL: https://issues.apache.org/jira/browse/HBASE-15749 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > HBase codebase uses Guava library extensively. > There have been JIRAs such as HBASE-14963 which tried to make compatibility > story around Guava better. > Long term fix, as suggested over in HBASE-14963, is to shade Guava dependency. > Future use of Guava in HBase would be more secure once shading is done. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
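For context on the hbase-shaded poms mentioned above: shading is done with the maven-shade-plugin's package relocation. The fragment below is a trimmed, illustrative sketch (the real pom carries many more relocations and transformers):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Rewrites Guava's packages inside the shaded jar, so a user's
           classpath can carry its own, different Guava version. -->
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

With this in place, HBase's internal Guava calls are rewritten at package time to the shaded prefix and no longer conflict with whatever Guava a downstream application or coprocessor ships.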
[jira] [Commented] (HBASE-15333) Enhance the filter to handle short, integer, long, float and double
[ https://issues.apache.org/jira/browse/HBASE-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267434#comment-15267434 ] Zhan Zhang commented on HBASE-15333: Thanks for the feedback, and I will restructure the code and send it out for review this week. > Enhance the filter to handle short, integer, long, float and double > --- > > Key: HBASE-15333 > URL: https://issues.apache.org/jira/browse/HBASE-15333 > Project: HBase > Issue Type: Sub-task >Reporter: Zhan Zhang >Assignee: Zhan Zhang > Attachments: HBASE-15333-1.patch, HBASE-15333-2.patch, > HBASE-15333-3.patch, HBASE-15333-4.patch, HBASE-15333-5.patch > > > Currently, the range filter is based on the order of bytes. But for java > primitive type, such as short, int, long, double, float, etc, their order is > not consistent with their byte order, extra manipulation has to be in place > to take care of them correctly. > For example, for the integer range (-100, 100), the filter <= 1, the current > filter will return 0 and 1, and the right return value should be (-100, 1] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
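The inconsistency the issue description mentions can be reproduced in a few lines. This is an illustrative, self-contained sketch — the class and helper names are made up, but HBase's Bytes.toBytes(int)/Bytes.compareTo behave like these helpers (big-endian two's-complement encoding, unsigned lexicographic comparison):

```java
import java.nio.ByteBuffer;

public class SignedBytesDemo {
    // Big-endian encoding of an int, like HBase's Bytes.toBytes(int).
    public static byte[] toBytes(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Unsigned lexicographic comparison: the order a byte-based filter sees.
    public static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int x = a[i] & 0xFF, y = b[i] & 0xFF;
            if (x != y) return Integer.compare(x, y);
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        // -100 encodes as 0xFFFFFF9C; its leading 0xFF makes it sort AFTER
        // 1 (0x00000001) in byte order, even though -100 < 1 as a signed int.
        System.out.println(compareUnsigned(toBytes(-100), toBytes(1)) > 0); // prints "true"
    }
}
```

This is why a byte-order filter for "<= 1" over (-100, 100) returns only 0 and 1: every negative int has its sign bit set, so its encoding sorts above all non-negative encodings.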
[jira] [Updated] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clara Xiong updated HBASE-15337: Attachment: HBASE-15337-v2.patch Updated based on Ted Yu's feedback. [~enis] any more concerns? If everything is good, can we commit? > Document FIFO and date tiered compaction in the book > > > Key: HBASE-15337 > URL: https://issues.apache.org/jira/browse/HBASE-15337 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Enis Soztutar > Fix For: 2.0.0, 1.3.0 > > Attachments: HBASE-15337-v1.patch, HBASE-15337-v2.patch, > HBASE-15337.patch > > > We have two new compaction algorithms FIFO and Date tiered that are for time > series data. We should document how to use them in the book. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267401#comment-15267401 ] Mikhail Antonov commented on HBASE-15666: - I see, sure. > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > Fix For: 1.4.0 > > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267397#comment-15267397 ] Sean Busbey commented on HBASE-15666: - because without this, no one trying to use the shaded version of our client libraries can actually test. please leave as critical. > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > Fix For: 1.4.0 > > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267394#comment-15267394 ] Mikhail Antonov commented on HBASE-15666: - I'd also reduce the severity to major; I'm not sure why that is critical. > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > Fix For: 1.4.0 > > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267391#comment-15267391 ] Mikhail Antonov commented on HBASE-15666: - Thanks [~busbey], moved out to 1.4, anyone who wants to pick it please feel free to bring it back to 1.3. > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > Fix For: 1.4.0 > > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15041) Clean up javadoc errors and reenable jdk8 linter
[ https://issues.apache.org/jira/browse/HBASE-15041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15041: Fix Version/s: (was: 1.3.0) no, I don't expect this done any time soon. > Clean up javadoc errors and reenable jdk8 linter > > > Key: HBASE-15041 > URL: https://issues.apache.org/jira/browse/HBASE-15041 > Project: HBase > Issue Type: Umbrella > Components: build, documentation >Affects Versions: 2.0.0, 1.2.0, 1.3.0 >Reporter: Sean Busbey > Fix For: 2.0.0 > > > umbrella to clean up our various errors according to the jdk8 javadoc linter. > plan is a sub-task per module. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-15666: Fix Version/s: (was: 1.3.0) 1.4.0 > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > Fix For: 1.4.0 > > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util
[ https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267386#comment-15267386 ] Sean Busbey commented on HBASE-15666: - I would love it in 1.3, but I don't have the time. Presuming no one else self-assigns I'd say bump to 1.4. > shaded dependencies for hbase-testing-util > -- > > Key: HBASE-15666 > URL: https://issues.apache.org/jira/browse/HBASE-15666 > Project: HBase > Issue Type: New Feature > Components: test >Affects Versions: 1.1.0, 1.2.0 >Reporter: Sean Busbey >Priority: Critical > Fix For: 1.3.0 > > > Folks that make use of our shaded client but then want to test things using > the hbase-testing-util end up getting all of our dependencies again in the > test scope. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15727) Canary Tool for Zookeeper
[ https://issues.apache.org/jira/browse/HBASE-15727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267350#comment-15267350 ] Hadoop QA commented on HBASE-15727: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} master passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 44s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 136m 6s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.security.access.TestNamespaceCommands | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801776/HBASE-15727-v3.patch | | JIRA Issue | HBASE-15727 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh | | git revision | master / d113058 | | Default Java | 1.7.0_79 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 | | findbugs | v3.0.0 | | unit | https://builds
[jira] [Commented] (HBASE-15749) Shade guava dependency
[ https://issues.apache.org/jira/browse/HBASE-15749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267328#comment-15267328 ] Sean Busbey commented on HBASE-15749: - Avro did this as well, would recommend looking at their approach. > Shade guava dependency > -- > > Key: HBASE-15749 > URL: https://issues.apache.org/jira/browse/HBASE-15749 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu > > HBase codebase uses Guava library extensively. > There have been JIRAs such as HBASE-14963 which tried to make compatibility > story around Guava better. > Long term fix, as suggested over in HBASE-14963, is to shade Guava dependency. > Future use of Guava in HBase would be more secure once shading is done. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15737) Remove use of Guava Stopwatch
[ https://issues.apache.org/jira/browse/HBASE-15737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15737: --- Resolution: Not A Problem Status: Resolved (was: Patch Available) > Remove use of Guava Stopwatch > - > > Key: HBASE-15737 > URL: https://issues.apache.org/jira/browse/HBASE-15737 > Project: HBase > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: 15737.v1.txt, 15737.v2.txt, 15737.v3.txt > > > HBASE-14963 removed reference to Guava Stopwatch from hbase-client module. > However, there're still 3 classes referring to Guava Stopwatch : > hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java:import > com.google.common.base.Stopwatch; > hbase-server/src/main/java/org/apache/hadoop/hbase/util/JvmPauseMonitor.java:import > com.google.common.base.Stopwatch; > hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java:import > com.google.common.base.Stopwatch; > We should remove reference to Guava Stopwatch. > hadoop is no longer referencing Guava Stopwatch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
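For reference, the kind of drop-in replacement such a removal would entail is small. This is a hedged sketch, not the attached patch: a minimal nanoTime-based timer covering the subset of Guava's Stopwatch that classes like JvmPauseMonitor typically need (class and method names are hypothetical):

```java
public class SimpleStopwatch {
    private long startNanos;

    // Begin timing; returns this so calls can be chained, Guava-style.
    public SimpleStopwatch start() {
        startNanos = System.nanoTime();
        return this;
    }

    // Milliseconds elapsed since start(). System.nanoTime() is monotonic,
    // unlike System.currentTimeMillis(), so wall-clock adjustments
    // (NTP steps, manual changes) cannot produce negative durations.
    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }
}
```

Usage mirrors the Guava idiom: `long ms = new SimpleStopwatch().start().elapsedMillis();` — though as the resolution ("Not A Problem") and the following comment indicate, the project chose to keep Guava's Stopwatch rather than hand-roll this.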
[jira] [Created] (HBASE-15749) Shade guava dependency
Ted Yu created HBASE-15749:
---------------------------

             Summary: Shade guava dependency
                 Key: HBASE-15749
                 URL: https://issues.apache.org/jira/browse/HBASE-15749
             Project: HBase
          Issue Type: Improvement
            Reporter: Ted Yu

The HBase codebase uses the Guava library extensively.
There have been JIRAs, such as HBASE-14963, which tried to improve the compatibility story around Guava.
The long-term fix, as suggested in HBASE-14963, is to shade the Guava dependency.
Future use of Guava in HBase would be more secure once shading is done.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
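[Editor's note: "shading" here means bundling Guava with the classes relocated to a private package, so HBase's Guava cannot clash with a downstream user's copy. A minimal maven-shade-plugin sketch of the idea follows; the relocation prefix org.apache.hadoop.hbase.shaded and where the plugin lives in the build are illustrative assumptions, not the eventual HBase layout.]

```xml
<!-- Sketch only: relocate Guava under a private package at package time.
     The shadedPattern prefix below is an assumption for illustration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation like this in place, HBase bytecode references the shaded package, and a downstream application is free to put any Guava version it likes on the classpath.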
[jira] [Commented] (HBASE-15714) We are calling checkRow() twice in doMiniBatchMutation()
[ https://issues.apache.org/jira/browse/HBASE-15714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267275#comment-15267275 ]

Esteban Gutierrez commented on HBASE-15714:
-------------------------------------------

+1 LGTM.

> We are calling checkRow() twice in doMiniBatchMutation()
> --------------------------------------------------------
>
>                 Key: HBASE-15714
>                 URL: https://issues.apache.org/jira/browse/HBASE-15714
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Enis Soztutar
>            Assignee: Heng Chen
>             Fix For: 2.0.0, 1.4.0
>
>         Attachments: HBASE-15714.patch, HBASE-15714_v1.patch, HBASE-15714_v2.patch
>
> In the {{multi()}} -> {{doMiniBatchMutation()}} code path, we end up calling
> {{checkRow()}} twice: once from {{checkBatchOp()}} and once from {{getRowLock()}}.
> See [~anoop.hbase]'s comments at
> https://issues.apache.org/jira/browse/HBASE-15600?focusedCommentId=15257636&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15257636.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
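[Editor's note: the pattern at issue — validating each row key in a pre-flight pass and then again inside the lock-acquisition path — can be shown with a toy sketch. All names below (processBatch, the checkRow stand-in) are hypothetical; HBase's real checkRow() lives in HRegion, and this is not the attached patch.]

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Toy model of the fix: validate each row key exactly once, in the
// pre-flight pass, rather than again when the row lock is taken.
public class BatchValidationDemo {
    static final int MAX_ROW_LENGTH = Short.MAX_VALUE;

    // Stand-in for HRegion.checkRow(): reject null, empty, or oversized keys.
    static void checkRow(byte[] row) {
        if (row == null || row.length == 0 || row.length > MAX_ROW_LENGTH) {
            throw new IllegalArgumentException("invalid row key");
        }
    }

    // Returns the number of validations performed: one per mutation after the fix.
    static int processBatch(List<byte[]> rows) {
        int validations = 0;
        for (byte[] row : rows) {
            checkRow(row);   // single check; the lock step trusts this result
            validations++;
            // ... acquire the row lock and apply the mutation here ...
        }
        return validations;
    }

    public static void main(String[] args) {
        List<byte[]> rows = Arrays.asList(
            "row-1".getBytes(StandardCharsets.UTF_8),
            "row-2".getBytes(StandardCharsets.UTF_8));
        System.out.println("validations: " + processBatch(rows));
    }
}
```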
[jira] [Commented] (HBASE-15737) Remove use of Guava Stopwatch
[ https://issues.apache.org/jira/browse/HBASE-15737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267247#comment-15267247 ]

stack commented on HBASE-15737:
-------------------------------

bq. I didn't use format-patch - I would be integrating the patch myself.

Eh. The format-patch guidelines are for all contributors, and especially for committers, who are supposed to set an example of good practice for contributors to follow.

Also, this patch is going in the wrong direction. Guava is a high-quality, actively maintained library that we should be doubling down on, not purging from our code base. Your time would be better spent shading the Guava lib -- as suggested in HBASE-14963 -- so the project feels secure making use of Guava.

> Remove use of Guava Stopwatch
> -----------------------------
>
>                 Key: HBASE-15737
>                 URL: https://issues.apache.org/jira/browse/HBASE-15737
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>            Priority: Minor
>         Attachments: 15737.v1.txt, 15737.v2.txt, 15737.v3.txt
>
> HBASE-14963 removed references to Guava's Stopwatch from the hbase-client module.
> However, three classes still refer to Guava's Stopwatch:
> hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java:import com.google.common.base.Stopwatch;
> hbase-server/src/main/java/org/apache/hadoop/hbase/util/JvmPauseMonitor.java:import com.google.common.base.Stopwatch;
> hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java:import com.google.common.base.Stopwatch;
> We should remove these references.
> Hadoop no longer references Guava's Stopwatch.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HBASE-15337) Document FIFO and date tiered compaction in the book
[ https://issues.apache.org/jira/browse/HBASE-15337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267259#comment-15267259 ]

Ted Yu commented on HBASE-15337:
--------------------------------

lgtm

nit:
{code}
+You can configure you date tiers.
{code}
you -> your

> Document FIFO and date tiered compaction in the book
> ----------------------------------------------------
>
>                 Key: HBASE-15337
>                 URL: https://issues.apache.org/jira/browse/HBASE-15337
>             Project: HBase
>          Issue Type: Sub-task
>          Components: documentation
>            Reporter: Enis Soztutar
>             Fix For: 2.0.0, 1.3.0
>
>         Attachments: HBASE-15337-v1.patch, HBASE-15337.patch
>
> We have two new compaction algorithms, FIFO and date tiered, that are for time-series data. We should document how to use them in the book.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
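[Editor's note: for context while the book section is in review, enabling date tiered compaction comes down to selecting the date tiered store engine and tuning its windows. The fragment below is a sketch, not from the patch; the property names follow the HBASE-15181 work, and the values are illustrative — verify both against the reference guide for your HBase version.]

```xml
<!-- Sketch: switch a store to date tiered compaction and size its tiers.
     Values are illustrative; check the HBase book before using them. -->
<property>
  <name>hbase.hstore.engine.class</name>
  <value>org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine</value>
</property>
<property>
  <name>hbase.hstore.compaction.date.tiered.base.window.millis</name>
  <value>21600000</value> <!-- 6-hour base window -->
</property>
<property>
  <name>hbase.hstore.compaction.date.tiered.windows.per.tier</name>
  <value>4</value> <!-- each older tier covers 4x the previous window -->
</property>
```

These can also be set per table or per column family via the shell's alter command rather than cluster-wide.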
[jira] [Updated] (HBASE-15748) Don't link in static libunwind.
[ https://issues.apache.org/jira/browse/HBASE-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elliott Clark updated HBASE-15748:
----------------------------------
    Attachment: HBASE-15748.patch

> Don't link in static libunwind.
> -------------------------------
>
>                 Key: HBASE-15748
>                 URL: https://issues.apache.org/jira/browse/HBASE-15748
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Elliott Clark
>         Attachments: HBASE-15748.patch
>
> A static libunwind compiled with gcc prevents clang from catching exceptions,
> so just link the dynamic one. :-/

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
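[Editor's note: in link-line terms, the change described above amounts to the following sketch. The object file names and library paths are illustrative assumptions, not taken from HBASE-15748.patch.]

```shell
# Problem: linking the static archive libunwind.a (built with gcc) into a
# clang build can break C++ exception unwinding at runtime.

# Static (problematic) form -- forces the .a archive:
#   clang++ -o native-client client.o -l:libunwind.a

# Dynamic form (the direction of this patch) -- resolves to libunwind.so:
clang++ -o native-client client.o -lunwind
```

The fix keeps unwinding tables consistent by letting the dynamic linker supply a libunwind that matches the runtime, instead of baking a gcc-built copy into the clang-produced binary.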