[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004173#comment-16004173 ] Hudson commented on HBASE-8758: --- SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #132 (See [https://builds.apache.org/job/HBase-1.2-JDK7/132/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev b6d1b19a3e5a645c7906fb00870775598f9d1514) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
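The corrected contract is easy to misread in javadoc form, so here is a minimal, self-contained sketch of the null-versus-false semantics described in the issue. The `RegionObserver` and `RegionScanner` types below are simplified stand-ins, not the real HBase coprocessor interfaces: the host returns a scanner supplied by a coprocessor to signal a bypass, and `null` (not `false`) when the default scanner-open operation should proceed.

```java
import java.util.List;

// Simplified stand-ins for the real HBase coprocessor types (illustration only).
interface RegionScanner {}

interface RegionObserver {
    // May return a replacement scanner, or null to leave the default behavior alone.
    RegionScanner preScannerOpen();
}

class CoprocessorHostSketch {
    // Mirrors the corrected comment: returns null (not false) when the default
    // scanner-open operation should NOT be bypassed.
    static RegionScanner preScannerOpen(List<RegionObserver> observers) {
        for (RegionObserver o : observers) {
            RegionScanner s = o.preScannerOpen();
            if (s != null) {
                return s; // a coprocessor supplied its own scanner: bypass the default
            }
        }
        return null; // no bypass requested
    }

    public static void main(String[] args) {
        // With no observers, the host signals "no bypass" with null.
        System.out.println(preScannerOpen(List.of()) == null); // expected: true
    }
}
```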
[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type
[ https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004172#comment-16004172 ] Anoop Sam John commented on HBASE-17343: Do we have a new issue for the compacting memstore problem? Or all in one patch? > Make Compacting Memstore default in 2.0 with BASIC as the default type > -- > > Key: HBASE-17343 > URL: https://issues.apache.org/jira/browse/HBASE-17343 > Project: HBase > Issue Type: New Feature > Components: regionserver >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: Anastasia Braginsky >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, > HBASE-17343-V04.patch, HBASE-17343-V05.patch, HBASE-17343-V06.patch, > HBASE-17343-V07.patch, HBASE-17343-V08.patch, HBASE-17343-V09.patch > > > FYI [~anastas], [~eshcar] and [~ebortnik]. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17931) Assign system tables to servers with highest version
[ https://issues.apache.org/jira/browse/HBASE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004170#comment-16004170 ] Hadoop QA commented on HBASE-17931: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 55s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} | | 
{color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 42s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 44s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 44s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 31s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 2s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 30s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 55s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 20s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 47s {color} | {color:red} The patch causes 14 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 13s {color} | {color:red} The patch causes 14 errors with Hadoop v2.7.2. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 11m 37s {color} | {color:red} The patch causes 14 errors with Hadoop v2.7.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 13m 6s {color} | {color:red} The patch causes 14 errors with Hadoop v3.0.0-alpha2. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 30s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 33s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s {color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 43s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type
[ https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004163#comment-16004163 ] Chia-Ping Tsai commented on HBASE-17343: Sorry for the late reply. The CompactingMemStore has an ACID issue discussed in HBASE-17887. Shall we commit this later? > Make Compacting Memstore default in 2.0 with BASIC as the default type > -- > > Key: HBASE-17343 > URL: https://issues.apache.org/jira/browse/HBASE-17343 > Project: HBase > Issue Type: New Feature > Components: regionserver >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: Anastasia Braginsky >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, > HBASE-17343-V04.patch, HBASE-17343-V05.patch, HBASE-17343-V06.patch, > HBASE-17343-V07.patch, HBASE-17343-V08.patch, HBASE-17343-V09.patch > > > FYI [~anastas], [~eshcar] and [~ebortnik]. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004162#comment-16004162 ] Hudson commented on HBASE-8758: --- SUCCESS: Integrated in Jenkins build HBase-1.2-JDK8 #128 (See [https://builds.apache.org/job/HBase-1.2-JDK8/128/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev b6d1b19a3e5a645c7906fb00870775598f9d1514) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004159#comment-16004159 ] Hudson commented on HBASE-8758: --- SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #173 (See [https://builds.apache.org/job/HBase-1.3-JDK8/173/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev 286394ba636f3be0f27ced00d58227b48ff290e8) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17887) TestAcidGuarantees fails frequently
[ https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-17887: --- Status: Patch Available (was: Open) > TestAcidGuarantees fails frequently > --- > > Key: HBASE-17887 > URL: https://issues.apache.org/jira/browse/HBASE-17887 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Umesh Agashe >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.4.1 > > Attachments: HBASE-17887.branch-1.v0.patch, > HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, > HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, > HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v5.patch, HBASE-17887.ut.patch, HBASE-17887.v0.patch, > HBASE-17887.v1.patch, HBASE-17887.v2.patch, HBASE-17887.v3.patch > > > As per the flaky tests dashboard here: > https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html, > It fails 30% of the time. > While working on HBASE-17863, a few verification builds on patch failed due > to TestAcidGuarantees didn't pass. IMHO, the changes for HBASE-17863 are > unlikely to affect get/ put path. > I ran the test with and without the patch several times locally and found > that TestAcidGuarantees fails without the patch similar number of times. > Opening blocker, considering acid guarantees are critical to HBase. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17887) TestAcidGuarantees fails frequently
[ https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-17887: --- Attachment: HBASE-17887.branch-1.v5.patch bq. So to this we should also pass the memstoreScanners now right? Instead of the ticket? Yes, please see the branch-1.v5 patch. Thanks. > TestAcidGuarantees fails frequently > --- > > Key: HBASE-17887 > URL: https://issues.apache.org/jira/browse/HBASE-17887 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Umesh Agashe >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.4.1 > > Attachments: HBASE-17887.branch-1.v0.patch, > HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, > HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, > HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v5.patch, HBASE-17887.ut.patch, HBASE-17887.v0.patch, > HBASE-17887.v1.patch, HBASE-17887.v2.patch, HBASE-17887.v3.patch > > > As per the flaky tests dashboard here: > https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html, > It fails 30% of the time. > While working on HBASE-17863, a few verification builds on patch failed due > to TestAcidGuarantees didn't pass. IMHO, the changes for HBASE-17863 are > unlikely to affect get/ put path. > I ran the test with and without the patch several times locally and found > that TestAcidGuarantees fails without the patch similar number of times. > Opening blocker, considering acid guarantees are critical to HBase. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004151#comment-16004151 ] Hudson commented on HBASE-8758: --- SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #159 (See [https://builds.apache.org/job/HBase-1.3-JDK7/159/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev 286394ba636f3be0f27ced00d58227b48ff290e8) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17887) TestAcidGuarantees fails frequently
[ https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-17887: --- Status: Open (was: Patch Available) > TestAcidGuarantees fails frequently > --- > > Key: HBASE-17887 > URL: https://issues.apache.org/jira/browse/HBASE-17887 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Umesh Agashe >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.4.1 > > Attachments: HBASE-17887.branch-1.v0.patch, > HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, > HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, > HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.ut.patch, HBASE-17887.v0.patch, HBASE-17887.v1.patch, > HBASE-17887.v2.patch, HBASE-17887.v3.patch > > > As per the flaky tests dashboard here: > https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html, > It fails 30% of the time. > While working on HBASE-17863, a few verification builds on patch failed due > to TestAcidGuarantees didn't pass. IMHO, the changes for HBASE-17863 are > unlikely to affect get/ put path. > I ran the test with and without the patch several times locally and found > that TestAcidGuarantees fails without the patch similar number of times. > Opening blocker, considering acid guarantees are critical to HBase. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17917) Use pread by default for all user scan and switch to streaming read if needed
[ https://issues.apache.org/jira/browse/HBASE-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-17917: -- Attachment: HBASE-17917-v6.patch Rebase. > Use pread by default for all user scan and switch to streaming read if needed > - > > Key: HBASE-17917 > URL: https://issues.apache.org/jira/browse/HBASE-17917 > Project: HBase > Issue Type: Sub-task > Components: scan >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17917.patch, HBASE-17917-v1.patch, > HBASE-17917-v2.patch, HBASE-17917-v2.patch, HBASE-17917-v3.patch, > HBASE-17917-v4.patch, HBASE-17917-v5.patch, HBASE-17917-v6.patch > > > As said in the parent issue. We need some benchmark here first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17931) Assign system tables to servers with highest version
[ https://issues.apache.org/jira/browse/HBASE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004132#comment-16004132 ] Phil Yang commented on HBASE-17931: --- https://reviews.apache.org/r/59126 > Assign system tables to servers with highest version > > > Key: HBASE-17931 > URL: https://issues.apache.org/jira/browse/HBASE-17931 > Project: HBase > Issue Type: Bug > Components: scan >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Blocker > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17931.v01.patch > > > In branch-1 and master we have some improvement and new features on scanning > which is not compatible. > A client of old version to a server of new version is compatible (must be a > bug if not, maybe need some test?). > A client of new version may not be able to read from a server of old version > correctly (because of scan limit, moreResults flag, etc), which is ok for > major/minor upgrade and we can tell users to upgrade server before upgrading > client. But RS also use scan to read meta. If meta table is in RS of old > version, all RSs of new version may have trouble while scanning meta table. > So we should make sure meta table always in servers of new version. Force > meta table in Master and upgrade Master first, or assign meta table in region > servers with latest version? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
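The assignment policy proposed above can be sketched as a filter over the live server list: restrict system-table regions to the servers running the highest version. This is an illustration of the idea, not the actual HBASE-17931 balancer code; the `Server` record and the lexicographic version comparison are assumptions (real code would need a numeric version comparator so that, e.g., 1.10.0 sorts above 1.4.0).

```java
import java.util.*;

class SystemTableAssignSketch {
    // Hypothetical server record: name plus a version string (assumption for illustration).
    record Server(String name, String version) {}

    // Keep only the servers on the highest version; system-table regions would
    // then be assigned among these. NOTE: lexicographic comparison is a
    // simplification and mis-orders versions like "1.10.0" vs "1.4.0".
    static List<Server> highestVersionServers(List<Server> servers) {
        String max = servers.stream().map(Server::version)
                .max(Comparator.naturalOrder()).orElse("");
        return servers.stream().filter(s -> s.version().equals(max)).toList();
    }

    public static void main(String[] args) {
        List<Server> live = List.of(
            new Server("rs1", "1.3.2"), new Server("rs2", "1.4.0"),
            new Server("rs3", "1.4.0"));
        // Only the two 1.4.0 servers are eligible for system tables.
        System.out.println(highestVersionServers(live).size()); // expected: 2
    }
}
```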
[jira] [Commented] (HBASE-17917) Use pread by default for all user scan and switch to streaming read if needed
[ https://issues.apache.org/jira/browse/HBASE-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004128#comment-16004128 ] Guanghao Zhang commented on HBASE-17917: +1. bq. And we can also keep improving the trySwitchToStreamRead method to make it more intelligent. Opening a new scanner for streaming read will lead to a new request to the NN, so I look forward to seeing this improvement in the future. > Use pread by default for all user scan and switch to streaming read if needed > - > > Key: HBASE-17917 > URL: https://issues.apache.org/jira/browse/HBASE-17917 > Project: HBase > Issue Type: Sub-task > Components: scan >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17917.patch, HBASE-17917-v1.patch, > HBASE-17917-v2.patch, HBASE-17917-v2.patch, HBASE-17917-v3.patch, > HBASE-17917-v4.patch, HBASE-17917-v5.patch > > > As said in the parent issue. We need some benchmark here first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17931) Assign system tables to servers with highest version
[ https://issues.apache.org/jira/browse/HBASE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Yang updated HBASE-17931: -- Summary: Assign system tables to servers with highest version (was: Assign system table to servers with highest version) > Assign system tables to servers with highest version > > > Key: HBASE-17931 > URL: https://issues.apache.org/jira/browse/HBASE-17931 > Project: HBase > Issue Type: Bug > Components: scan >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Blocker > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17931.v01.patch > > > In branch-1 and master we have some improvement and new features on scanning > which is not compatible. > A client of old version to a server of new version is compatible (must be a > bug if not, maybe need some test?). > A client of new version may not be able to read from a server of old version > correctly (because of scan limit, moreResults flag, etc), which is ok for > major/minor upgrade and we can tell users to upgrade server before upgrading > client. But RS also use scan to read meta. If meta table is in RS of old > version, all RSs of new version may have trouble while scanning meta table. > So we should make sure meta table always in servers of new version. Force > meta table in Master and upgrade Master first, or assign meta table in region > servers with latest version? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17931) Assign system table to servers with highest version
[ https://issues.apache.org/jira/browse/HBASE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Yang updated HBASE-17931: -- Fix Version/s: 1.4.0 Affects Version/s: (was: 1.4.0) Status: Patch Available (was: Open) > Assign system table to servers with highest version > --- > > Key: HBASE-17931 > URL: https://issues.apache.org/jira/browse/HBASE-17931 > Project: HBase > Issue Type: Bug > Components: scan >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Blocker > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17931.v01.patch > > > In branch-1 and master we have some improvement and new features on scanning > which is not compatible. > A client of old version to a server of new version is compatible (must be a > bug if not, maybe need some test?). > A client of new version may not be able to read from a server of old version > correctly (because of scan limit, moreResults flag, etc), which is ok for > major/minor upgrade and we can tell users to upgrade server before upgrading > client. But RS also use scan to read meta. If meta table is in RS of old > version, all RSs of new version may have trouble while scanning meta table. > So we should make sure meta table always in servers of new version. Force > meta table in Master and upgrade Master first, or assign meta table in region > servers with latest version? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17931) Assign system table to servers with highest version
[ https://issues.apache.org/jira/browse/HBASE-17931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Phil Yang updated HBASE-17931: -- Attachment: HBASE-17931.v01.patch Upload an initial patch to see if any UT failed. > Assign system table to servers with highest version > --- > > Key: HBASE-17931 > URL: https://issues.apache.org/jira/browse/HBASE-17931 > Project: HBase > Issue Type: Bug > Components: scan >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Blocker > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17931.v01.patch > > > In branch-1 and master we have some improvement and new features on scanning > which is not compatible. > A client of old version to a server of new version is compatible (must be a > bug if not, maybe need some test?). > A client of new version may not be able to read from a server of old version > correctly (because of scan limit, moreResults flag, etc), which is ok for > major/minor upgrade and we can tell users to upgrade server before upgrading > client. But RS also use scan to read meta. If meta table is in RS of old > version, all RSs of new version may have trouble while scanning meta table. > So we should make sure meta table always in servers of new version. Force > meta table in Master and upgrade Master first, or assign meta table in region > servers with latest version? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004119#comment-16004119 ] Hudson commented on HBASE-8758: --- FAILURE: Integrated in Jenkins build HBase-1.4 #728 (See [https://builds.apache.org/job/HBase-1.4/728/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev ea89047abf20b1dbf55dd1fa758a5545984157be) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17917) Use pread by default for all user scan and switch to streaming read if needed
[ https://issues.apache.org/jira/browse/HBASE-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004099#comment-16004099 ] Duo Zhang commented on HBASE-17917: --- {quote} Can stream open be done in background? We keep preading till NN comes back? (Can be new issue). {quote} Yes, this is an important optimization. Can do it in a follow on issue. Thanks sir. > Use pread by default for all user scan and switch to streaming read if needed > - > > Key: HBASE-17917 > URL: https://issues.apache.org/jira/browse/HBASE-17917 > Project: HBase > Issue Type: Sub-task > Components: scan >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17917.patch, HBASE-17917-v1.patch, > HBASE-17917-v2.patch, HBASE-17917-v2.patch, HBASE-17917-v3.patch, > HBASE-17917-v4.patch, HBASE-17917-v5.patch > > > As said in the parent issue. We need some benchmark here first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17917) Use pread by default for all user scan and switch to streaming read if needed
[ https://issues.apache.org/jira/browse/HBASE-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004079#comment-16004079 ] stack commented on HBASE-17917: --- [~Apache9] Skimmed latest. It looks great. Nice cleanup. I like the making stuff private. [~lhofhansl] ! bq. and if the kvs we scanned reaches this limit, we will reopen the scanner with stream. Can stream open be done in background? We keep preading till NN comes back? (Can be new issue). +1 from me. > Use pread by default for all user scan and switch to streaming read if needed > - > > Key: HBASE-17917 > URL: https://issues.apache.org/jira/browse/HBASE-17917 > Project: HBase > Issue Type: Sub-task > Components: scan >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17917.patch, HBASE-17917-v1.patch, > HBASE-17917-v2.patch, HBASE-17917-v2.patch, HBASE-17917-v3.patch, > HBASE-17917-v4.patch, HBASE-17917-v5.patch > > > As said in the parent issue. We need some benchmark here first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
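The switch-over heuristic under discussion — scan with pread until a cell-count limit is crossed, then reopen the scanner in streaming mode — can be sketched as follows. All names here are illustrative, not the real StoreScanner/trySwitchToStreamRead API; in the real code the switch point is where the scanner would be reopened against HDFS in stream mode.

```java
// Sketch of the pread-then-stream heuristic discussed in HBASE-17917.
// The scanner counts cells returned; once the count passes a configurable
// limit, it flips to streaming mode (illustrative names, not the HBase API).
class PreadSwitchSketch {
    enum ReadType { PREAD, STREAM }

    private final long switchLimit;
    private long cellsScanned = 0;
    private ReadType readType = ReadType.PREAD;

    PreadSwitchSketch(long switchLimit) { this.switchLimit = switchLimit; }

    // Called per cell; flips to STREAM once enough cells have been read.
    // In the real code this is where the scanner would be reopened for streaming.
    ReadType onCellScanned() {
        cellsScanned++;
        if (readType == ReadType.PREAD && cellsScanned > switchLimit) {
            readType = ReadType.STREAM;
        }
        return readType;
    }

    public static void main(String[] args) {
        PreadSwitchSketch s = new PreadSwitchSketch(3);
        ReadType last = null;
        for (int i = 0; i < 5; i++) last = s.onCellScanned();
        System.out.println(last); // expected: STREAM
    }
}
```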
[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners
[ https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004048#comment-16004048 ] Chia-Ping Tsai commented on HBASE-18019: I will fix this after resolving HBASE-17887. Otherwise, the TestAcid* will fail and confuse us. > Clear redundant memstore scanners > - > > Key: HBASE-18019 > URL: https://issues.apache.org/jira/browse/HBASE-18019 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > > The HBASE-17655 remove the MemStoreScanner and it causes that the > MemStore#getScanner(readpt) returns multi KeyValueScanner which consist of > active, snapshot and pipeline. But StoreScanner only remove one mem scanner > when refreshing current scanners. > {code} > for (int i = 0; i < currentScanners.size(); i++) { > if (!currentScanners.get(i).isFileScanner()) { > currentScanners.remove(i); > break; > } > } > {code} > The older scanners kept in the StoreScanner will hinder GC from releasing > memory and lead to multiple scans on the same data. > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
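The loop quoted in the issue description stops after removing a single non-file scanner, which is exactly the bug: with CompactingMemStore there may be several memstore scanners (active, snapshot, pipeline). A fix in the spirit of the issue removes them all, e.g. with removeIf. The `KeyValueScanner` interface below is a minimal stand-in for the real HBase type, just enough to show the corrected loop.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for HBase's KeyValueScanner (illustration only).
interface KeyValueScanner {
    boolean isFileScanner();
}

class RefreshScannersSketch {
    // Remove *all* non-file (memstore) scanners, not just the first one —
    // a sketch of the HBASE-18019 fix, not the actual patch.
    static void dropMemstoreScanners(List<KeyValueScanner> currentScanners) {
        currentScanners.removeIf(s -> !s.isFileScanner());
    }

    public static void main(String[] args) {
        List<KeyValueScanner> scanners = new ArrayList<>();
        scanners.add(() -> true);   // file scanner: kept
        scanners.add(() -> false);  // e.g. active memstore scanner: dropped
        scanners.add(() -> false);  // e.g. snapshot memstore scanner: dropped
        dropMemstoreScanners(scanners);
        System.out.println(scanners.size()); // expected: 1
    }
}
```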
[jira] [Comment Edited] (HBASE-18019) Clear redundant memstore scanners
[ https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004027#comment-16004027 ] ramkrishna.s.vasudevan edited comment on HBASE-18019 at 5/10/17 4:29 AM: - Yeah when this code was written we only had the DefaultMemstore and also HBASE-17655 was not there. Yes we need to fix it. was (Author: ram_krish): Yeah when this code was return we only had the DefaultMemstore and also HBASE-17655 was not there. Yes we need to fix it. > Clear redundant memstore scanners > - > > Key: HBASE-18019 > URL: https://issues.apache.org/jira/browse/HBASE-18019 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > > The HBASE-17655 remove the MemStoreScanner and it causes that the > MemStore#getScanner(readpt) returns multi KeyValueScanner which consist of > active, snapshot and pipeline. But StoreScanner only remove one mem scanner > when refreshing current scanners. > {code} > for (int i = 0; i < currentScanners.size(); i++) { > if (!currentScanners.get(i).isFileScanner()) { > currentScanners.remove(i); > break; > } > } > {code} > The older scanners kept in the StoreScanner will hinder GC from releasing > memory and lead to multiple scans on the same data. > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17887) TestAcidGuarantees fails frequently
[ https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004031#comment-16004031 ] ramkrishna.s.vasudevan commented on HBASE-17887: bq. We could get rid of the ticker by passing a list of memstoreScanners on to ChangedReadersObserver. Ok, fine. bq. Pardon me, could you tell me more details? What I meant is that now we have {code} default List<KeyValueScanner> getScanners(List<StoreFile> files, boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt, boolean includeMemstoreScanner) throws IOException { {code} So we should now also pass the memstoreScanners to this, right? Instead of the ticker? > TestAcidGuarantees fails frequently > --- > > Key: HBASE-17887 > URL: https://issues.apache.org/jira/browse/HBASE-17887 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Umesh Agashe >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.4.1 > > Attachments: HBASE-17887.branch-1.v0.patch, > HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, > HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, > HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.ut.patch, HBASE-17887.v0.patch, HBASE-17887.v1.patch, > HBASE-17887.v2.patch, HBASE-17887.v3.patch > > > As per the flaky tests dashboard here: > https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html, > it fails 30% of the time. > While working on HBASE-17863, a few verification builds on the patch failed because > TestAcidGuarantees didn't pass. IMHO, the changes for HBASE-17863 are > unlikely to affect the get/put path. > I ran the test with and without the patch several times locally and found > that TestAcidGuarantees fails without the patch a similar number of times. 
> Opening blocker, considering acid guarantees are critical to HBase. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
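The API direction discussed in the comment above — handing the freshly created memstore scanners to ChangedReadersObserver instead of a ticker/read point — could look roughly like this sketch. Interface and class names are illustrative stand-ins, not the actual HBase types.

```java
import java.util.ArrayList;
import java.util.List;

public class ObserverSketch {
  interface KeyValueScanner {}
  static class MemStoreScanner implements KeyValueScanner {}
  static class StoreFileScanner implements KeyValueScanner {}

  // Hypothetical observer: notified when the set of readers changes (e.g. after
  // a flush), receiving the new memstore scanners directly instead of a ticker.
  interface ChangedReadersObserver {
    void updateReaders(List<StoreFileScanner> fileScanners,
                       List<KeyValueScanner> memStoreScanners);
  }

  static class StoreScannerSketch implements ChangedReadersObserver {
    final List<KeyValueScanner> current = new ArrayList<>();

    @Override
    public void updateReaders(List<StoreFileScanner> fileScanners,
                              List<KeyValueScanner> memStoreScanners) {
      // Rebuild the scanner set; no read-point re-resolution needed because the
      // caller already created the memstore scanners at the right read point.
      current.clear();
      current.addAll(fileScanners);
      current.addAll(memStoreScanners);
    }
  }

  public static void main(String[] args) {
    StoreScannerSketch ss = new StoreScannerSketch();
    List<StoreFileScanner> files = List.of(new StoreFileScanner());
    List<KeyValueScanner> mem = List.of(new MemStoreScanner(), new MemStoreScanner());
    ss.updateReaders(files, mem);
    System.out.println(ss.current.size()); // 3
  }
}
```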
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004032#comment-16004032 ] Anoop Sam John commented on HBASE-16993: You can configure the bucket sizes, right? The defaults are 4KB, 8, 16, 32... Each has to be a multiple of 256 bytes, right? So the maximum waste will be 255 bytes, and the user can configure at that granularity. Sorry, I am not getting why you think there is an 18 KB waste. Even with the default sizes, we have a 48 KB bucket, so for this 46 KB block that one will be used. Yes, 2 KB waste. But one can change the sizes, no? (In multiples of 256.) > BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, > HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, > HBASE-16993.master.005.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml settings: > hbase.bucketcache.bucket.sizes = 16384,32768,40960,46000,49152,51200,65536,131072,524288 > hbase.bucketcache.size = 16384 > hbase.bucketcache.ioengine = offheap > hfile.block.cache.size = 0.3 > hfile.block.bloom.cacheonwrite = true > hbase.rs.cacheblocksonwrite = true > hfile.block.index.cacheonwrite = true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > 
columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: > Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217) > at > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369) > at > org.apache.hadoop.hbas
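The waste arithmetic in the bucket-size discussion above can be checked with a small standalone sketch (this is not the real BucketAllocator): a block goes into the smallest configured bucket that fits, so the waste is the size difference.

```java
import java.util.Arrays;

public class BucketWaste {
  // Smallest configured bucket size that can hold blockSize, or -1 if none fits.
  static int pickBucket(int[] bucketSizes, int blockSize) {
    return Arrays.stream(bucketSizes).filter(b -> b >= blockSize).min().orElse(-1);
  }

  public static void main(String[] args) {
    // Illustrative default-style sizes including the 48 KB bucket mentioned above.
    int[] defaults = {4 * 1024, 8 * 1024, 16 * 1024, 32 * 1024, 48 * 1024, 64 * 1024};
    int block = 46 * 1024;
    int bucket = pickBucket(defaults, block);
    // The 46 KB block lands in the 48 KB bucket: 2 KB waste, not 18 KB.
    System.out.println((bucket - block) / 1024 + "KB wasted");

    // Since sizes are configurable in multiples of 256 bytes, a 46 KB bucket
    // (47104 = 184 * 256) can be added to eliminate the waste entirely.
    int[] tuned = {4 * 1024, 8 * 1024, 46 * 1024, 64 * 1024};
    System.out.println((pickBucket(tuned, block) - block) + " bytes wasted");
  }
}
```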
[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners
[ https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004027#comment-16004027 ] ramkrishna.s.vasudevan commented on HBASE-18019: Yeah, when this code was written we only had the DefaultMemstore, and HBASE-17655 was not there yet. Yes, we need to fix it. > Clear redundant memstore scanners > - > > Key: HBASE-18019 > URL: https://issues.apache.org/jira/browse/HBASE-18019 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > > HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now returns multiple KeyValueScanners covering the active segment, snapshot, and pipeline. But StoreScanner removes only one memstore scanner when refreshing its current scanners. > {code} > for (int i = 0; i < currentScanners.size(); i++) { > if (!currentScanners.get(i).isFileScanner()) { > currentScanners.remove(i); > break; > } > } > {code} > The older scanners kept in the StoreScanner hinder GC from releasing memory and lead to multiple scans over the same data. > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004018#comment-16004018 ] Hudson commented on HBASE-8758: --- SUCCESS: Integrated in Jenkins build HBase-1.3-IT #41 (See [https://builds.apache.org/job/HBase-1.3-IT/41/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev 286394ba636f3be0f27ced00d58227b48ff290e8) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004004#comment-16004004 ] Hudson commented on HBASE-8758: --- SUCCESS: Integrated in Jenkins build HBase-1.2-IT #867 (See [https://builds.apache.org/job/HBase-1.2-IT/867/]) HBASE-8758 Error in RegionCoprocessorHost class preScanner method (chia7712: rev b6d1b19a3e5a645c7906fb00870775598f9d1514) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-8758: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 1.2.7 1.3.2 1.4.0 2.0.0 Status: Resolved (was: Patch Available) > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7 > > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-8758) Error in RegionCoprocessorHost class preScanner method documentation.
[ https://issues.apache.org/jira/browse/HBASE-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003979#comment-16003979 ] Chia-Ping Tsai commented on HBASE-8758: --- Will commit it later. FYI [~yuzhih...@gmail.com] > Error in RegionCoprocessorHost class preScanner method documentation. > - > > Key: HBASE-8758 > URL: https://issues.apache.org/jira/browse/HBASE-8758 > Project: HBase > Issue Type: Bug > Components: Coprocessors, documentation >Affects Versions: 0.98.0, 0.95.2, 0.94.9 > Environment: Any. Actually it is just wrong comment in code. >Reporter: Roman Nikitchenko >Priority: Minor > Labels: beginner, comments, coprocessors, documentation > Attachments: 8758-r1545168.patch, HBASE-8758-r1545168.patch, > HBASE-8758-r1545168.patch > > > preScannerOpen() method of RegionCoprocessorHost class is documented to > return 'false' value in negative case (default operation should not be > bypassed). Actual implementation returns 'null' value. > Proposed solution is just to correct comment to match existing implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17938) General fault-tolerance framework for backup/restore operations
[ https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003970#comment-16003970 ] Ted Yu commented on HBASE-17938: For XXTableBackupClient, you can make failStageIf() a no-op. In test(s), create a test client which extends XXTableBackupClient with a failStageIf() that does fault injection. This would reduce code duplication. > General fault-tolerance framework for backup/restore operations > - > > Key: HBASE-17938 > URL: https://issues.apache.org/jira/browse/HBASE-17938 > Project: HBase > Issue Type: Sub-task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch, > HBASE-17938-v3.patch, HBASE-17938-v4.patch, HBASE-17938-v5.patch, > HBASE-17938-v6.patch > > > The framework must take care of all general types of failures during backup/restore and restore the system to its original state in case of a failure. > That won't solve all the possible issues, but we have separate JIRAs for them as sub-tasks of HBASE-15277 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
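The test-hook pattern suggested in the comment above can be sketched as follows. Apart from the failStageIf() hook itself, the class and method names are illustrative, not the actual backup client code.

```java
public class FaultInjectionSketch {
  static class BackupClient {
    // No-op in production; test subclasses may throw to simulate a failure.
    protected void failStageIf(int stage) throws Exception {}

    public String runBackup() throws Exception {
      for (int stage = 1; stage <= 3; stage++) {
        failStageIf(stage);
        // ... real per-stage backup work would happen here ...
      }
      return "SUCCESS";
    }
  }

  // Test-only subclass: exercises the exact same code path, with a fault
  // injected at one chosen stage. No duplicated backup logic.
  static class FailingBackupClient extends BackupClient {
    private final int failAt;
    FailingBackupClient(int failAt) { this.failAt = failAt; }

    @Override
    protected void failStageIf(int stage) throws Exception {
      if (stage == failAt) {
        throw new Exception("injected failure at stage " + stage);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(new BackupClient().runBackup()); // SUCCESS
    try {
      new FailingBackupClient(2).runBackup();
    } catch (Exception e) {
      System.out.println(e.getMessage()); // injected failure at stage 2
    }
  }
}
```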
[jira] [Updated] (HBASE-18012) Move RpcServer.Connection to a separated file
[ https://issues.apache.org/jira/browse/HBASE-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-18012: -- Attachment: HBASE-18012-v1.patch > Move RpcServer.Connection to a separated file > - > > Key: HBASE-18012 > URL: https://issues.apache.org/jira/browse/HBASE-18012 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-18012.patch, HBASE-18012-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003940#comment-16003940 ] liubangchen commented on HBASE-16993: - In my personal opinion, using the offset as a long is better; if a bucket is 64KB but the data block is only 46KB, then 18KB will be wasted > BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, > HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, > HBASE-16993.master.005.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml settings: > hbase.bucketcache.bucket.sizes = 16384,32768,40960,46000,49152,51200,65536,131072,524288 > hbase.bucketcache.size = 16384 > hbase.bucketcache.ioengine = offheap > hfile.block.cache.size = 0.3 > hfile.block.bloom.cacheonwrite = true > hbase.rs.cacheblocksonwrite = true > hfile.block.index.cacheonwrite = true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p 
fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: > Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217) > at > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071) > at > 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546) > at > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532) > at > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514) > at org.apache.hadoop.hbase.regionse
[jira] [Commented] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
[ https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003926#comment-16003926 ] Dima Spivak commented on HBASE-18020: - Some thoughts from a brief glance: 1. Can you put this up on Apache's Review Board? 2. Yay for using code from other projects, but are we sure we don't want to use Python 3 for this? Lots of lines of code in this patch could be removed/the whole thing made significantly more readable with some of the constructs introduced in Python 3 and, seeing as how we control our test environment with Docker and the like, there's no compelling reason not to use it. (Also, Python 3 has been out for ten years. It's time to use it.) > Update API Compliance Checker to Incorporate Improvements Done in Hadoop > > > Key: HBASE-18020 > URL: https://issues.apache.org/jira/browse/HBASE-18020 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Alex Leblang >Assignee: Alex Leblang > Fix For: 2.0.0 > > Attachments: HBASE-18020.0.patch > > > Recently the Hadoop community has made a number of improvements in their api > compliance checker based on feedback from the hbase and kudu community. We > should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HBASE-18012) Move RpcServer.Connection to a separated file
[ https://issues.apache.org/jira/browse/HBASE-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-18012: - Assignee: Duo Zhang > Move RpcServer.Connection to a separated file > - > > Key: HBASE-18012 > URL: https://issues.apache.org/jira/browse/HBASE-18012 > Project: HBase > Issue Type: Sub-task > Components: IPC/RPC >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-18012.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003905#comment-16003905 ] Zheng Hu commented on HBASE-11013: -- Sorry for my mistake, SnapshotDescription with an optional field would be a better solution for it . Thanks for reminding. Will upload addendum for it. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17928) Shell tool to clear compaction queues
[ https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17928: --- Summary: Shell tool to clear compaction queues (was: Shell tool to clear compact queues) > Shell tool to clear compaction queues > - > > Key: HBASE-17928 > URL: https://issues.apache.org/jira/browse/HBASE-17928 > Project: HBase > Issue Type: New Feature > Components: Compaction, Operability >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng > Fix For: 2.0.0 > > Attachments: 17928-v5.patch, HBASE-17928-branch-1-v1.patch, > HBASE-17928-branch-1-v2.patch, HBASE-17928-v1.patch, HBASE-17928-v2.patch, > HBASE-17928-v3.patch, HBASE-17928-v4.patch, HBASE-17928-v5.patch > > > Scenario: > 1. A table is compacted by mistake > 2. The compaction is not completed within the specified time period > In this case, clearing the queue is a better choice, so as not to affect the > stability of the cluster -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18019) Clear redundant memstore scanners
[ https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003894#comment-16003894 ] Duo Zhang commented on HBASE-18019: --- Yeah I also found this problem when implementing HBASE-17917 but I haven't analyzed it deeply yet. > Clear redundant memstore scanners > - > > Key: HBASE-18019 > URL: https://issues.apache.org/jira/browse/HBASE-18019 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > > The HBASE-17655 remove the MemStoreScanner and it causes that the > MemStore#getScanner(readpt) returns multi KeyValueScanner which consist of > active, snapshot and pipeline. But StoreScanner only remove one mem scanner > when refreshing current scanners. > {code} > for (int i = 0; i < currentScanners.size(); i++) { > if (!currentScanners.get(i).isFileScanner()) { > currentScanners.remove(i); > break; > } > } > {code} > The older scanners kept in the StoreScanner will hinder GC from releasing > memory and lead to multiple scans on the same data. > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17917) Use pread by default for all user scan and switch to streaming read if needed
[ https://issues.apache.org/jira/browse/HBASE-17917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003893#comment-16003893 ] Duo Zhang commented on HBASE-17917: --- Ping [~lhofhansl] [~stack]. Thanks. > Use pread by default for all user scan and switch to streaming read if needed > - > > Key: HBASE-17917 > URL: https://issues.apache.org/jira/browse/HBASE-17917 > Project: HBase > Issue Type: Sub-task > Components: scan >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17917.patch, HBASE-17917-v1.patch, > HBASE-17917-v2.patch, HBASE-17917-v2.patch, HBASE-17917-v3.patch, > HBASE-17917-v4.patch, HBASE-17917-v5.patch > > > As said in the parent issue. We need some benchmark here first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003890#comment-16003890 ] Duo Zhang commented on HBASE-11013: --- +1 on [~zghaobac]'s comment. Let's add an optional field to SnapshotDescription? Thanks. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues
[ https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003887#comment-16003887 ] stack commented on HBASE-17928: --- +1. Operator ask > Shell tool to clear compact queues > -- > > Key: HBASE-17928 > URL: https://issues.apache.org/jira/browse/HBASE-17928 > Project: HBase > Issue Type: New Feature > Components: Compaction, Operability >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng > Fix For: 2.0.0 > > Attachments: 17928-v5.patch, HBASE-17928-branch-1-v1.patch, > HBASE-17928-branch-1-v2.patch, HBASE-17928-v1.patch, HBASE-17928-v2.patch, > HBASE-17928-v3.patch, HBASE-17928-v4.patch, HBASE-17928-v5.patch > > > Scenario: > 1. A table is compacted by mistake > 2. The compaction is not completed within the specified time period > In this case, clearing the queue is a better choice, so as not to affect the > stability of the cluster -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003865#comment-16003865 ] Guanghao Zhang commented on HBASE-11013: We use protobuf for the snapshot description and write the pb message to a .snapshotinfo file. So we can add the permissions to the snapshot description protobuf directly, and don't need to write an additional file named .aclinfo? Thanks. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user test_user would not have permissions on the cloned table t2. > We need to add an improvement such that the permissions of the original > table are recorded in the snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18017) Reduce frequency of setStoragePolicy failure warnings
[ https://issues.apache.org/jira/browse/HBASE-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003857#comment-16003857 ] Hudson commented on HBASE-18017: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2981 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2981/]) HBASE-18017 Reduce frequency of setStoragePolicy failure warnings (apurtell: rev c38bf12444aca77c7cb12637147c07dc711acbe9) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java > Reduce frequency of setStoragePolicy failure warnings > - > > Key: HBASE-18017 > URL: https://issues.apache.org/jira/browse/HBASE-18017 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-18017.patch, HBASE-18017.patch > > > When running with storage policy specification support if the underlying HDFS > doesn't support it or if it has been disabled in site configuration the > resulting logging is excessive. Log at WARN level once per FileSystem > instance. Otherwise, log messages at DEBUG level. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
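The log-once-per-FileSystem-instance policy from the HBASE-18017 summary above can be sketched like this. println stands in for a real logger, and the names are illustrative, not the actual FSUtils code.

```java
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

public class WarnOnceSketch {
  // Weak keys so FileSystem-like instances can still be garbage collected
  // even though we remember which ones we have already warned about.
  private static final Set<Object> warned =
      Collections.newSetFromMap(new WeakHashMap<>());

  static String logPolicyFailure(Object fsInstance, String msg) {
    if (warned.add(fsInstance)) {
      return "WARN: " + msg;   // first failure for this FileSystem instance
    }
    return "DEBUG: " + msg;    // repeats are demoted to DEBUG
  }

  public static void main(String[] args) {
    Object fs = new Object(); // stand-in for a FileSystem instance
    System.out.println(logPolicyFailure(fs, "unable to set storage policy"));
    System.out.println(logPolicyFailure(fs, "unable to set storage policy"));
  }
}
```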
[jira] [Commented] (HBASE-15199) Move jruby jar so only on hbase-shell module classpath; currently globally available
[ https://issues.apache.org/jira/browse/HBASE-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003858#comment-16003858 ] Hudson commented on HBASE-15199: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2981 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2981/]) HBASE-15199 (addendum) - When JRUBY_HOME is specified, update CLASSPATH (busbey: rev b67f6fecc173ff1272284f3e47f95d493fab331d) * (edit) bin/hbase * (edit) bin/hbase.cmd > Move jruby jar so only on hbase-shell module classpath; currently globally > available > > > Key: HBASE-15199 > URL: https://issues.apache.org/jira/browse/HBASE-15199 > Project: HBase > Issue Type: Task > Components: dependencies, jruby, shell >Reporter: stack >Assignee: Xiang Li >Priority: Critical > Fix For: 2.0.0 > > Attachments: 15199.txt, HBASE-15199-addendum.master.000.patch, > HBASE-15199.master.001.patch, HBASE-15199.master.002.patch, > HBASE-15199.master.003.patch > > > A suggestion that came up out of internal issue (filed by Mr Jan Van Besien) > was to move the scope of the jruby include down so it is only a dependency > for the hbase-shell. jruby jar brings in a bunch of dependencies (joda time > for example) which can clash with the includes of others. Our Sean suggests > that could be good to shut down exploit possibilities if jruby was not > globally available. Only downside I can think is that it may no longer be > available to our bin/*rb scripts if we move the jar but perhaps these can be > changed so they can find the ruby jar in new location. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003839#comment-16003839 ] Hadoop QA commented on HBASE-18021: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 54m 44s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 17s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 24s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867212/HBASE-18021-master-002.patch | | JIRA Issue | HBASE-18021 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 3ec6213de168 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / b67f6fe | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/6743/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/6743/testReport/ | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6743/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Add more info in timed out RetriesExhaustedException for read replica client > get processing, >
[jira] [Comment Edited] (HBASE-17938) General fault - tolerance framework for backup/restore operations
[ https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003833#comment-16003833 ] Vladimir Rodionov edited comment on HBASE-17938 at 5/10/17 12:41 AM: - We have discussed this already. If some step fails during failBackup execution, the user will be notified of the failure and advised to run the repair tool manually. I will fix the wording of the IOException for the case where the operation fails in the repair phase (failBackup). Can you comment on this on RB, [~tedyu]? was (Author: vrodionov): We have discussed this already. If some step fail during failBackup execution, user will be notified of a failure and advised to run repair tool manually. I will fix the wording of IOException in case if operation fails in repair phase (failBackup) > General fault - tolerance framework for backup/restore operations > - > > Key: HBASE-17938 > URL: https://issues.apache.org/jira/browse/HBASE-17938 > Project: HBase > Issue Type: Sub-task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch, > HBASE-17938-v3.patch, HBASE-17938-v4.patch, HBASE-17938-v5.patch, > HBASE-17938-v6.patch > > > The framework must take care of all general types of failures during backup/ > restore and restore the system to its original state in case of a failure. > That won't solve all the possible issues, but we have separate JIRAs for > them as sub-tasks of HBASE-15277 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17938) General fault - tolerance framework for backup/restore operations
[ https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003833#comment-16003833 ] Vladimir Rodionov commented on HBASE-17938: --- We have discussed this already. If some step fails during failBackup execution, the user will be notified of the failure and advised to run the repair tool manually. I will fix the wording of the IOException for the case where the operation fails in the repair phase (failBackup) > General fault - tolerance framework for backup/restore operations > - > > Key: HBASE-17938 > URL: https://issues.apache.org/jira/browse/HBASE-17938 > Project: HBase > Issue Type: Sub-task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch, > HBASE-17938-v3.patch, HBASE-17938-v4.patch, HBASE-17938-v5.patch, > HBASE-17938-v6.patch > > > The framework must take care of all general types of failures during backup/ > restore and restore the system to its original state in case of a failure. > That won't solve all the possible issues, but we have separate JIRAs for > them as sub-tasks of HBASE-15277 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17938) General fault - tolerance framework for backup/restore operations
[ https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003820#comment-16003820 ] Ted Yu commented on HBASE-17938: In cleanupAndRestoreBackupSystem(), {code} if (type == BackupType.FULL) { deleteSnapshots(conn, backupInfo, conf); cleanupExportSnapshotLog(conf); } restoreBackupTable(conn, conf); {code} What if deleteSnapshots() throws an exception? restoreBackupTable() would be skipped. > General fault - tolerance framework for backup/restore operations > - > > Key: HBASE-17938 > URL: https://issues.apache.org/jira/browse/HBASE-17938 > Project: HBase > Issue Type: Sub-task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch, > HBASE-17938-v3.patch, HBASE-17938-v4.patch, HBASE-17938-v5.patch, > HBASE-17938-v6.patch > > > The framework must take care of all general types of failures during backup/ > restore and restore the system to its original state in case of a failure. > That won't solve all the possible issues, but we have separate JIRAs for > them as sub-tasks of HBASE-15277 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
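Ted Yu's question points at a classic cleanup-ordering bug: if the first cleanup step throws, the later restore step never runs. A try/finally is the usual fix. The sketch below is a hypothetical, self-contained illustration of that pattern; the method names mirror the quoted snippet, but the bodies are stand-ins, not the actual backup/restore code.

```java
// Illustrative sketch: wrap the snapshot cleanup in try/finally so
// restoreBackupTable() runs even when deleteSnapshots() throws.
public class CleanupSketch {
    // Records which steps ran, so the control flow is observable.
    static final StringBuilder LOG = new StringBuilder();

    static void deleteSnapshots() { LOG.append("deleteSnapshots;"); throw new RuntimeException("snapshot delete failed"); }
    static void cleanupExportSnapshotLog() { LOG.append("cleanupLog;"); }
    static void restoreBackupTable() { LOG.append("restore;"); }

    static void cleanupAndRestore(boolean fullBackup) {
        try {
            if (fullBackup) {
                deleteSnapshots();           // may throw
                cleanupExportSnapshotLog();  // skipped if the line above throws
            }
        } finally {
            restoreBackupTable();            // always runs, even on failure
        }
    }

    public static void main(String[] args) {
        try {
            cleanupAndRestore(true);
        } catch (RuntimeException expected) {
            // The original exception still propagates, but the restore step ran.
        }
        System.out.println(LOG); // deleteSnapshots;restore;
    }
}
```

Note the trade-off: the finally block guarantees the restore attempt, while the original exception still surfaces to the caller so the failure is not silently swallowed.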
[jira] [Commented] (HBASE-15199) Move jruby jar so only on hbase-shell module classpath; currently globally available
[ https://issues.apache.org/jira/browse/HBASE-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003805#comment-16003805 ] Xiang Li commented on HBASE-15199: -- Thanks everyone for the review! > Move jruby jar so only on hbase-shell module classpath; currently globally > available > > > Key: HBASE-15199 > URL: https://issues.apache.org/jira/browse/HBASE-15199 > Project: HBase > Issue Type: Task > Components: dependencies, jruby, shell >Reporter: stack >Assignee: Xiang Li >Priority: Critical > Fix For: 2.0.0 > > Attachments: 15199.txt, HBASE-15199-addendum.master.000.patch, > HBASE-15199.master.001.patch, HBASE-15199.master.002.patch, > HBASE-15199.master.003.patch > > > A suggestion that came up out of an internal issue (filed by Mr Jan Van Besien) > was to move the scope of the jruby include down so it is only a dependency > for hbase-shell. The jruby jar brings in a bunch of dependencies (joda time > for example) which can clash with the includes of others. Our Sean suggests > it could be good to shut down exploit possibilities if jruby was not > globally available. The only downside I can think of is that it may no longer be > available to our bin/*rb scripts if we move the jar, but perhaps these can be > changed so they can find the ruby jar in the new location. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003771#comment-16003771 ] Hadoop QA commented on HBASE-18021: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 6s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 55m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 46s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 45s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867199/HBASE-18021-master-001.patch | | JIRA Issue | HBASE-18021 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 4ac427450f00 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / b67f6fe | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/6742/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/6742/testReport/ | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6742/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > -
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003759#comment-16003759 ] Zheng Hu commented on HBASE-11013: -- Fine, will upload patch for branch-1 later. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user test_user would not have permissions for the > cloned table t2. > We need to add an improvement such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003755#comment-16003755 ] Zach York commented on HBASE-18021: --- +1 LGTM > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > - > > Key: HBASE-18021 > URL: https://issues.apache.org/jira/browse/HBASE-18021 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Minor > Attachments: HBASE-18021-master-001.patch, > HBASE-18021-master-002.patch > > > Right now, when the client does not receive results from replica servers > within configured timeout period, the client does not print out info which > helps to understand/identify the cause. Please see > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 > More info needs to be filled in the exception so it helps to pinpoint the > root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-18021: - Attachment: HBASE-18021-master-002.patch V2 addressed Zach's comments. > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > - > > Key: HBASE-18021 > URL: https://issues.apache.org/jira/browse/HBASE-18021 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Minor > Attachments: HBASE-18021-master-001.patch, > HBASE-18021-master-002.patch > > > Right now, when the client does not receive results from replica servers > within configured timeout period, the client does not print out info which > helps to understand/identify the cause. Please see > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 > More info needs to be filled in the exception so it helps to pinpoint the > root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
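The improvement tracked here is about making a timeout exception self-diagnosing by putting context into its message. The sketch below illustrates the idea (and the "Timed out" capitalization from Zach York's review note); the specific fields shown, the timeout value, replica count, and number of completed calls, are assumptions for illustration, not the exact message the patch adds.

```java
// Illustrative sketch: a replica-read timeout message that carries enough
// context (timeout used, how many replica calls finished) to pinpoint the
// cause without re-running the failing Get under a debugger.
public class TimeoutMessageSketch {
    /** Builds an exception whose message describes how far processing got. */
    static RuntimeException timedOut(long timeoutMs, int replicaCount, int completedCalls) {
        return new RuntimeException("Timed out after " + timeoutMs + "ms; "
            + completedCalls + " of " + replicaCount + " replica calls completed");
    }

    public static void main(String[] args) {
        // Compare against a bare "timed out": this message says the timeout
        // that was in force and that zero of three replica calls finished.
        System.out.println(timedOut(1000, 3, 0).getMessage());
    }
}
```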
[jira] [Commented] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type
[ https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003717#comment-16003717 ] Hadoop QA commented on HBASE-17343: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 30m 57s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 126m 46s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 183m 44s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867170/HBASE-17343-V09.patch | | JIRA Issue | HBASE-17343 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 2a7d6b866c3e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / c38bf12 | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/6740/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6740/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Make Compacting Memstore default in 2.0 with BASIC as the default type > -- > > Key: HBASE-17343 > URL: https://issues.apache.org/jira/browse/HBASE-17343 > Project: HBase > Issue Type: New Feature > Compone
[jira] [Updated] (HBASE-17938) General fault - tolerance framework for backup/restore operations
[ https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17938: -- Attachment: HBASE-17938-v6.patch v6 > General fault - tolerance framework for backup/restore operations > - > > Key: HBASE-17938 > URL: https://issues.apache.org/jira/browse/HBASE-17938 > Project: HBase > Issue Type: Sub-task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch, > HBASE-17938-v3.patch, HBASE-17938-v4.patch, HBASE-17938-v5.patch, > HBASE-17938-v6.patch > > > The framework must take care of all general types of failures during backup/ > restore and restore the system to its original state in case of a failure. > That won't solve all the possible issues, but we have separate JIRAs for > them as sub-tasks of HBASE-15277 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003699#comment-16003699 ] huaxiang sun commented on HBASE-18021: -- Thanks [~zyork], will upload a new patch with fixed capitalization. > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > - > > Key: HBASE-18021 > URL: https://issues.apache.org/jira/browse/HBASE-18021 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Minor > Attachments: HBASE-18021-master-001.patch > > > Right now, when the client does not receive results from replica servers > within configured timeout period, the client does not print out info which > helps to understand/identify the cause. Please see > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 > More info needs to be filled in the exception so it helps to pinpoint the > root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003688#comment-16003688 ] Zach York commented on HBASE-18021: --- Minor: While you are fixing this, please fix the capitalization. bq. throw new RetriesExhaustedException("timed out after " throw new RetriesExhaustedException("Timed out after " Otherwise LGTM! > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > - > > Key: HBASE-18021 > URL: https://issues.apache.org/jira/browse/HBASE-18021 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Minor > Attachments: HBASE-18021-master-001.patch > > > Right now, when the client does not receive results from replica servers > within configured timeout period, the client does not print out info which > helps to understand/identify the cause. Please see > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 > More info needs to be filled in the exception so it helps to pinpoint the > root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-18021: - Status: Patch Available (was: Open) > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > - > > Key: HBASE-18021 > URL: https://issues.apache.org/jira/browse/HBASE-18021 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Minor > Attachments: HBASE-18021-master-001.patch > > > Right now, when the client does not receive results from replica servers > within configured timeout period, the client does not print out info which > helps to understand/identify the cause. Please see > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 > More info needs to be filled in the exception so it helps to pinpoint the > root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
[ https://issues.apache.org/jira/browse/HBASE-18021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun updated HBASE-18021: - Attachment: HBASE-18021-master-001.patch > Add more info in timed out RetriesExhaustedException for read replica client > get processing, > - > > Key: HBASE-18021 > URL: https://issues.apache.org/jira/browse/HBASE-18021 > Project: HBase > Issue Type: Improvement > Components: Client >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun >Priority: Minor > Attachments: HBASE-18021-master-001.patch > > > Right now, when the client does not receive results from replica servers > within configured timeout period, the client does not print out info which > helps to understand/identify the cause. Please see > https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 > More info needs to be filled in the exception so it helps to pinpoint the > root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-18021) Add more info in timed out RetriesExhaustedException for read replica client get processing,
huaxiang sun created HBASE-18021: Summary: Add more info in timed out RetriesExhaustedException for read replica client get processing, Key: HBASE-18021 URL: https://issues.apache.org/jira/browse/HBASE-18021 Project: HBase Issue Type: Improvement Components: Client Affects Versions: 2.0.0 Reporter: huaxiang sun Assignee: huaxiang sun Priority: Minor Right now, when the client does not receive results from replica servers within configured timeout period, the client does not print out info which helps to understand/identify the cause. Please see https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java#L212 More info needs to be filled in the exception so it helps to pinpoint the root cause quickly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
[ https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003648#comment-16003648 ] Hadoop QA commented on HBASE-18020: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 1s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 7s {color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} pylint {color} | {color:red} 0m 4s {color} | {color:red} The patch generated 288 new + 0 unchanged - 0 fixed = 288 total (was 0) {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 4s {color} | {color:green} The patch generated 0 new + 498 unchanged - 20 fixed = 498 total (was 518) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 20s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 59s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867191/HBASE-18020.0.patch | | JIRA Issue | HBASE-18020 | | Optional Tests | asflicense shellcheck shelldocs pylint | | uname | Linux b56e71d55dae 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / b67f6fe | | shellcheck | v0.4.6 | | pylint | v1.7.1 | | pylint | https://builds.apache.org/job/PreCommit-HBASE-Build/6741/artifact/patchprocess/diff-patch-pylint.txt | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6741/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Update API Compliance Checker to Incorporate Improvements Done in Hadoop > > > Key: HBASE-18020 > URL: https://issues.apache.org/jira/browse/HBASE-18020 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Alex Leblang >Assignee: Alex Leblang > Fix For: 2.0.0 > > Attachments: HBASE-18020.0.patch > > > Recently the Hadoop community has made a number of improvements in their api > compliance checker based on feedback from the hbase and kudu community. We > should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17928) Shell tool to clear compact queues
[ https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003607#comment-16003607 ] Hadoop QA commented on HBASE-17928: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s {color} | {color:blue} rubocop was not available. {color} | | {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s {color} | {color:blue} Ruby-lint was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 9s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 26m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 20s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 55s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 25m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | 
{color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 55m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 23s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 47s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 21s {color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 282m 13s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot | | | org.apache.hadoop.hbase.snapshot.TestMo
[jira] [Updated] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
[ https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Leblang updated HBASE-18020: - Attachment: HBASE-18020.0.patch > Update API Compliance Checker to Incorporate Improvements Done in Hadoop > > > Key: HBASE-18020 > URL: https://issues.apache.org/jira/browse/HBASE-18020 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Alex Leblang >Assignee: Alex Leblang > Fix For: 2.0.0 > > Attachments: HBASE-18020.0.patch > > > Recently the Hadoop community has made a number of improvements in their api > compliance checker based on feedback from the hbase and kudu community. We > should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
[ https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Leblang updated HBASE-18020: - Status: Patch Available (was: Open) > Update API Compliance Checker to Incorporate Improvements Done in Hadoop > > > Key: HBASE-18020 > URL: https://issues.apache.org/jira/browse/HBASE-18020 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Alex Leblang >Assignee: Alex Leblang > Fix For: 2.0.0 > > Attachments: HBASE-18020.0.patch > > > Recently the Hadoop community has made a number of improvements in their api > compliance checker based on feedback from the hbase and kudu community. We > should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-15199) Move jruby jar so only on hbase-shell module classpath; currently globally available
[ https://issues.apache.org/jira/browse/HBASE-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-15199: Resolution: Fixed Status: Resolved (was: Patch Available) > Move jruby jar so only on hbase-shell module classpath; currently globally > available > > > Key: HBASE-15199 > URL: https://issues.apache.org/jira/browse/HBASE-15199 > Project: HBase > Issue Type: Task > Components: dependencies, jruby, shell >Reporter: stack >Assignee: Xiang Li >Priority: Critical > Fix For: 2.0.0 > > Attachments: 15199.txt, HBASE-15199-addendum.master.000.patch, > HBASE-15199.master.001.patch, HBASE-15199.master.002.patch, > HBASE-15199.master.003.patch > > > A suggestion that came up out of internal issue (filed by Mr Jan Van Besien) > was to move the scope of the jruby include down so it is only a dependency > for the hbase-shell. jruby jar brings in a bunch of dependencies (joda time > for example) which can clash with the includes of others. Our Sean suggests > that could be good to shut down exploit possibilities if jruby was not > globally available. Only downside I can think is that it may no longer be > available to our bin/*rb scripts if we move the jar but perhaps these can be > changed so they can find the ruby jar in new location. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
[ https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey reassigned HBASE-18020: --- Assignee: Alex Leblang > Update API Compliance Checker to Incorporate Improvements Done in Hadoop > > > Key: HBASE-18020 > URL: https://issues.apache.org/jira/browse/HBASE-18020 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Alex Leblang >Assignee: Alex Leblang > Fix For: 2.0.0 > > > Recently the Hadoop community has made a number of improvements in their api > compliance checker based on feedback from the hbase and kudu community. We > should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
[ https://issues.apache.org/jira/browse/HBASE-18020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-18020: Fix Version/s: 2.0.0 > Update API Compliance Checker to Incorporate Improvements Done in Hadoop > > > Key: HBASE-18020 > URL: https://issues.apache.org/jira/browse/HBASE-18020 > Project: HBase > Issue Type: Improvement > Components: API, community >Reporter: Alex Leblang > Fix For: 2.0.0 > > > Recently the Hadoop community has made a number of improvements in their api > compliance checker based on feedback from the hbase and kudu community. We > should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-18020) Update API Compliance Checker to Incorporate Improvements Done in Hadoop
Alex Leblang created HBASE-18020: Summary: Update API Compliance Checker to Incorporate Improvements Done in Hadoop Key: HBASE-18020 URL: https://issues.apache.org/jira/browse/HBASE-18020 Project: HBase Issue Type: Improvement Components: API, community Reporter: Alex Leblang Recently the Hadoop community has made a number of improvements in their api compliance checker based on feedback from the hbase and kudu community. We should adopt these changes ourselves. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17938) General fault - tolerance framework for backup/restore operations
[ https://issues.apache.org/jira/browse/HBASE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17938: -- Attachment: HBASE-17938-v5.patch v5 addresses some RB comments. cc: [~tedyu] > General fault - tolerance framework for backup/restore operations > - > > Key: HBASE-17938 > URL: https://issues.apache.org/jira/browse/HBASE-17938 > Project: HBase > Issue Type: Sub-task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17938-v1.patch, HBASE-17938-v2.patch, > HBASE-17938-v3.patch, HBASE-17938-v4.patch, HBASE-17938-v5.patch > > > The framework must take care of all general types of failures during backup/ > restore and restore system to the original state in case of a failure. > That won't solve all the possible issues but we have a separate JIRAs for > them as a sub-tasks of HBASE-15277 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16356) REST API scanner: row prefix filter and custom filter parameters are mutually exclusive
[ https://issues.apache.org/jira/browse/HBASE-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003432#comment-16003432 ] Hudson commented on HBASE-16356: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2980 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2980/]) HBASE-16356 REST API scanner: row prefix filter and custom filter (tedyu: rev ac1024af213158d6528ffec964f2bf4aadd9ccd3) * (edit) hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java * (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java > REST API scanner: row prefix filter and custom filter parameters are mutually > exclusive > --- > > Key: HBASE-16356 > URL: https://issues.apache.org/jira/browse/HBASE-16356 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 1.1.2 > Environment: Not environment specific (tested on HDP 2.4.2) >Reporter: Bjorn Olsen >Assignee: Ben Watson >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: TableResource-HBASE-16356.patch > > > A user can optionally specify a row PrefixFilter, or a list of custom > filters, to the REST API scanner. > Prefix filter example: > /123*?startrow=0&endrow=9 > Custom filters example: > /*?startrow=0&endrow=9&filter=RowFilter(=,'substring:456) > This works when specified separately, like above. > However, specifying both a prefix filter and a list of custom filters causes > the API to ignore the prefix filter. 
> Example using both parameters: > /123*?startrow=0&endrow=9&filter=RowFilter(=,'substring:456) > It appears that code in the TableResource.getScanResource function is causing > this issue as follows: > (see > https://hbase.apache.org/devapidocs/src-html/org/apache/hadoop/hbase/rest/TableResource.html#line.196 > ) > if (filterList != null) { > tableScan.setFilter(filterList); /*comes from custom filters parameter*/ > } else if (filter != null) { > tableScan.setFilter(filter); > /*comes from row prefix parameter*/ > } > This should probably be changed to use a single filterList for both > parameters. The prefix filter can be pushed onto the filter list and then > these parameters will work even when both are specified. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
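A simplified sketch of the fix direction described above: instead of the either/or choice in getScanResource, fold the prefix filter into the same list as the user-supplied filters. The classes below are minimal stand-ins for the HBase filter API (org.apache.hadoop.hbase.filter), not the real implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterCombineSketch {
    /** Stand-in for the HBase Filter interface. */
    interface Filter { boolean matches(String row); }

    /** Stand-in for the PrefixFilter built from a /123* path segment. */
    static class PrefixFilter implements Filter {
        private final String prefix;
        PrefixFilter(String prefix) { this.prefix = prefix; }
        public boolean matches(String row) { return row.startsWith(prefix); }
    }

    /** Stand-in for a RowFilter with a substring comparator. */
    static class SubstringRowFilter implements Filter {
        private final String needle;
        SubstringRowFilter(String needle) { this.needle = needle; }
        public boolean matches(String row) { return row.contains(needle); }
    }

    /** Stand-in for FilterList with MUST_PASS_ALL semantics. */
    static class FilterList implements Filter {
        private final List<Filter> filters = new ArrayList<>();
        void addFilter(Filter f) { filters.add(f); }
        public boolean matches(String row) {
            for (Filter f : filters) {
                if (!f.matches(row)) return false;
            }
            return true;
        }
    }

    /** Fold the prefix filter into one list so both constraints apply. */
    static Filter combine(Filter prefixFilter, List<Filter> customFilters) {
        FilterList list = new FilterList();
        if (prefixFilter != null) {
            list.addFilter(prefixFilter);
        }
        for (Filter f : customFilters) {
            list.addFilter(f);
        }
        return list;
    }

    public static void main(String[] args) {
        List<Filter> custom = new ArrayList<>();
        custom.add(new SubstringRowFilter("456"));
        Filter combined = combine(new PrefixFilter("123"), custom);
        // A row must now satisfy BOTH the prefix and the substring filter.
        System.out.println(combined.matches("123-456-789")); // true
        System.out.println(combined.matches("999-456-789")); // false: prefix fails
    }
}
```

With this shape, the buggy `if (filterList != null) ... else if (filter != null)` branch collapses into a single `tableScan.setFilter(combined)` call.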
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003433#comment-16003433 ] Hudson commented on HBASE-11013: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2980 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2980/]) HBASE-11013: Clone Snapshots on Secure Cluster Should provide option to (tedyu: rev 951b23a44cd90ae4afed9b255de0e678fbfba946) * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java * (edit) hbase-protocol-shaded/src/main/protobuf/Master.proto * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java * (edit) hbase-shell/src/main/ruby/hbase_constants.rb * (edit) hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/MasterProtos.java * (edit) hbase-shell/src/main/ruby/shell/commands/clone_snapshot.rb * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java * (add) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotWithAcl.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CloneSnapshotProcedure.java * (edit) hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestReplicationShell.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java * (edit) 
hbase-shell/src/main/ruby/hbase/admin.rb > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user test_user would not have permissions for the > cloned table t2. > We need to add an improvement such that the permissions of the original > table are recorded in the snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003425#comment-16003425 ] Ted Yu commented on HBASE-11013: In branch-1, we have CloneSnapshotHandler in place of CloneSnapshotProcedure. Applying the master branch patch resulted in many conflicts. Zheng: See if you have time to do the backport. Thanks > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17343) Make Compacting Memstore default in 2.0 with BASIC as the default type
[ https://issues.apache.org/jira/browse/HBASE-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anastasia Braginsky updated HBASE-17343: Attachment: HBASE-17343-V09.patch > Make Compacting Memstore default in 2.0 with BASIC as the default type > -- > > Key: HBASE-17343 > URL: https://issues.apache.org/jira/browse/HBASE-17343 > Project: HBase > Issue Type: New Feature > Components: regionserver >Affects Versions: 2.0.0 >Reporter: ramkrishna.s.vasudevan >Assignee: Anastasia Braginsky >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17343-V01.patch, HBASE-17343-V02.patch, > HBASE-17343-V04.patch, HBASE-17343-V05.patch, HBASE-17343-V06.patch, > HBASE-17343-V07.patch, HBASE-17343-V08.patch, HBASE-17343-V09.patch > > > FYI [~anastas], [~eshcar] and [~ebortnik]. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003375#comment-16003375 ] Andrew Purtell commented on HBASE-11013: This looks useful for branch-1. Anyone interested in doing a backport? > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17887) TestAcidGuarantees fails frequently
[ https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003343#comment-16003343 ] Chia-Ping Tsai commented on HBASE-17887: bq. One existing problem was that I think this scanner was not getting closed and only getting removed. see HBASE-18019 > TestAcidGuarantees fails frequently > --- > > Key: HBASE-17887 > URL: https://issues.apache.org/jira/browse/HBASE-17887 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Umesh Agashe >Assignee: Chia-Ping Tsai >Priority: Blocker > Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.4.1 > > Attachments: HBASE-17887.branch-1.v0.patch, > HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, > HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, > HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, > HBASE-17887.ut.patch, HBASE-17887.v0.patch, HBASE-17887.v1.patch, > HBASE-17887.v2.patch, HBASE-17887.v3.patch > > > As per the flaky tests dashboard here: > https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html, > It fails 30% of the time. > While working on HBASE-17863, a few verification builds on patch failed due > to TestAcidGuarantees didn't pass. IMHO, the changes for HBASE-17863 are > unlikely to affect get/ put path. > I ran the test with and without the patch several times locally and found > that TestAcidGuarantees fails without the patch similar number of times. > Opening blocker, considering acid guarantees are critical to HBase. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18019) Clear redundant memstore scanners
[ https://issues.apache.org/jira/browse/HBASE-18019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18019: --- Summary: Clear redundant memstore scanners (was: Clear redundant memstore scanner) > Clear redundant memstore scanners > - > > Key: HBASE-18019 > URL: https://issues.apache.org/jira/browse/HBASE-18019 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > > HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now returns > multiple KeyValueScanners covering the active segment, snapshot, and pipeline. But > StoreScanner removes only one memstore scanner when refreshing its current scanners. > {code} > for (int i = 0; i < currentScanners.size(); i++) { > if (!currentScanners.get(i).isFileScanner()) { > currentScanners.remove(i); > break; > } > } > {code} > The older scanners kept in the StoreScanner will hinder GC from releasing > memory and lead to multiple scans over the same data. > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-18019) Clear redundant memstore scanner
Chia-Ping Tsai created HBASE-18019: -- Summary: Clear redundant memstore scanner Key: HBASE-18019 URL: https://issues.apache.org/jira/browse/HBASE-18019 Project: HBase Issue Type: Improvement Affects Versions: 2.0.0 Reporter: Chia-Ping Tsai Assignee: Chia-Ping Tsai Fix For: 2.0.0 HBASE-17655 removed the MemStoreScanner, so MemStore#getScanner(readpt) now returns multiple KeyValueScanners covering the active segment, snapshot, and pipeline. But StoreScanner removes only one memstore scanner when refreshing its current scanners. {code} for (int i = 0; i < currentScanners.size(); i++) { if (!currentScanners.get(i).isFileScanner()) { currentScanners.remove(i); break; } } {code} The older scanners kept in the StoreScanner will hinder GC from releasing memory and lead to multiple scans over the same data. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
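A simplified sketch of the fix direction: remove (and close) every memstore scanner rather than breaking out after the first, as the quoted loop does. KeyValueScanner below is a minimal stand-in for the HBase interface, and closing on removal reflects the related observation in HBASE-17887 that scanners were removed without being closed:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ScannerRefreshSketch {
    /** Minimal stand-in for the HBase KeyValueScanner interface. */
    interface KeyValueScanner {
        boolean isFileScanner();
        void close();
    }

    /**
     * Unlike the quoted loop, which removes one scanner and breaks, this
     * removes and closes every non-file scanner so no stale memstore
     * scanner keeps old segments reachable.
     */
    static void clearMemstoreScanners(List<KeyValueScanner> currentScanners) {
        for (Iterator<KeyValueScanner> it = currentScanners.iterator(); it.hasNext();) {
            KeyValueScanner scanner = it.next();
            if (!scanner.isFileScanner()) {
                scanner.close();  // release memstore references so GC can reclaim them
                it.remove();      // iterator removal is safe mid-iteration
            }
        }
    }

    /** Helper producing a dummy scanner for the sketch. */
    static KeyValueScanner scanner(final boolean file) {
        return new KeyValueScanner() {
            public boolean isFileScanner() { return file; }
            public void close() { /* no-op in the sketch */ }
        };
    }

    public static void main(String[] args) {
        List<KeyValueScanner> scanners = new ArrayList<>();
        // Three memstore-backed scanners (active, snapshot, pipeline) plus one file scanner.
        scanners.add(scanner(false));
        scanners.add(scanner(false));
        scanners.add(scanner(false));
        scanners.add(scanner(true));
        clearMemstoreScanners(scanners);
        System.out.println(scanners.size()); // only the file scanner remains
    }
}
```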
[jira] [Updated] (HBASE-18017) Reduce frequency of setStoragePolicy failure warnings
[ https://issues.apache.org/jira/browse/HBASE-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-18017: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0 Status: Resolved (was: Patch Available) > Reduce frequency of setStoragePolicy failure warnings > - > > Key: HBASE-18017 > URL: https://issues.apache.org/jira/browse/HBASE-18017 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-18017.patch, HBASE-18017.patch > > > When running with storage policy specification support if the underlying HDFS > doesn't support it or if it has been disabled in site configuration the > resulting logging is excessive. Log at WARN level once per FileSystem > instance. Otherwise, log messages at DEBUG level. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-18017) Reduce frequency of setStoragePolicy failure warnings
[ https://issues.apache.org/jira/browse/HBASE-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-18017: --- Attachment: HBASE-18017.patch > Reduce frequency of setStoragePolicy failure warnings > - > > Key: HBASE-18017 > URL: https://issues.apache.org/jira/browse/HBASE-18017 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Attachments: HBASE-18017.patch, HBASE-18017.patch > > > When running with storage policy specification support if the underlying HDFS > doesn't support it or if it has been disabled in site configuration the > resulting logging is excessive. Log at WARN level once per FileSystem > instance. Otherwise, log messages at DEBUG level. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18017) Reduce frequency of setStoragePolicy failure warnings
[ https://issues.apache.org/jira/browse/HBASE-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003212#comment-16003212 ] Andrew Purtell commented on HBASE-18017: [~carp84] Thanks for the review. I can make "Set storagePolicy=xxx for path=xxx" DEBUG level instead of TRACE. Yes, it is frequently logged. > Reduce frequency of setStoragePolicy failure warnings > - > > Key: HBASE-18017 > URL: https://issues.apache.org/jira/browse/HBASE-18017 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: Andrew Purtell >Assignee: Andrew Purtell >Priority: Minor > Attachments: HBASE-18017.patch > > > When running with storage policy specification support if the underlying HDFS > doesn't support it or if it has been disabled in site configuration the > resulting logging is excessive. Log at WARN level once per FileSystem > instance. Otherwise, log messages at DEBUG level. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17874) Limiting of read request response size based on block size may go wrong when blocks are read from onheap or off heap bucket cache
[ https://issues.apache.org/jira/browse/HBASE-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003182#comment-16003182 ] stack commented on HBASE-17874: --- Ok. How do we know this fix is working, though? > Limiting of read request response size based on block size may go wrong when > blocks are read from onheap or off heap bucket cache > - > > Key: HBASE-17874 > URL: https://issues.apache.org/jira/browse/HBASE-17874 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17874.patch > > > HBASE-14978 added this size limiting so as to make sure that multi read > requests do not retain too many blocks. This works well when the blocks are > obtained from anywhere other than a memory-mode BucketCache. In the case of an on-heap > or off-heap BucketCache, the entire cache area is split into N > ByteBuffers, each of size 4 MB. When we hit a block in this cache, we no > longer copy the data into a temp array. We use the same shared memory (BB). Its > capacity is 4 MB. > The block size accounting logic in RSRpcServices is like below > {code} > if (c instanceof ByteBufferCell) { > ByteBufferCell bbCell = (ByteBufferCell) c; > ByteBuffer bb = bbCell.getValueByteBuffer(); > if (bb != lastBlock) { > context.incrementResponseBlockSize(bb.capacity()); > lastBlock = bb; > } > } else { > // We're using the last block being the same as the current block as > // a proxy for pointing to a new block. This won't be exact. > // If there are multiple gets that bounce back and forth > // Then it's possible that this will over count the size of > // referenced blocks. However it's better to over count and > // use two rpcs than to OOME the regionserver. 
> byte[] valueArray = c.getValueArray(); > if (valueArray != lastBlock) { > context.incrementResponseBlockSize(valueArray.length); > lastBlock = valueArray; > } > } > {code} > We take the BBCell's value buffer and take its capacity. The cell is backed > by the same BB that backs the HFileBlock. When the HFileBlock is created from > the BC, we do as below, duplicating and properly positioning and limiting the BB > {code} > ByteBuffer bb = buffers[i].duplicate(); > if (i == startBuffer) { > cnt = bufferSize - startBufferOffset; > if (cnt > len) cnt = len; > bb.limit(startBufferOffset + cnt).position(startBufferOffset); > {code} > Still, this BB's capacity is 4 MB. > This makes the size limit breach happen too soon. The block size defaults to 64 KB, > so what we expect here is to allow cells from different blocks to appear in the > response. We have a way to check whether we move from one > block to the next. > {code} > if (bb != lastBlock) { > ... > lastBlock = bb; > } > {code} > But just by considering the 1st cell, we have already added 4 MB! -- This message was sent by Atlassian JIRA (v6.3.15#6346)
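The over-counting the report describes can be reproduced with plain java.nio: a duplicated, position/limit-restricted ByteBuffer still reports the capacity of its whole backing segment. A limit-aware measure such as remaining() is one possible accounting direction; this is an illustration of the symptom, not necessarily the committed fix:

```java
import java.nio.ByteBuffer;

public class BlockSizeAccounting {
    public static void main(String[] args) {
        // One 4 MB backing buffer, as in a memory-mode BucketCache segment.
        ByteBuffer segment = ByteBuffer.allocate(4 * 1024 * 1024);

        // A 64 KB HFileBlock sliced out of it at some offset, mirroring the
        // duplicate/position/limit code quoted in the issue.
        int blockOffset = 128 * 1024;
        int blockSize = 64 * 1024;
        ByteBuffer block = segment.duplicate();
        block.limit(blockOffset + blockSize);
        block.position(blockOffset);

        // capacity() still reports the whole backing segment ...
        System.out.println(block.capacity());  // 4194304
        // ... while remaining() reflects the actual block extent.
        System.out.println(block.remaining()); // 65536
    }
}
```

So accounting with `bb.capacity()` charges 4 MB for the very first cell served from the cache, tripping the response-size limit after one block instead of many.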
[jira] [Updated] (HBASE-17786) Create LoadBalancer perf-tests (test balancer algorithm decoupled from workload)
[ https://issues.apache.org/jira/browse/HBASE-17786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Umesh Agashe updated HBASE-17786: - Release Note: $ bin/hbase org.apache.hadoop.hbase.master.balancer.LoadBalancerPerformanceEvaluation -help usage: hbase org.apache.hadoop.hbase.master.balancer.LoadBalancerPerformanceEvaluation Options: -regions Number of regions to consider by load balancer. Default: 100 -servers Number of servers to consider by load balancer. Default: 1000 -load_balancer Type of Load Balancer to use. Default: org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer > Create LoadBalancer perf-tests (test balancer algorithm decoupled from > workload) > > > Key: HBASE-17786 > URL: https://issues.apache.org/jira/browse/HBASE-17786 > Project: HBase > Issue Type: Sub-task > Components: Balancer, proc-v2 >Reporter: stack >Assignee: Umesh Agashe > Labels: beginner > Fix For: 2.0.0 > > Attachments: HBASE-17786.001.patch, HBASE-17786.002.patch > > > (Below is a quote from [~mbertozzi] taken from an internal issue that I'm > moving out here) > Add perf tools and keep monitored balancer performance (a BalancerPE-type > thing). > Most of the balancers should be instantiable without requiring a > mini-cluster, and it is easy to create tons of RegionInfo and ServerNames with a > for loop. > The balancer is just creating a map RegionInfo:ServerName. > There are two methods to test, roundRobinAssignment() and retainAssignment() > {code} > Map> roundRobinAssignment( > List regions, > List servers > ) throws HBaseIOException; > Map> retainAssignment( > Map regions, > List servers > ) throws HBaseIOException; > {code} > There are a bunch of obvious optimizations that everyone can see just by > looking at the code (like replacing an array with a set when we do > contains/remove operations). It will be nice to have a baseline and start > improving from there. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
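The quoted idea, generating inputs with a for loop and timing the assignment with no mini-cluster, can be sketched without any HBase dependency. This is a hypothetical stand-in, not the actual LoadBalancerPerformanceEvaluation code: String stands in for RegionInfo and ServerName, and roundRobin() only mirrors the map-building shape of roundRobinAssignment().

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal stand-in sketch (not the actual HBase API): regions and servers
// are plain strings; roundRobin() mirrors the server -> region-list map that
// LoadBalancer.roundRobinAssignment() produces, so a perf baseline only
// needs a for loop to generate inputs.
public class BalancerPerfSketch {
    static Map<String, List<String>> roundRobin(List<String> regions, List<String> servers) {
        Map<String, List<String>> plan = new HashMap<>();
        for (String s : servers) plan.put(s, new ArrayList<>());
        int i = 0;
        for (String region : regions) {
            // Deal regions out to servers in turn, like dealing cards.
            plan.get(servers.get(i % servers.size())).add(region);
            i++;
        }
        return plan;
    }

    public static void main(String[] args) {
        List<String> regions = new ArrayList<>();
        List<String> servers = new ArrayList<>();
        for (int r = 0; r < 100; r++) regions.add("region-" + r);   // cf. -regions 100
        for (int s = 0; s < 10; s++) servers.add("server-" + s);
        long start = System.nanoTime();
        Map<String, List<String>> plan = roundRobin(regions, servers);
        System.out.printf("assigned %d regions to %d servers in %d us%n",
            regions.size(), plan.size(), (System.nanoTime() - start) / 1000);
    }
}
```

Swapping roundRobin() for a call into a real balancer instance, and scaling the two loops, gives the baseline the quote asks for.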
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003144#comment-16003144 ] Hadoop QA commented on HBASE-16993: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s {color} | {color:red} HBASE-16993 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12843488/HBASE-16993.master.005.patch | | JIRA Issue | HBASE-16993 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/6739/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. 
> BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, > HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, > HBASE-16993.master.005.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml setting > > hbase.bucketcache.bucket.sizes > 16384,32768,40960, > 46000,49152,51200,65536,131072,524288 > > > hbase.bucketcache.size > 16384 > > > hbase.bucketcache.ioengine > offheap > > > hfile.block.cache.size > 0.3 > > > hfile.block.bloom.cacheonwrite > true > > > hbase.rs.cacheblocksonwrite > true > > > hfile.block.index.cacheonwrite > true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: > Failed reading block 
fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScan
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003132#comment-16003132 ] Anoop Sam John commented on HBASE-16993: A change in mind. When we have a file backed BC, its size can be really large. There are usages of BC with really large sizes. That means there are more bucket entries, so every saving in heap size overhead is welcome. Here we save 3 bytes per entry. So better to continue with the current way. I have raised another issue to check the possibility of reducing sizes wherever possible. e.g.: such possibilities include, instead of having a ref variable to an enum, keeping the type as a byte. Like that.. Just saying. So we can fix this issue with 1. Proper documentation of what the possible bucket sizes are. 2. Having a validation for the sizes when users configure them. Throw an exception when any of the sizes is invalid (not a multiple of 256), or just align those entries to a correct value that is a multiple of 256 (ceil) and a proper LOG wdyt? 
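The validation proposed in point 2 amounts to a multiple-of-256 check plus a ceil alignment. A minimal sketch of that idea, with hypothetical names (not the actual BucketAllocator API):

```java
// Sketch of the validation idea from the comment above (class and method
// names are hypothetical, not HBase's): either reject a configured bucket
// size that is not a multiple of 256, or round it up and log a warning.
public class BucketSizeCheck {
    static int alignTo256(int size) {
        // Ceil to the next multiple of 256 (256 is a power of two, so
        // masking off the low 8 bits after adding 255 rounds up).
        return (size + 255) & ~255;
    }

    static boolean isValid(int size) {
        return size > 0 && (size & 255) == 0;
    }

    public static void main(String[] args) {
        // A subset of the sizes from the reporter's hbase-site.xml; 46000
        // is the invalid one (not a multiple of 256).
        int[] configured = {16384, 32768, 46000, 49152};
        for (int s : configured) {
            if (!isValid(s)) {
                // The "align and LOG" alternative to throwing an exception.
                System.out.println("WARN bucket size " + s
                    + " is not a multiple of 256; using " + alignTo256(s));
            }
        }
    }
}
```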
> BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, > HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, > HBASE-16993.master.005.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml setting > > hbase.bucketcache.bucket.sizes > 16384,32768,40960, > 46000,49152,51200,65536,131072,524288 > > > hbase.bucketcache.size > 16384 > > > hbase.bucketcache.ioengine > offheap > > > hfile.block.cache.size > 0.3 > > > hfile.block.bloom.cacheonwrite > true > > > hbase.rs.cacheblocksonwrite > true > > > hfile.block.index.cacheonwrite > true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: > Failed reading block 
fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.
[jira] [Comment Edited] (HBASE-18016) Implement abort for TruncateTableProcedure
[ https://issues.apache.org/jira/browse/HBASE-18016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003112#comment-16003112 ] Umesh Agashe edited comment on HBASE-18016 at 5/9/17 5:29 PM: -- Thanks for your comments [~stack]! I have created a separate JIRA HBASE-18018 to change the default behavior for supporting abort of all procedures even if rollback is not supported/ implemented. Once we change the default behavior, we can change TruncateTableProcedure to fallback on default behavior. was (Author: uagashe): Thanks for your comments @stack! I have created a separate JIRA HBASE-18018 to change the default behavior for supporting abort of all procedures even if rollback is not supported/ implemented. Once we change the default behavior, we can change TruncateTableProcedure to fallback on default behavior. > Implement abort for TruncateTableProcedure > -- > > Key: HBASE-18016 > URL: https://issues.apache.org/jira/browse/HBASE-18016 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Fix For: 2.0.0 > > > TruncateTableProcedure can not be aborted as abort is not implemented. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-18016) Implement abort for TruncateTableProcedure
[ https://issues.apache.org/jira/browse/HBASE-18016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003112#comment-16003112 ] Umesh Agashe commented on HBASE-18016: -- Thanks for your comments @stack! I have created a separate JIRA HBASE-18018 to change the default behavior for supporting abort of all procedures even if rollback is not supported/ implemented. Once we change the default behavior, we can change TruncateTableProcedure to fallback on default behavior. > Implement abort for TruncateTableProcedure > -- > > Key: HBASE-18016 > URL: https://issues.apache.org/jira/browse/HBASE-18016 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Reporter: Umesh Agashe >Assignee: Umesh Agashe > Fix For: 2.0.0 > > > TruncateTableProcedure can not be aborted as abort is not implemented. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-18018) Support abort for all procedures by default
Umesh Agashe created HBASE-18018: Summary: Support abort for all procedures by default Key: HBASE-18018 URL: https://issues.apache.org/jira/browse/HBASE-18018 Project: HBase Issue Type: Sub-task Components: proc-v2 Reporter: Umesh Agashe Assignee: Umesh Agashe Changes the default behavior of StateMachineProcedure to support aborting all procedures even if rollback is not supported. On abort, the procedure is treated as failed and rollback is called, but for procedures which cannot be rolled back, abort is currently ignored. This sometimes causes a procedure to get stuck in a waiting state forever. Users should have an option to abort any stuck procedure and clean up manually. Please refer to HBASE-18016 and the discussion there. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
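The behavior change described in HBASE-18018 can be modeled in a few lines. This is a toy model, not the actual StateMachineProcedure API: it only contrasts the current behavior (abort ignored when rollback is unsupported, so the procedure may wait forever) with the proposed default (abort always honored, procedure treated as failed).

```java
// Toy model (hypothetical names, not the real procedure framework) of the
// default-behavior change proposed in HBASE-18018.
public class AbortSketch {
    enum State { RUNNING, WAITING, FAILED }

    static State abort(State current, boolean rollbackSupported, boolean newDefault) {
        if (rollbackSupported || newDefault) {
            // Abort honored: procedure is treated as failed (rollback is
            // attempted where it exists; cleanup is manual otherwise).
            return State.FAILED;
        }
        // Current behavior: abort is ignored, so a WAITING procedure can
        // stay stuck in that state forever.
        return current;
    }

    public static void main(String[] args) {
        System.out.println(abort(State.WAITING, false, false)); // stuck
        System.out.println(abort(State.WAITING, false, true));  // proposed
    }
}
```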
[jira] [Updated] (HBASE-17928) Shell tool to clear compact queues
[ https://issues.apache.org/jira/browse/HBASE-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17928: --- Attachment: 17928-v5.patch Re-attaching patch v5 which didn't go thru QA. > Shell tool to clear compact queues > -- > > Key: HBASE-17928 > URL: https://issues.apache.org/jira/browse/HBASE-17928 > Project: HBase > Issue Type: New Feature > Components: Compaction, Operability >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng > Fix For: 2.0.0 > > Attachments: 17928-v5.patch, HBASE-17928-branch-1-v1.patch, > HBASE-17928-branch-1-v2.patch, HBASE-17928-v1.patch, HBASE-17928-v2.patch, > HBASE-17928-v3.patch, HBASE-17928-v4.patch, HBASE-17928-v5.patch > > > scenario: > 1. Compact a table by mistake > 2. Compaction is not completed within the specified time period > In this case, clearing the queue is a better choice, so as not to affect the > stability of the cluster -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002981#comment-16002981 ] Ted Yu commented on HBASE-11013: Integrated to master branch. Thanks for the patch, Zheng. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-11013: --- Hadoop Flags: Reviewed Fix Version/s: 2.0.0 > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002948#comment-16002948 ] Zheng Hu commented on HBASE-11013: -- Attached release note. Any concerns? Thanks. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-11013: - Release Note: While creating a snapshot, it will save permissions of the original table into an additional file named .aclinfo, which is in the snapshot root directory. For the clone_snapshot command, we provide an additional option (RESTORE_ACL) to decide whether we will grant permissions of the original table to the newly created table. > Clone Snapshots on Secure Cluster Should provide option to apply Retained > User Permissions > -- > > Key: HBASE-11013 > URL: https://issues.apache.org/jira/browse/HBASE-11013 > Project: HBase > Issue Type: Improvement > Components: snapshots >Reporter: Ted Yu >Assignee: Zheng Hu > Attachments: HBASE-11013.v1.patch, HBASE-11013.v2.patch > > > Currently, > {code} > sudo su - test_user > create 't1', 'f1' > sudo su - hbase > snapshot 't1', 'snap_one' > clone_snapshot 'snap_one', 't2' > {code} > In this scenario the user - test_user would not have permissions for the > clone table t2. > We need to add improvement feature such that the permissions of the original > table are recorded in snapshot metadata and an option is provided for > applying them to the new table as part of the clone process. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Resolved] (HBASE-17999) Pyspark HBase Connector
[ https://issues.apache.org/jira/browse/HBASE-17999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser resolved HBASE-17999. Resolution: Invalid Please ask questions on the user mailing list: u...@hbase.apache.org > Pyspark HBase Connector > --- > > Key: HBASE-17999 > URL: https://issues.apache.org/jira/browse/HBASE-17999 > Project: HBase > Issue Type: Brainstorming > Components: API >Affects Versions: 1.2.4 > Environment: Centos7, Python >Reporter: Waqar Muhammad > > Is there a way/connector to connect hbase from pyspark and perform queries? > Is there any official documentation for that? Would be awesome if someone > could point me in the right direction > Thanks in advance -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-11013) Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
[ https://issues.apache.org/jira/browse/HBASE-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002864#comment-16002864 ] Hadoop QA commented on HBASE-11013: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 1s {color} | {color:blue} rubocop was not available. {color} | | {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 1s {color} | {color:blue} Ruby-lint was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. 
{color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 26s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 15m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 6s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 14m 57s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | 
{color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 57s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 14s {color} | {color:green} hbase-shell in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 11s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 206m 51s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.client.TestAsyncNonMetaRegionLocator | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:y
[jira] [Commented] (HBASE-18012) Move RpcServer.Connection to a separated file
[ https://issues.apache.org/jira/browse/HBASE-18012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002860#comment-16002860 ] Hadoop QA commented on HBASE-18012: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 50s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s {color} | {color:blue} Maven dependency ordering for patch {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 41s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s {color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 42s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s {color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 131m 22s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | org.apache.hadoop.hbase.ipc.SimpleServerRpcConnection.readAndProcess() does not release lock on all exception paths At SimpleServerRpcConnection.java:on all exception paths At SimpleServerRpcConnection.java:[line 269] | | Timed out junit tests | org.apache.hadoop.hbase.security.access.TestScanEarlyTermination | | | org.apache.hadoop.hbase.security.access.TestCoprocessorWhitelistMasterObserver | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867103/HBASE-18012.patch | | JIRA Issue | HBASE-18012 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 56c8f27274a6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/d
[jira] [Commented] (HBASE-16356) REST API scanner: row prefix filter and custom filter parameters are mutually exclusive
[ https://issues.apache.org/jira/browse/HBASE-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002828#comment-16002828 ] Ted Yu commented on HBASE-16356: The change in TestTableScan.java doesn't apply to branch-1. Mind attaching a patch for branch-1 ? > REST API scanner: row prefix filter and custom filter parameters are mutually > exclusive > --- > > Key: HBASE-16356 > URL: https://issues.apache.org/jira/browse/HBASE-16356 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 1.1.2 > Environment: Not environment specific (tested on HDP 2.4.2) >Reporter: Bjorn Olsen >Assignee: Ben Watson >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: TableResource-HBASE-16356.patch > > > A user can optionally specify a row PrefixFilter, or a list of custom > filters, to the REST API scanner. > Prefix filter example: > /123*?startrow=0&endrow=9 > Custom filters example: > /*?startrow=0&endrow=9&filter=RowFilter(=,'substring:456) > This works when specified separately, like above. > However, specifying both a prefix filter and a list of custom filters causes > the API to ignore the prefix filter. > Example using both parameters: > /123*?startrow=0&endrow=9&filter=RowFilter(=,'substring:456) > It appears that code in the TableResource.getScanResource function is causing > this issue as follows: > (see > https://hbase.apache.org/devapidocs/src-html/org/apache/hadoop/hbase/rest/TableResource.html#line.196 > ) > if (filterList != null) { > tableScan.setFilter(filterList); /*comes from custom filters parameter*/ > } else if (filter != null) { > tableScan.setFilter(filter); > /*comes from row prefix parameter*/ > } > This should probably be changed to use a single filterList for both > parameters. The prefix filter can be "Popped" onto the filter list and then > these parameters will work even when both are specified. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-16356) REST API scanner: row prefix filter and custom filter parameters are mutually exclusive
[ https://issues.apache.org/jira/browse/HBASE-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16356: --- Summary: REST API scanner: row prefix filter and custom filter parameters are mutually exclusive (was: REST API scanner: row prefix filter and custom filter parameter are mutually exclusive)
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-16356) REST API scanner: row prefix filter and custom filter parameter are mutually exclusive
[ https://issues.apache.org/jira/browse/HBASE-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-16356: --- Hadoop Flags: Reviewed Fix Version/s: 1.4.0 2.0.0
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16356) REST API scanner: row prefix filter and custom filter parameter are mutually exclusive
[ https://issues.apache.org/jira/browse/HBASE-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002806#comment-16002806 ] Ben Watson commented on HBASE-16356: Is there a process I need to follow to get this progressed?
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16814) FuzzyRowFilter causes remote call timeout
[ https://issues.apache.org/jira/browse/HBASE-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002781#comment-16002781 ] Hadi Kahraman commented on HBASE-16814: --- Thanks.
> FuzzyRowFilter causes remote call timeout
> ---
> Key: HBASE-16814
> URL: https://issues.apache.org/jira/browse/HBASE-16814
> Project: HBase
> Issue Type: Bug
> Components: Client
> Affects Versions: 1.2.2, 1.2.3
> Environment: LinuxMint 17.3 (=Ubuntu 14.04), Java 1.8
> Reporter: Hadi Kahraman
>
> FuzzyRowFilter causes ResultScanner.next to hang and time out. The same code works well on hbase 1.2.1, 1.2.0, 1.1.4.
> hbase server: cloudera 5.7.0 (hbase 1.2.0) on 4 hosts, 1 master, 3 workers
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
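For context on what the filter in this report computes: FuzzyRowFilter matches rows against (pattern, mask) pairs, where a mask byte of 0 pins the corresponding row byte to the pattern and a non-zero mask byte makes that position a wildcard. A minimal stand-in sketch of that matching rule (not the HBase class itself, which also produces seek hints, and whose hang reported here is a server-side regression, not a semantics change):

```java
public class FuzzyMatchSketch {
    // Simplified FuzzyRowFilter semantics: mask[i] == 0 means row[i] must
    // equal pattern[i]; any other mask value means "don't care" at i.
    static boolean fuzzyMatch(byte[] row, byte[] pattern, byte[] mask) {
        if (row.length < pattern.length) return false;
        for (int i = 0; i < pattern.length; i++) {
            if (mask[i] == 0 && row[i] != pattern[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] pattern = "??key1".getBytes();
        byte[] mask    = {1, 1, 0, 0, 0, 0};  // first two bytes are wildcards
        System.out.println(fuzzyMatch("ABkey1".getBytes(), pattern, mask));
        System.out.println(fuzzyMatch("ABkey2".getBytes(), pattern, mask));
    }
}
```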
[jira] [Commented] (HBASE-15199) Move jruby jar so only on hbase-shell module classpath; currently globally available
[ https://issues.apache.org/jira/browse/HBASE-15199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002770#comment-16002770 ] Sean Busbey commented on HBASE-15199: - [~water] nothing else needed, thanks for the fast turnaround. I got a bit bogged down in other things, but I'll push this addendum later today.
> Move jruby jar so only on hbase-shell module classpath; currently globally available
> ---
> Key: HBASE-15199
> URL: https://issues.apache.org/jira/browse/HBASE-15199
> Project: HBase
> Issue Type: Task
> Components: dependencies, jruby, shell
> Reporter: stack
> Assignee: Xiang Li
> Priority: Critical
> Fix For: 2.0.0
> Attachments: 15199.txt, HBASE-15199-addendum.master.000.patch, HBASE-15199.master.001.patch, HBASE-15199.master.002.patch, HBASE-15199.master.003.patch
>
> A suggestion that came up out of an internal issue (filed by Mr Jan Van Besien) was to move the scope of the jruby include down so it is only a dependency for hbase-shell. The jruby jar brings in a bunch of dependencies (joda time, for example) which can clash with the includes of others. Our Sean suggests it could be good to shut down exploit possibilities if jruby was not globally available. The only downside I can think of is that it may no longer be available to our bin/*rb scripts if we move the jar, but perhaps these can be changed so they can find the ruby jar in its new location.
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17887) TestAcidGuarantees fails frequently
[ https://issues.apache.org/jira/browse/HBASE-17887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002760#comment-16002760 ] Chia-Ping Tsai commented on HBASE-17887:
bq. should we really have the currentScanners?
Yes, we need the currentScanners.
bq. If so then the point above of closing it does not apply.
I will fix it in the next patch.
bq. Because in the patch that you have given ScannerTicket has to be exposed to Store.java as it is LimitedPrivate and Coprocs may need to know about that. Instead passing a list of scanners may be much simpler and easy to comprehend?
You are right.
bq. Can we just pass on a list of memstoreScanners to the getScanners API in Store along with the files over which the scan has to be created.
Pardon me, could you tell me more details? We could get rid of the ticket by passing a list of memstoreScanners on to ChangedReadersObserver.
{code}
/**
 * Notify observers.
 * @throws IOException e
 */
void updateReaders(List<StoreFile> sfs, List<KeyValueScanner> memStoreScanners) throws IOException;
{code}
StoreScanner then updates only the file scanners in resetScannerStack. Thanks for your suggestion.
[~ram_krish]
> TestAcidGuarantees fails frequently
> ---
> Key: HBASE-17887
> URL: https://issues.apache.org/jira/browse/HBASE-17887
> Project: HBase
> Issue Type: Bug
> Components: regionserver
> Affects Versions: 2.0.0
> Reporter: Umesh Agashe
> Assignee: Chia-Ping Tsai
> Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.4.1
> Attachments: HBASE-17887.branch-1.v0.patch, HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v1.patch, HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v2.patch, HBASE-17887.branch-1.v3.patch, HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, HBASE-17887.branch-1.v4.patch, HBASE-17887.ut.patch, HBASE-17887.v0.patch, HBASE-17887.v1.patch, HBASE-17887.v2.patch, HBASE-17887.v3.patch
>
> As per the flaky tests dashboard here: https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html, it fails 30% of the time.
> While working on HBASE-17863, a few verification builds on the patch failed because TestAcidGuarantees didn't pass. IMHO, the changes for HBASE-17863 are unlikely to affect the get/put path.
> I ran the test with and without the patch several times locally and found that TestAcidGuarantees fails without the patch a similar number of times.
> Opening as a blocker, considering acid guarantees are critical to HBase.
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
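The interface change discussed in the comment above, notifying ChangedReadersObserver with both the newly flushed files and the scanners over the remaining memstore, can be sketched with stand-in types (the real HBase types are StoreFile and KeyValueScanner; everything below is illustrative, not the actual patch):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins for the real HBase types (illustrative only).
class StoreFile { final String name; StoreFile(String n) { name = n; } }
class MemStoreScanner { }

// Sketch of the extended observer: a flush hands the observer the flushed
// files AND the current memstore scanners in one call, so no separate
// "scanner ticket" object needs to be exposed through Store.
interface ChangedReadersObserver {
    void updateReaders(List<StoreFile> sfs, List<MemStoreScanner> memStoreScanners);
}

public class ObserverSketch {
    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        // A StoreScanner-like observer: on notification it would rebuild only
        // its file scanners (resetScannerStack) and keep the memstore scanners.
        ChangedReadersObserver scanner = (sfs, mems) ->
            seen.add(sfs.size() + " files, " + mems.size() + " memstore scanners");
        // Simulate a flush notifying the observer.
        scanner.updateReaders(List.of(new StoreFile("hfile-1")),
                              List.of(new MemStoreScanner()));
        System.out.println(seen.get(0));
    }
}
```

The design point being debated: passing plain lists keeps the LimitedPrivate coprocessor surface unchanged, whereas a ticket type would have to leak into Store.java.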