[jira] [Commented] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-10 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608808#comment-16608808
 ] 

ramkrishna.s.vasudevan commented on HBASE-21102:


[~huaxiang] and [~brfrn169]
Thanks for the reviews and your time. Looking at them now. Will update patch 
shortly and commit it to respective branches. 

> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no
> guarantee that the target server chosen for the replica region assignment
> does not already host another replica of the same region. Today the
> assignment is done randomly, and the load balancer later identifies these
> cases and issues MOVE operations for such regions. It would be better to
> pick target servers up front, at least minimally ensuring that replicas are
> not colocated.
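
A minimal sketch of the idea discussed here, not the committed patch: when picking a
target for a replica region, drop candidate servers that already host another replica
of the same region, and fall back to the full list if no such server exists. The method
name, the currentAssignments map, and the use of RegionReplicaUtil.isReplicasForSameRegion
as the colocation check are assumptions for illustration.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionReplicaUtil;

public final class ReplicaAwareTargetSelection {
  /** Prefer servers that do not already host another replica of the given region. */
  static List<ServerName> filterCandidates(RegionInfo region, List<ServerName> candidates,
      Map<ServerName, List<RegionInfo>> currentAssignments) {
    List<ServerName> preferred = new ArrayList<>();
    for (ServerName server : candidates) {
      boolean hostsOtherReplica = currentAssignments
          .getOrDefault(server, Collections.emptyList()).stream()
          .anyMatch(r -> RegionReplicaUtil.isReplicasForSameRegion(r, region));
      if (!hostsOtherReplica) {
        preferred.add(server);
      }
    }
    // If every candidate already hosts a replica, fall back to the full list so the
    // region still gets assigned; the balancer can fix the colocation later.
    return preferred.isEmpty() ? candidates : preferred;
  }
}
{code}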



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-10 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608809#comment-16608809
 ] 

Duo Zhang commented on HBASE-21052:
---

Next time please do not set an explicit timeout value on the test method; the
timeout is controlled by the framework. If you want to assert on timing, please
use waitFor so it is clear to others which operation is the culprit.

And does this also affect branch-2.1 and branch-2.0?
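
For reference, a minimal sketch of that pattern, assuming a test with an
HBaseTestingUtility named TEST_UTIL and an Admin handle; this is illustrative only,
not the committed test change:

{code:java}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.CompactionState;

public class CompactionStateWaitExample {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void waitForCompactionToFinish(Admin admin, TableName table) throws Exception {
    // No hard-coded @Test(timeout = ...): the overall timeout is owned by the framework.
    // waitFor makes it obvious which condition never became true if the wait fails.
    TEST_UTIL.waitFor(60000, () -> admin.getCompactionState(table) == CompactionState.NONE);
  }
}
{code}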

> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take a hbase snapshot for the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the hbase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://<hostname>:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.Contex

[jira] [Updated] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21158:
--
Attachment: HBASE-21158.master.003.patch

> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:testcol1')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
>  testrowcolumn=f:testcol1, 
> timestamp=1536218550827, value=testvalue1 
>   
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when a row contains a cell with an empty
> qualifier, that empty-qualifier cell is always returned when using
> QualifierFilter.
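
For reference, the same scan through the Java client API (a sketch assuming a caller
supplies an open Connection; the table and qualifier names follow the shell example
above):

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class QualifierFilterRepro {
  // Scan with QualifierFilter(=, 'binary:testcol1'): only the f:testcol1 cell should be
  // returned, but before the fix the empty-qualifier cell f: comes back as well.
  static void scanWithQualifierFilter(Connection conn) throws IOException {
    Scan scan = new Scan().setFilter(
        new QualifierFilter(CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("testcol1"))));
    try (Table table = conn.getTable(TableName.valueOf("testTable"));
        ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        System.out.println(result);
      }
    }
  }
}
{code}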



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608817#comment-16608817
 ] 

Guangxu Cheng commented on HBASE-21158:
---

Attached 003 to fix some spelling mistakes. If QA does not report any warnings,
I will submit it. Thanks

> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:testcol1')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
>  testrowcolumn=f:testcol1, 
> timestamp=1536218550827, value=testvalue1 
>   
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when a row contains a cell with an empty
> qualifier, that empty-qualifier cell is always returned when using
> QualifierFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21178) Get and Scan operation with converter_class not working

2018-09-10 Thread Subrat Mishra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subrat Mishra updated HBASE-21178:
--
Attachment: HBASE-21178.master.001.patch

> Get and Scan operation with converter_class not working
> ---
>
> Key: HBASE-21178
> URL: https://issues.apache.org/jira/browse/HBASE-21178
> Project: HBase
>  Issue Type: Bug
>Reporter: Subrat Mishra
>Assignee: Subrat Mishra
>Priority: Major
> Attachments: HBASE-21178.master.001.patch
>
>
> Consider a simple scenario:
> {code:java}
> create 'foo', {NAME => 'f1'}
> put 'foo','r1','f1:a',1000
> get 'foo','r1',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']} 
> scan 'foo',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']}{code}
> Both get and scan fail with the following error:
> {code:java}
> ERROR: wrong number of arguments (3 for 1) {code}
> It looks like converter_method in table.rb has expected 3 arguments (bytes,
> offset, len) since version 2.0.0; prior to 2.0.0 it took only 1 argument
> (bytes).
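
To illustrate the mismatch on the Java side (a hedged illustration, not the shell's
actual JRuby dispatch code): some Bytes converters offer both a 1-argument and a
3-argument form, while Bytes.len only has the single-argument form, so calling it with
(bytes, offset, len) is exactly the "(3 for 1)" failure.

{code:java}
import org.apache.hadoop.hbase.util.Bytes;

public class ConverterArityExample {
  public static void main(String[] args) {
    byte[] value = Bytes.toBytes("1000");
    // toStringBinary has both arities, so a (bytes, offset, len) style call works:
    System.out.println(Bytes.toStringBinary(value));
    System.out.println(Bytes.toStringBinary(value, 0, value.length));
    // Bytes.len only takes a single byte[]; invoking it with three arguments
    // from the shell produces "wrong number of arguments (3 for 1)".
    System.out.println(Bytes.len(value));
  }
}
{code}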



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-10 Thread ramkrishna.s.vasudevan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-21102:
---
Attachment: HBASE-21102_4.patch

> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0, 2.2.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_4.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no
> guarantee that the target server chosen for the replica region assignment
> does not already host another replica of the same region. Today the
> assignment is done randomly, and the load balancer later identifies these
> cases and issues MOVE operations for such regions. It would be better to
> pick target servers up front, at least minimally ensuring that replicas are
> not colocated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-10 Thread ramkrishna.s.vasudevan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-21102:
---
Affects Version/s: 2.2.0
   Status: Open  (was: Patch Available)

> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0, 2.2.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_4.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no
> guarantee that the target server chosen for the replica region assignment
> does not already host another replica of the same region. Today the
> assignment is done randomly, and the load balancer later identifies these
> cases and issues MOVE operations for such regions. It would be better to
> pick target servers up front, at least minimally ensuring that replicas are
> not colocated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-10 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21143:
--
Fix Version/s: 2.0.3
   2.1.1

> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error.
> Since the machine cannot reach the external network, it goes through our
> internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 has
> been added to its blacklist and cannot be downloaded:
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should
> upgrade the version.
> {code:xml}
> <plugin>
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> </plugin>
> {code}
> Looking at the commit history, findbugs-maven-plugin was already upgraded to
> 3.0.4 in HBASE-18264, but one place was missed and still uses version 3.0.0.
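
Presumably the missed declaration just needs the same bump that HBASE-18264 applied
elsewhere, along the lines of the sketch below; the committed patch may instead
reference a shared version property, so treat the exact form as an assumption.

{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <version>3.0.4</version>
  <configuration>
    <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
  </configuration>
</plugin>
{code}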



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-10 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-21143:
---

> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error.
> Since the machine cannot reach the external network, it goes through our
> internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 has
> been added to its blacklist and cannot be downloaded:
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should
> upgrade the version.
> {code:xml}
> <plugin>
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> </plugin>
> {code}
> Looking at the commit history, findbugs-maven-plugin was already upgraded to
> 3.0.4 in HBASE-18264, but one place was missed and still uses version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-10 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-21143.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Cherry-picked to branch-2.1 & branch-2.0.

> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error.
> Since the machine cannot reach the external network, it goes through our
> internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 has
> been added to its blacklist and cannot be downloaded:
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should
> upgrade the version.
> {code:xml}
> <plugin>
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> </plugin>
> {code}
> Looking at the commit history, findbugs-maven-plugin was already upgraded to
> 3.0.4 in HBASE-18264, but one place was missed and still uses version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21075) Confirm that we can (rolling) upgrade from 2.0.x and 2.1.x to 2.2.x after HBASE-20881

2018-09-10 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608832#comment-16608832
 ] 

Duo Zhang commented on HBASE-21075:
---

Any progress here boss? [~stack]. I think it is time to roll a 2.1.1?

> Confirm that we can (rolling) upgrade from 2.0.x and 2.1.x to 2.2.x after 
> HBASE-20881
> -
>
> Key: HBASE-21075
> URL: https://issues.apache.org/jira/browse/HBASE-21075
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.1.1, 2.0.3
>
> Attachments: HBASE-21075.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21178) Get and Scan operation with converter_class not working

2018-09-10 Thread Subrat Mishra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subrat Mishra updated HBASE-21178:
--
Attachment: HBASE-21178.master.001.patch
Status: Patch Available  (was: Open)

I have attached the patch for the master branch.

> Get and Scan operation with converter_class not working
> ---
>
> Key: HBASE-21178
> URL: https://issues.apache.org/jira/browse/HBASE-21178
> Project: HBase
>  Issue Type: Bug
>Reporter: Subrat Mishra
>Assignee: Subrat Mishra
>Priority: Major
> Attachments: HBASE-21178.master.001.patch
>
>
> Consider a simple scenario:
> {code:java}
> create 'foo', {NAME => 'f1'}
> put 'foo','r1','f1:a',1000
> get 'foo','r1',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']} 
> scan 'foo',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']}{code}
> Both get and scan fail with the following error:
> {code:java}
> ERROR: wrong number of arguments (3 for 1) {code}
> It looks like converter_method in table.rb has expected 3 arguments (bytes,
> offset, len) since version 2.0.0; prior to 2.0.0 it took only 1 argument
> (bytes).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21178) Get and Scan operation with converter_class not working

2018-09-10 Thread Subrat Mishra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subrat Mishra updated HBASE-21178:
--
Attachment: (was: HBASE-21178.master.001.patch)

> Get and Scan operation with converter_class not working
> ---
>
> Key: HBASE-21178
> URL: https://issues.apache.org/jira/browse/HBASE-21178
> Project: HBase
>  Issue Type: Bug
>Reporter: Subrat Mishra
>Assignee: Subrat Mishra
>Priority: Major
> Attachments: HBASE-21178.master.001.patch
>
>
> Consider a simple scenario:
> {code:java}
> create 'foo', {NAME => 'f1'}
> put 'foo','r1','f1:a',1000
> get 'foo','r1',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']} 
> scan 'foo',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']}{code}
> Both get and scan fail with the following error:
> {code:java}
> ERROR: wrong number of arguments (3 for 1) {code}
> It looks like converter_method in table.rb has expected 3 arguments (bytes,
> offset, len) since version 2.0.0; prior to 2.0.0 it took only 1 argument
> (bytes).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-10 Thread ramkrishna.s.vasudevan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608838#comment-16608838
 ] 

ramkrishna.s.vasudevan commented on HBASE-21102:


Pushed to master and branch-2. Thanks for the reviews. 
[~Apache9]
Let me know if you want this fix in 2.1. This is a bug fix only. 

> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0, 2.2.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_4.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no
> guarantee that the target server chosen for the replica region assignment
> does not already host another replica of the same region. Today the
> assignment is done randomly, and the load balancer later identifies these
> cases and issues MOVE operations for such regions. It would be better to
> pick target servers up front, at least minimally ensuring that replicas are
> not colocated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21177) Add per-table metrics on getTime,putTime and scanTime

2018-09-10 Thread xijiawen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xijiawen updated HBASE-21177:
-
Attachment: HBASE-21177.patch

> Add per-table metrics on getTime,putTime and scanTime
> -
>
> Key: HBASE-21177
> URL: https://issues.apache.org/jira/browse/HBASE-21177
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.0.2
>Reporter: xijiawen
>Priority: Major
> Fix For: HBASE-14850
>
> Attachments: HBASE-21177.patch
>
>
> Adds getTime,putTime,SscanTime to the per-table metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21177) Add per-table metrics on getTime,putTime and scanTime

2018-09-10 Thread xijiawen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xijiawen updated HBASE-21177:
-
Description: Adds getTime,putTime,scanTime to the per-table metrics.  
(was: Adds getTime,putTime,SscanTime to the per-table metrics.)

> Add per-table metrics on getTime,putTime and scanTime
> -
>
> Key: HBASE-21177
> URL: https://issues.apache.org/jira/browse/HBASE-21177
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.0.2
>Reporter: xijiawen
>Priority: Major
> Fix For: HBASE-14850
>
> Attachments: HBASE-21177.patch
>
>
> Adds getTime,putTime,scanTime to the per-table metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-10 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608848#comment-16608848
 ] 

Duo Zhang commented on HBASE-21102:
---

Yes I think this is a bug fix. +1 on committing to branch-2.1.

> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0, 2.2.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_4.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no
> guarantee that the target server chosen for the replica region assignment
> does not already host another replica of the same region. Today the
> assignment is done randomly, and the load balancer later identifies these
> cases and issues MOVE operations for such regions. It would be better to
> pick target servers up front, at least minimally ensuring that replicas are
> not colocated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21177) Add per-table metrics on getTime,putTime and scanTime

2018-09-10 Thread xijiawen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xijiawen updated HBASE-21177:
-
Issue Type: Task  (was: New Feature)

> Add per-table metrics on getTime,putTime and scanTime
> -
>
> Key: HBASE-21177
> URL: https://issues.apache.org/jira/browse/HBASE-21177
> Project: HBase
>  Issue Type: Task
>  Components: metrics
>Affects Versions: 2.0.2
>Reporter: xijiawen
>Priority: Major
> Fix For: HBASE-14850
>
> Attachments: HBASE-21177.patch
>
>
> Adds getTime,putTime,scanTime to the per-table metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608856#comment-16608856
 ] 

Hadoop QA commented on HBASE-21144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} hbase-server: The patch generated 0 new + 406 
unchanged - 7 fixed = 406 total (was 413) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
23s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}188m 25s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21144 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939015/HBASE-21144-branch-2.1.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6f68fd2bd60a 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.1 / f755ded2d2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14372/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1437

[jira] [Commented] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608876#comment-16608876
 ] 

Hadoop QA commented on HBASE-21172:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} hbase-procedure: The patch generated 0 new + 6 
unchanged - 2 fixed = 6 total (was 8) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} The patch hbase-server passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hbase-procedure generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}196m 30s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}248m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | hadoop.hbase.client.TestMobRestoreSnapshotFromClient |
|   | hadoop.hbase.client.TestFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939017/HBASE-21172-v3.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
had

[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608879#comment-16608879
 ] 

Hadoop QA commented on HBASE-21158:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 45s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.filter.TestQualifierFilterWithEmptyQualifier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939032/HBASE-21158.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 12de2ba180f4 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b83613fdce |
| maven | version: Apache Maven

[jira] [Commented] (HBASE-21178) Get and Scan operation with converter_class not working

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608884#comment-16608884
 ] 

Hadoop QA commented on HBASE-21178:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
5s{color} | {color:red} The patch generated 4 new + 150 unchanged - 1 fixed = 
154 total (was 151) {color} |
| {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green}  0m  
2s{color} | {color:green} There were no new ruby-lint issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
30s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21178 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939037/HBASE-21178.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux f03b4785d935 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b09dbb443e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| rubocop | v0.58.2 |
| rubocop | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14374/artifact/patchprocess/diff-patch-rubocop.txt
 |
| ruby-lint | v2.3.1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14374/testReport/ |
| Max. process+thread count | 2172 (vs. ulimit of 1) |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14374/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Get and Scan operation with converter_class not working
> ---
>
> Key: HBASE-21178
> URL: https://issues.apache.org/jira/browse/HBASE-21178
> Project: HBase
>  Issue Type: Bug
>Reporter: Subrat Mishra
>Assignee: Subrat Mishra
>Priority: Major
> Attachments: HBASE-21178.master.001.patch
>
>
> Consider a simple scenario:
> {code:java}
> create 'foo', {NAME => 'f1'}
> put 'foo','r1','f1:a',1000
> get 'foo','r1',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']} 
> scan 'foo',{COLUMNS => 
> ['f1:a:c(org.apache.hadoop.hbase.util.Bytes).len']}{code}
> Both get and scan fail with the following error:
> {code:java}
> ERROR: wrong number of arguments (3 for 1) {code}
> It looks like converter_method in table.rb has expected 3 arguments (bytes,
> offset, len) since version 2.0.0; prior to 2.0.0 it took only 1 argument
> (bytes).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-10 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608927#comment-16608927
 ] 

Duo Zhang commented on HBASE-21144:
---

The failed UT is a known issue and I think I've already opened an issue for it.
Let me commit.

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, 
> HBASE-21144-branch-2.1.patch, HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to
> hang forever...
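
One way to make such a test wait reliably (a sketch only, assuming an
HBaseTestingUtility handle and that getAllRegionLocations reports the meta replica
locations; this is not the committed fix) is to poll the replica locations directly
instead of relying on waitForAssignment alone:

{code:java}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaReplicaWaitExample {
  static void waitForMetaReplicaLocations(HBaseTestingUtility util) throws Exception {
    util.waitFor(60000, () -> {
      try (RegionLocator locator =
          util.getConnection().getRegionLocator(TableName.META_TABLE_NAME)) {
        // Wait until every replica of hbase:meta has a non-null server name before
        // checking colocation or triggering the balancer.
        return locator.getAllRegionLocations().stream()
            .allMatch(loc -> loc != null && loc.getServerName() != null);
      }
    });
  }
}
{code}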



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21158:
--
Attachment: HBASE-21158.master.004.patch

> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch, 
> HBASE-21158.master.004.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:testcol1')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
>  testrowcolumn=f:testcol1, 
> timestamp=1536218550827, value=testvalue1 
>   
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when a row contains a cell with an empty
> qualifier, that empty-qualifier cell is always returned when using
> QualifierFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608937#comment-16608937
 ] 

Hudson commented on HBASE-21144:


Results for branch branch-2
[build #1227 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1227/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1227//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1227//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1227//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, 
> HBASE-21144-branch-2.1.patch, HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-10 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21144:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-2.0+. Thanks [~stack] for reviewing.

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, 
> HBASE-21144-branch-2.1.patch, HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19146) Hbase3.0 protobuf-maven-plugin do not support Arm64(only for x86)

2018-09-10 Thread Yuqi Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608945#comment-16608945
 ] 

Yuqi Gu commented on HBASE-19146:
-

Hi [~stack],

Thanks for your comments!
Protoc aarch64 binaries have been published since version 3.5.0.
From the protobuf side, it seems there are no wire compatibility issues with 
upgrading protobuf to a newer version.
So the protobuf maintainers are reluctant to go back and do a new minor release on 
a version as old as 2.5.0. Issue 
[#5115|https://github.com/protocolbuffers/protobuf/issues/5115]


If we upgrade HBase from protobuf 2.5.0 to a newer version, how much effort would it 
take to keep compatibility between an hbase1.x client and an hbase2 cluster?
Thanks!



> Hbase3.0  protobuf-maven-plugin do not support Arm64(only for x86)
> --
>
> Key: HBASE-19146
> URL: https://issues.apache.org/jira/browse/HBASE-19146
> Project: HBase
>  Issue Type: Bug
>  Components: build, pom
>Affects Versions: 3.0.0
> Environment: OS:  Ubuntu 16.04.3 
> OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.16.04.3-b11)
> Hw platform:  AARCH64
>Reporter: Yuqi Gu
>Priority: Major
>
> We are building HBase 3.0.0-SNAPSHOT on AARCH64.
> It is noted that 'protobuf-maven-plugin' only supports x86, as shown below:
> {code:java}
> <plugin>
>   <groupId>org.xolstice.maven.plugins</groupId>
>   <artifactId>protobuf-maven-plugin</artifactId>
>   <version>${protobuf.plugin.version}</version>
>   <configuration>
>     <protocArtifact>com.google.protobuf:protoc:${external.protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
>     <clearOutputDirectory>false</clearOutputDirectory>
>     <checkStaleness>true</checkStaleness>
>   </configuration>
> </plugin>
> {code}
> So the build fails.
> {code:java}
> [INFO] --- protobuf-maven-plugin:0.5.0:compile (compile-protoc) @ 
> hbase-protocol-shaded ---
> [INFO] Compiling 32 proto file(s) to 
> /root/hbase/hbase-protocol-shaded/target/generated-sources/protobuf/java
> Failed to execute goal 
> org.xolstice.maven.plugins:protobuf-maven-plugin:0.5.0:compile 
> (compile-protoc) on project hbase-protocol-shaded: Missing:
> {code}
> Then I installed aarch64 protobuf 2.5.0 on the host and modified the pom:
> {code:java}
> -   
> com.google.protobuf:protoc:${external.protobuf.version}:exe:${os.detected.classifier}
> +  /usr/local/bin/protoc
> {code}
>  The build also fails:
> {code:java}
> [INFO] Compiling 32 proto file(s) to 
> /root/hbase/hbase-protocol-shaded/target/generated-sources/protobuf/java
> [ERROR] PROTOC FAILED: google/protobuf/any.proto:31:10: Unrecognized syntax 
> identifier "proto3".  This parser only recognizes "proto2".
> {code}
> It seems that "internal.protobuf.version" in "hbase-protocol-shaded" is 3.3.0.
> How to fix it? Thanks!
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19146) Hbase3.0 protobuf-maven-plugin do not support Arm64(only for x86)

2018-09-10 Thread Yuqi Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608945#comment-16608945
 ] 

Yuqi Gu edited comment on HBASE-19146 at 9/10/18 9:56 AM:
--

Hi [~stack],

Thanks for your comments!
Protoc aarch64 binaries have been published since version 3.5.0.
From the protobuf side, it seems there are no wire compatibility issues with 
upgrading protobuf to a newer version.
So the protobuf maintainers are reluctant to go back and do a new minor release on 
a version as old as 2.5.0. See the issue 
[#5115|https://github.com/protocolbuffers/protobuf/issues/5115]


If we upgrade HBase from protobuf 2.5.0 to a newer version, how much effort would it 
take to keep compatibility between an hbase1.x client and an hbase2 cluster?
Thanks!




was (Author: yqgu):
Hi [~stack],

Thanks for your comments!
Protoc aarch64 binaries have been published since version 3.5.0.
From the protobuf side, it seems there are no wire compatibility issues with 
upgrading protobuf to a newer version.
So the protobuf maintainers are reluctant to go back and do a new minor release on 
a version as old as 2.5.0. Issue 
[#5115|https://github.com/protocolbuffers/protobuf/issues/5115]


If we upgrade HBase from protobuf 2.5.0 to a newer version, how much effort would it 
take to keep compatibility between an hbase1.x client and an hbase2 cluster?
Thanks!



> Hbase3.0  protobuf-maven-plugin do not support Arm64(only for x86)
> --
>
> Key: HBASE-19146
> URL: https://issues.apache.org/jira/browse/HBASE-19146
> Project: HBase
>  Issue Type: Bug
>  Components: build, pom
>Affects Versions: 3.0.0
> Environment: OS:  Ubuntu 16.04.3 
> OpenJDK Runtime Environment (build 1.8.0_131-8u131-b11-2ubuntu1.16.04.3-b11)
> Hw platform:  AARCH64
>Reporter: Yuqi Gu
>Priority: Major
>
> We are building HBase 3.0.0-SNAPSHOT on AARCH64.
> It is noted that 'protobuf-maven-plugin' only supports x86, as shown below:
> {code:java}
> <plugin>
>   <groupId>org.xolstice.maven.plugins</groupId>
>   <artifactId>protobuf-maven-plugin</artifactId>
>   <version>${protobuf.plugin.version}</version>
>   <configuration>
>     <protocArtifact>com.google.protobuf:protoc:${external.protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
>     <clearOutputDirectory>false</clearOutputDirectory>
>     <checkStaleness>true</checkStaleness>
>   </configuration>
> </plugin>
> {code}
> So the build fails.
> {code:java}
> [INFO] --- protobuf-maven-plugin:0.5.0:compile (compile-protoc) @ 
> hbase-protocol-shaded ---
> [INFO] Compiling 32 proto file(s) to 
> /root/hbase/hbase-protocol-shaded/target/generated-sources/protobuf/java
> Failed to execute goal 
> org.xolstice.maven.plugins:protobuf-maven-plugin:0.5.0:compile 
> (compile-protoc) on project hbase-protocol-shaded: Missing:
> {code}
> Then I installed aarch64 protobuf 2.5.0 on the host and modified the pom:
> {code:java}
> -   
> com.google.protobuf:protoc:${external.protobuf.version}:exe:${os.detected.classifier}
> +  /usr/local/bin/protoc
> {code}
>  The build also fails:
> {code:java}
> [INFO] Compiling 32 proto file(s) to 
> /root/hbase/hbase-protocol-shaded/target/generated-sources/protobuf/java
> [ERROR] PROTOC FAILED: google/protobuf/any.proto:31:10: Unrecognized syntax 
> identifier "proto3".  This parser only recognizes "proto2".
> {code}
> It seems that "internal.protobuf.version" in "hbase-protocol-shaded" is 3.3.0.
> How to fix it? Thanks!
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-10 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21172:
--
Attachment: HBASE-21172-v4.patch

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21172-v1.patch, HBASE-21172-v2.patch, 
> HBASE-21172-v3.patch, HBASE-21172-v4.patch, HBASE-21172.patch
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.
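> For reference, a self-contained sketch of the exponential backoff computation being 
> asked for (editor's illustration only, not the committed patch; the real fix presumably 
> suspends the procedure through the framework instead of sleeping in execute):
> {code:java}
> import java.util.concurrent.ThreadLocalRandom;
>
> public final class RetryBackoff {
>   /** Exponential backoff with a cap and a little jitter, in milliseconds. */
>   public static long backoffMillis(int attempt, long baseMs, long maxMs) {
>     long backoff = baseMs * (1L << Math.min(attempt, 30)); // doubles on every retry
>     long capped = Math.min(backoff, maxMs);
>     long jitter = ThreadLocalRandom.current().nextLong(capped / 10 + 1);
>     return capped + jitter;
>   }
> }
> {code}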



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-10 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608963#comment-16608963
 ] 

Guangxu Cheng commented on HBASE-21143:
---

[~Apache9] Thanks for committing. :)

> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when I compiled HBase on a new machine, I got the above error. 
> Since the machine could not connect to the external network, we went through our 
> internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 had been 
> added to the blacklist and could not be downloaded. In detail, 
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by 
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should 
> upgrade the version.
> {code:xml}
> <plugin>
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> </plugin>
> {code}
> Looking at the commit history, findbugs-maven-plugin was upgraded to 3.0.4 in 
> HBASE-18264, but one place was missed and still uses version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20857) JMX - add Balancer status = enabled / disabled

2018-09-10 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated HBASE-20857:
---
Status: Patch Available  (was: Open)

> JMX - add Balancer status = enabled / disabled
> --
>
> Key: HBASE-20857
> URL: https://issues.apache.org/jira/browse/HBASE-20857
> Project: HBase
>  Issue Type: Improvement
>  Components: API, master, metrics, REST, tooling, Usability
>Reporter: Hari Sekhon
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: HBASE-20857.branch-1.4.001.patch
>
>
> Add HBase Balancer enabled/disabled status to JMX API on HMaster.
> Right now the HMaster will give a warning near the top of the HMaster UI if the 
> balancer is disabled, but scraping this for monitoring integration is not 
> nice; it should be available in the JMX API, as there is already a 
> Master,sub=Balancer bean with metrics for the balancer ops etc.
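> A minimal sketch of exposing such a flag over plain JMX (editor's illustration using 
> only the standard javax.management API; the bean and attribute names here are made up, 
> and a real patch would plug into the existing Master,sub=Balancer metrics source instead):
> {code:java}
> import java.lang.management.ManagementFactory;
> import javax.management.ObjectName;
>
> public class BalancerStatusJmx {
>   /** Management interface: a single read-only boolean attribute, BalancerEnabled. */
>   public interface BalancerStatusMBean {
>     boolean isBalancerEnabled();
>   }
>
>   public static class BalancerStatus implements BalancerStatusMBean {
>     private volatile boolean enabled = true;
>     @Override public boolean isBalancerEnabled() { return enabled; }
>     public void setEnabled(boolean enabled) { this.enabled = enabled; }
>   }
>
>   public static void main(String[] args) throws Exception {
>     BalancerStatus status = new BalancerStatus();
>     // Hypothetical object name; any JMX client can now scrape the BalancerEnabled attribute.
>     ManagementFactory.getPlatformMBeanServer()
>         .registerMBean(status, new ObjectName("example:type=BalancerStatus"));
>     Thread.sleep(Long.MAX_VALUE); // keep this toy process alive so the bean stays scrapable
>   }
> }
> {code}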



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21179) Fix the number of actions in responseTooSlow log

2018-09-10 Thread Guangxu Cheng (JIRA)
Guangxu Cheng created HBASE-21179:
-

 Summary: Fix the number of actions in responseTooSlow log
 Key: HBASE-21179
 URL: https://issues.apache.org/jira/browse/HBASE-21179
 Project: HBase
  Issue Type: Bug
  Components: rpc
Reporter: Guangxu Cheng
Assignee: Guangxu Cheng


{panel:title=responseTooSlow|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
2018-09-10 16:13:53,022 WARN  
[B.DefaultRpcServer.handler=209,queue=29,port=60020] ipc.RpcServer: 
(responseTooSlow): 
{"processingtimems":321262,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"127.0.0.1:56149","param":"region=
 
tsdb,\\x00\\x00.[\\x89\\x1F\\xB0\\x00\\x00\\x01\\x00\\x01Y\\x00\\x00\\x02\\x00\\x00\\x04,1536133210446.7c752de470bd5558a001117b123a5db5.,
 {color:red}for 1 actions and 1st row{color} 
key=\\x00\\x00.[\\x96\\x16p","starttimems":1536566911759,"queuetimems":0,"class":"HRegionServer","responsesize":2,"method":"Multi"}
{panel}

The responseTooSlow log is printed when the processing time of a request 
exceeds the specified threshold. The number of actions and the content of the 
first rowkey in the request are included in the log.
However, the reported number of actions is inaccurate: it is actually the number of 
regions that the request needs to visit.
In the log above, users may mistakenly believe that a single action took 321262 ms 
to process, which looks incredible, so we need to fix it.
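A sketch of the distinction (editor's illustration based on the protobuf-generated 
MultiRequest accessors named in the log above; master uses the shaded equivalent of these 
classes, and exactly where RpcServer takes the count from may differ):
{code:java}
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MultiRequest;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionAction;

public final class MultiRequestCounts {
  /** What the log effectively reports today: the number of regions touched by the multi call. */
  public static int regionCount(MultiRequest multi) {
    return multi.getRegionActionCount();
  }

  /** What the log should report: the total number of actions across all regions. */
  public static int actionCount(MultiRequest multi) {
    int actions = 0;
    for (RegionAction regionAction : multi.getRegionActionList()) {
      actions += regionAction.getActionCount();
    }
    return actions;
  }
}
{code}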



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21179) Fix the number of actions in responseTooSlow log

2018-09-10 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21179:
--
Attachment: HBASE-21179.master.001.patch

> Fix the number of actions in responseTooSlow log
> 
>
> Key: HBASE-21179
> URL: https://issues.apache.org/jira/browse/HBASE-21179
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21179.master.001.patch
>
>
> {panel:title=responseTooSlow|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> 2018-09-10 16:13:53,022 WARN  
> [B.DefaultRpcServer.handler=209,queue=29,port=60020] ipc.RpcServer: 
> (responseTooSlow): 
> {"processingtimems":321262,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"127.0.0.1:56149","param":"region=
>  
> tsdb,\\x00\\x00.[\\x89\\x1F\\xB0\\x00\\x00\\x01\\x00\\x01Y\\x00\\x00\\x02\\x00\\x00\\x04,1536133210446.7c752de470bd5558a001117b123a5db5.,
>  {color:red}for 1 actions and 1st row{color} 
> key=\\x00\\x00.[\\x96\\x16p","starttimems":1536566911759,"queuetimems":0,"class":"HRegionServer","responsesize":2,"method":"Multi"}
> {panel}
> The responseTooSlow log is printed when the processing time of a request 
> exceeds the specified threshold. The number of actions and the content of 
> the first rowkey in the request are included in the log.
> However, the reported number of actions is inaccurate: it is actually the number 
> of regions that the request needs to visit.
> In the log above, users may mistakenly believe that a single action took 
> 321262 ms to process, which looks incredible, so we need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21179) Fix the number of actions in responseTooSlow log

2018-09-10 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21179:
--
Status: Patch Available  (was: Open)

a simple patch.

> Fix the number of actions in responseTooSlow log
> 
>
> Key: HBASE-21179
> URL: https://issues.apache.org/jira/browse/HBASE-21179
> Project: HBase
>  Issue Type: Bug
>  Components: rpc
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21179.master.001.patch
>
>
> {panel:title=responseTooSlow|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> 2018-09-10 16:13:53,022 WARN  
> [B.DefaultRpcServer.handler=209,queue=29,port=60020] ipc.RpcServer: 
> (responseTooSlow): 
> {"processingtimems":321262,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"127.0.0.1:56149","param":"region=
>  
> tsdb,\\x00\\x00.[\\x89\\x1F\\xB0\\x00\\x00\\x01\\x00\\x01Y\\x00\\x00\\x02\\x00\\x00\\x04,1536133210446.7c752de470bd5558a001117b123a5db5.,
>  {color:red}for 1 actions and 1st row{color} 
> key=\\x00\\x00.[\\x96\\x16p","starttimems":1536566911759,"queuetimems":0,"class":"HRegionServer","responsesize":2,"method":"Multi"}
> {panel}
> The responseTooSlow log is printed when the processing time of a request 
> exceeds the specified threshold. The number of actions and the content of 
> the first rowkey in the request are included in the log.
> However, the reported number of actions is inaccurate: it is actually the number 
> of regions that the request needs to visit.
> In the log above, users may mistakenly believe that a single action took 
> 321262 ms to process, which looks incredible, so we need to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20857) JMX - add Balancer status = enabled / disabled

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609126#comment-16609126
 ] 

Hadoop QA commented on HBASE-20857:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
20s{color} | {color:red} hbase-server: The patch generated 2 new + 110 
unchanged - 0 fixed = 112 total (was 110) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
32s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 55s{color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.5 2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_181 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m  
6s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:gree

[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609128#comment-16609128
 ] 

Hadoop QA commented on HBASE-21158:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}136m 54s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestDisableTableProcedure |
|   | hadoop.hbase.client.TestBlockEvictionFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21158 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939049/HBASE-21158.master.004.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 52fcce332e0a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision |

[jira] [Commented] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609166#comment-16609166
 ] 

Hadoop QA commented on HBASE-21172:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hbase-procedure: The patch generated 0 new + 6 
unchanged - 2 fixed = 6 total (was 8) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} The patch hbase-server passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}136m  
2s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939051/HBASE-21172-v4.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b3a151e5fe56 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-B

[jira] [Commented] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-10 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609171#comment-16609171
 ] 

Duo Zhang commented on HBASE-21172:
---

All green. Good. Any other concerns? [~zghaobac].

Let me prepare patch for branch-2.1 and branch-2.0.

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21172-v1.patch, HBASE-21172-v2.patch, 
> HBASE-21172-v3.patch, HBASE-21172-v4.patch, HBASE-21172.patch
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21179) Fix the number of actions in responseTooSlow log

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609172#comment-16609172
 ] 

Hadoop QA commented on HBASE-21179:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
26s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21179 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939083/HBASE-21179.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 383e417b3eb0 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b09dbb443e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14378/testReport/ |
| Max. process+thread count | 267 (vs. ulimit of 1) |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-

[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609260#comment-16609260
 ] 

Guangxu Cheng commented on HBASE-21158:
---

All tests pass locally. Pushed to branch-2+. Thanks [~yuzhih...@gmail.com] for 
the review.

cc: [~apurtell] Do branch-1 and branch-1.4 need this?

> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch, 
> HBASE-21158.master.004.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 
> 'binary:testcol1')"}
> ROW COLUMN+CELL   
>   
>
>  testrowcolumn=f:, 
> timestamp=1536218563581, value=testvalue2 
>   
>  testrowcolumn=f:testcol1, 
> timestamp=1536218550827, value=testvalue1 
>   
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when a row contains an empty-qualifier column, the 
> empty-qualifier cell is always returned when using QualifierFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21174) [REST] Failed to parse empty qualifier in TableResource#getScanResource

2018-09-10 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21174:
--
Status: Patch Available  (was: Open)

> [REST] Failed to parse empty qualifier in TableResource#getScanResource
> ---
>
> Key: HBASE-21174
> URL: https://issues.apache.org/jira/browse/HBASE-21174
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21174.master.001.patch
>
>
> {code:xml}
> GET /t1/*?column=f:c1&column=f:
> {code}
> If I want to get the values of 'f:' (empty qualifier) for all rows in the 
> table through the REST server, I send the above request. However, this request 
> returns all column values.
> {code:java|title=TableResource#getScanResource|borderStyle=solid}
>   for (String csplit : column) {
> String[] familysplit = csplit.trim().split(":");
> if (familysplit.length == 2) {
>   if (familysplit[1].length() > 0) {
> if (LOG.isTraceEnabled()) {
>   LOG.trace("Scan family and column : " + familysplit[0] + "  " + 
> familysplit[1]);
> }
> tableScan.addColumn(Bytes.toBytes(familysplit[0]), 
> Bytes.toBytes(familysplit[1]));
>   } else {
> tableScan.addFamily(Bytes.toBytes(familysplit[0]));
> if (LOG.isTraceEnabled()) {
>   LOG.trace("Scan family : " + familysplit[0] + " and empty 
> qualifier.");
> }
> tableScan.addColumn(Bytes.toBytes(familysplit[0]), null);
>   }
> } else if (StringUtils.isNotEmpty(familysplit[0])) {
>   if (LOG.isTraceEnabled()) {
> LOG.trace("Scan family : " + familysplit[0]);
>   }
>   tableScan.addFamily(Bytes.toBytes(familysplit[0]));
> }
>   }
> {code}
> With the above code, when a column has an empty qualifier, the empty 
> qualifier cannot be parsed correctly. In other words, 'f:' (empty qualifier) 
> and 'f' (column family) are treated as having the same meaning, which is 
> wrong.
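> One possible shape of a fix (editor's sketch only, not necessarily the committed patch): 
> split with an explicit limit so the trailing empty qualifier survives, then add the column 
> with an empty qualifier byte array. Trace logging is omitted; {{csplit}} and {{tableScan}} 
> are the variables from the snippet above.
> {code:java}
> // "f:".split(":") drops the trailing empty string, so it parses the same as "f";
> // split(":", 2) keeps it, which lets us tell the two cases apart.
> String[] familysplit = csplit.trim().split(":", 2);
> if (familysplit.length == 2) {
>   if (familysplit[1].length() > 0) {
>     tableScan.addColumn(Bytes.toBytes(familysplit[0]), Bytes.toBytes(familysplit[1]));
>   } else {
>     // Empty qualifier: ask for the 'f:' cell only, not the whole family.
>     tableScan.addColumn(Bytes.toBytes(familysplit[0]), HConstants.EMPTY_BYTE_ARRAY);
>   }
> } else if (StringUtils.isNotEmpty(familysplit[0])) {
>   tableScan.addFamily(Bytes.toBytes(familysplit[0]));
> }
> {code}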



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21174) [REST] Failed to parse empty qualifier in TableResource#getScanResource

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609309#comment-16609309
 ] 

Hadoop QA commented on HBASE-21174:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hbase-rest: The patch generated 1 new + 22 unchanged - 
0 fixed = 23 total (was 22) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938987/HBASE-21174.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 07f6541f0ec0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2aae247e3f |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14379/artifact/patchprocess/diff-checkstyle-hbase-rest.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14379/testReport/ |
| Max. process+thread count | 2403 (vs. ulimit of 1) |
| modules | C: hbase-rest U: hbase-rest |
| Console output | 
https://bui

[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609379#comment-16609379
 ] 

Ted Yu commented on HBASE-20952:


This is the Google doc:

https://docs.google.com/document/d/141FDNSKHIY0DZeIWQd1Dc1QOw-3zlZxUB4Jqabch24c/edit?usp=sharing

This is the condensed version of the review request:

https://reviews.apache.org/r/68672/

The condensed version closely matches the Google doc in terms of key 
interfaces/classes.

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also keep in mind what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like, RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case of tailing the WAL, which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The existing API may be "OK" (or OK in part). We also need to consider other 
> methods that were "bolted" on, such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like {{WALSplitter}}) 
> should also be looked at so that they use WAL APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.
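> As a starting point for that discussion, a very rough sketch of what the primitive 
> surface could look like (editor's illustration only; it is not the proposal from the 
> attached doc or review request, and all names are made up):
> {code:java}
> import java.io.Closeable;
> import java.io.IOException;
> import java.util.concurrent.CompletableFuture;
>
> /** Sketch of a minimal WAL surface: append, durably sync, and tail (e.g. for replication). */
> interface WriteAheadLog extends Closeable {
>   /** Append an opaque entry; the returned sequence id orders entries within this log. */
>   long append(byte[] entry) throws IOException;
>
>   /** Completes once every entry up to the given sequence id is durable. */
>   CompletableFuture<Void> sync(long sequenceId);
>
>   /** Read entries from the given sequence id onward, e.g. for replication tailing. */
>   Reader openReader(long fromSequenceId) throws IOException;
>
>   interface Reader extends Closeable {
>     /** Next entry, or null once the current tail of the log has been reached. */
>     byte[] next() throws IOException;
>   }
> }
> {code}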



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21174) [REST] Failed to parse empty qualifier in TableResource#getScanResource

2018-09-10 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21174:
--
Attachment: HBASE-21174.master.002.patch

> [REST] Failed to parse empty qualifier in TableResource#getScanResource
> ---
>
> Key: HBASE-21174
> URL: https://issues.apache.org/jira/browse/HBASE-21174
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21174.master.001.patch, 
> HBASE-21174.master.002.patch
>
>
> {code:xml}
> GET /t1/*?column=f:c1&column=f:
> {code}
> If I want to get the values of 'f:' (empty qualifier) for all rows in the 
> table through the REST server, I send the above request. However, this request 
> returns all column values.
> {code:java|title=TableResource#getScanResource|borderStyle=solid}
>   for (String csplit : column) {
> String[] familysplit = csplit.trim().split(":");
> if (familysplit.length == 2) {
>   if (familysplit[1].length() > 0) {
> if (LOG.isTraceEnabled()) {
>   LOG.trace("Scan family and column : " + familysplit[0] + "  " + 
> familysplit[1]);
> }
> tableScan.addColumn(Bytes.toBytes(familysplit[0]), 
> Bytes.toBytes(familysplit[1]));
>   } else {
> tableScan.addFamily(Bytes.toBytes(familysplit[0]));
> if (LOG.isTraceEnabled()) {
>   LOG.trace("Scan family : " + familysplit[0] + " and empty 
> qualifier.");
> }
> tableScan.addColumn(Bytes.toBytes(familysplit[0]), null);
>   }
> } else if (StringUtils.isNotEmpty(familysplit[0])) {
>   if (LOG.isTraceEnabled()) {
> LOG.trace("Scan family : " + familysplit[0]);
>   }
>   tableScan.addFamily(Bytes.toBytes(familysplit[0]));
> }
>   }
> {code}
> With the above code, when a column has an empty qualifier, the empty 
> qualifier cannot be parsed correctly. In other words, 'f:' (empty qualifier) 
> and 'f' (column family) are treated as having the same meaning, which is 
> wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-10 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609430#comment-16609430
 ] 

Toshihiro Suzuki commented on HBASE-21052:
--

{quote}
Next time please do not set explicit timeout value on test method, the timeout 
will be controlled from the framework. If you want to assert the time please 
use waitFor to better tell others that this operation is the criminal.
{quote}
Sure [~Apache9]. Thank you for letting me know.

{quote}
And does this also effect branch-2.1 and branch-2.0?
{quote}
I think yes. Please let me know if you want this fix in branch-2.1 and 
branch-2.0. [~Apache9] [~stack]
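
For reference, the waitFor style boils down to polling a condition against a deadline 
instead of relying on a method-level timeout; a generic sketch follows (editor's 
illustration, not HBase's actual Waiter utility):
{code:java}
import java.util.function.BooleanSupplier;

public final class WaitUtil {
  /** Poll the condition until it holds or the deadline passes; fail loudly on timeout. */
  public static void waitFor(long timeoutMs, long intervalMs, BooleanSupplier condition)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError("Condition not met within " + timeoutMs + " ms");
      }
      Thread.sleep(intervalMs);
    }
  }
}
{code}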




> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take a hbase snapshot for the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the hbase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.S

[jira] [Commented] (HBASE-21174) [REST] Failed to parse empty qualifier in TableResource#getScanResource

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609456#comment-16609456
 ] 

Hadoop QA commented on HBASE-21174:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} hbase-rest: The patch generated 0 new + 16 unchanged 
- 6 fixed = 16 total (was 22) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 11s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21174 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939114/HBASE-21174.master.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cee9f462113e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2aae247e3f |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14380/testReport/ |
| Max. process+thread count | 2101 (vs. ulimit of 1) |
| modules | C: hbase-rest U: hbase-rest |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14380/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |



[jira] [Updated] (HBASE-12790) Support fairness across parallelized scans

2018-09-10 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12790:
---
Status: Open  (was: Patch Available)

> Support fairness across parallelized scans
> --
>
> Key: HBASE-12790
> URL: https://issues.apache.org/jira/browse/HBASE-12790
> Project: HBase
>  Issue Type: New Feature
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: Phoenix
> Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, 
> HBASE-12790_1.patch, HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, 
> HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce latency in 
> getting back results. This can lead to starvation with a loaded cluster and 
> interleaved scans, since the RPC queue will be ordered and processed on a 
> FIFO basis. For example, if there are two clients, A & B that submit largish 
> scans at the same time. Say each scan is broken down into 100 scans by the 
> client (broken down into equal depth chunks along the row key), and the 100 
> scans of client A are queued first, followed immediately by the 100 scans of 
> client B. In this case, client B will be starved out of getting any results 
> back until the scans for client A complete.
> One solution to this is to use the attached AbstractRoundRobinQueue instead 
> of the standard FIFO queue. The queue to be used could be (maybe it already 
> is) configurable based on a new config parameter. Using this queue would 
> require the client to have the same identifier for all of the 100 parallel 
> scans that represent a single logical scan from the clients point of view. 
> With this information, the round robin queue would pick off a task from the 
> queue in a round robin fashion (instead of a strictly FIFO manner) to prevent 
> starvation over interleaved parallelized scans.
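
To make the idea concrete, a heavily simplified, unsynchronized sketch of 
round-robin dequeuing across scan groups follows. It is illustrative only and is 
not the attached AbstractRoundRobinQueue.java; a real implementation would also 
need thread safety and the blocking behaviour the RPC layer expects.

{code:java}
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

/**
 * Illustrative sketch only: every sub-scan of one logical client scan is
 * offered under the same groupId, and take() rotates across groups so one
 * large parallelized scan cannot starve another.
 */
class RoundRobinScanQueueSketch<T> {
  private final LinkedHashMap<String, Queue<T>> groups = new LinkedHashMap<>();

  void offer(String groupId, T task) {
    groups.computeIfAbsent(groupId, k -> new ArrayDeque<>()).add(task);
  }

  T take() {
    if (groups.isEmpty()) {
      return null;
    }
    // Poll one task from the oldest group, then move that group to the back.
    Map.Entry<String, Queue<T>> first = groups.entrySet().iterator().next();
    Queue<T> q = groups.remove(first.getKey());
    T task = q.poll();
    if (!q.isEmpty()) {
      groups.put(first.getKey(), q);
    }
    return task;
  }
}
{code}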



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609501#comment-16609501
 ] 

Hudson commented on HBASE-21143:


Results for branch branch-2.0
[build #795 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error. 
> Since the machine could not reach the external network, we went through our 
> internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 had 
> been added to the blacklist and could not be downloaded. In detail, 
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by 
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should 
> upgrade the version.
> {code:xml}
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> {code}
> Looking at the commit history, findbugs-maven-plugin was upgraded to 3.0.4 in 
> HBASE-18264, but one place was missed and is still using version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609502#comment-16609502
 ] 

Hudson commented on HBASE-21144:


Results for branch branch-2.0
[build #795 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/795//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, 
> HBASE-21144-branch-2.1.patch, HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to 
> hang forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21180) findbugs incurs DataflowAnalysisException for hbase-server module

2018-09-10 Thread Ted Yu (JIRA)
Ted Yu created HBASE-21180:
--

 Summary: findbugs incurs DataflowAnalysisException for 
hbase-server module
 Key: HBASE-21180
 URL: https://issues.apache.org/jira/browse/HBASE-21180
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu


Running findbugs, I noticed the following in hbase-server module:
{code}
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (default-cli) @ hbase-server ---
[INFO] Fork Value is true
 [java] The following errors occurred during analysis:
 [java]   Error generating derefs for 
org.apache.hadoop.hbase.generated.master.table_jsp._jspService(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;)V
 [java] edu.umd.cs.findbugs.ba.DataflowAnalysisException: can't get 
position -1 of stack
 [java]   At edu.umd.cs.findbugs.ba.Frame.getStackValue(Frame.java:250)
 [java]   At 
edu.umd.cs.findbugs.ba.Hierarchy.resolveMethodCallTargets(Hierarchy.java:743)
 [java]   At 
edu.umd.cs.findbugs.ba.npe.DerefFinder.getAnalysis(DerefFinder.java:141)
 [java]   At 
edu.umd.cs.findbugs.classfile.engine.bcel.UsagesRequiringNonNullValuesFactory.analyze(UsagesRequiringNonNullValuesFactory.java:50)
 [java]   At 
edu.umd.cs.findbugs.classfile.engine.bcel.UsagesRequiringNonNullValuesFactory.analyze(UsagesRequiringNonNullValuesFactory.java:31)
 [java]   At 
edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:369)
 [java]   At 
edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:322)
 [java]   At 
edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysis(ClassContext.java:1005)
 [java]   At 
edu.umd.cs.findbugs.ba.ClassContext.getUsagesRequiringNonNullValues(ClassContext.java:325)
 [java]   At 
edu.umd.cs.findbugs.detect.FindNullDeref.foundGuaranteedNullDeref(FindNullDeref.java:1510)
 [java]   At 
edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.reportBugs(NullDerefAndRedundantComparisonFinder.java:361)
 [java]   At 
edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.examineNullValues(NullDerefAndRedundantComparisonFinder.java:266)
 [java]   At 
edu.umd.cs.findbugs.ba.npe.NullDerefAndRedundantComparisonFinder.execute(NullDerefAndRedundantComparisonFinder.java:164)
 [java]   At 
edu.umd.cs.findbugs.detect.FindNullDeref.analyzeMethod(FindNullDeref.java:278)
 [java]   At 
edu.umd.cs.findbugs.detect.FindNullDeref.visitClassContext(FindNullDeref.java:209)
 [java]   At 
edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:76)
 [java]   At 
edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:1089)
 [java]   At edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:283)
 [java]   At edu.umd.cs.findbugs.FindBugs.runMain(FindBugs.java:393)
 [java]   At edu.umd.cs.findbugs.FindBugs2.main(FindBugs2.java:1200)
 [java] The following classes needed for analysis were missing:
 [java]   accept
 [java]   apply
 [java]   run
 [java]   test
 [java]   call
 [java]   exec
 [java]   getAsInt
 [java]   applyAsLong
 [java]   storeFile
 [java]   get
 [java]   visit
 [java]   compare
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Rushabh S Shah (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609518#comment-16609518
 ] 

Rushabh S Shah commented on HBASE-21162:


{quote}I think the change is suspicious and should be reverted.
{quote}
Byte[~apurtell] thanks for the reply. Just for my knowledge why do you think 
this change is suspicious ?

Also \{{BoundedByteBufferPool}} is created just once in \{{RpcServer}}.

 

> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 
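
For concreteness, the workaround mentioned in the description can be expressed 
as an hbase-site.xml snippet. This shows only that one property, set to the 
value from the incident write-up above; it says nothing about new defaults or 
the rest of the patch.

{code:xml}
<!-- Workaround from the description above: disable the server-side RPC
     reservoir so request buffers come from the Java heap instead of direct
     (native) memory. -->
<property>
  <name>hbase.ipc.server.reservoir.enabled</name>
  <value>false</value>
</property>
{code}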



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Rushabh S Shah (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609518#comment-16609518
 ] 

Rushabh S Shah edited comment on HBASE-21162 at 9/10/18 5:18 PM:
-

{quote}I think the change is suspicious and should be reverted.
{quote}
Byte[~apurtell] thanks for the reply. Just for my knowledge why do you think 
this change is suspicious ? Since  {{BoundedByteBufferPool}} is created just 
once in {{RpcServer, it shouldn't leak 40 gigs of native memory.}}

 


was (Author: shahrs87):
{quote}I think the change is suspicious and should be reverted.
{quote}
Byte[~apurtell] thanks for the reply. Just for my knowledge why do you think 
this change is suspicious ?

Also \{{BoundedByteBufferPool}} is created just once in \{{RpcServer}}.

 

> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Rushabh S Shah (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609518#comment-16609518
 ] 

Rushabh S Shah edited comment on HBASE-21162 at 9/10/18 5:19 PM:
-

{quote}I think the change is suspicious and should be reverted.
{quote}
[~apurtell] thanks for the reply. Just for my knowledge why do you think this 
change is suspicious ? Since  {{BoundedByteBufferPool}} is created just once in 
{{RpcServer, it shouldn't leak 40 gigs of native memory.}}

 


was (Author: shahrs87):
{quote}I think the change is suspicious and should be reverted.
{quote}
Byte[~apurtell] thanks for the reply. Just for my knowledge why do you think 
this change is suspicious ? Since  {{BoundedByteBufferPool}} is created just 
once in {{RpcServer, it shouldn't leak 40 gigs of native memory.}}

 

> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609542#comment-16609542
 ] 

Hudson commented on HBASE-21144:


Results for branch branch-2.1
[build #305 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, 
> HBASE-21144-branch-2.1.patch, HBASE-21144-v1.patch, HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to 
> hang forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21143) Update findbugs-maven-plugin to 3.0.4

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609541#comment-16609541
 ] 

Hudson commented on HBASE-21143:


Results for branch branch-2.1
[build #305 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/305//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Update findbugs-maven-plugin to 3.0.4
> -
>
> Key: HBASE-21143
> URL: https://issues.apache.org/jira/browse/HBASE-21143
> Project: HBase
>  Issue Type: Bug
>  Components: pom
>Affects Versions: 3.0.0, 2.1.0, 2.2.0, 2.0.2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21143.master.001.patch
>
>
> {code}
> Failed to execute goal org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs 
> (default) on project hbase: Execution default of goal 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0:findbugs failed: Plugin 
> org.codehaus.mojo:findbugs-maven-plugin:3.0.0 or one of its dependencies 
> could not be resolved: Failed to collect dependencies at 
> org.codehaus.mojo:findbugs-maven-plugin:jar:3.0.0 -> 
> org.codehaus.groovy:groovy-all:jar:1.7.4: Failed to read artifact descriptor 
> for org.codehaus.groovy:groovy-all:jar:1.7.4: Could not transfer artifact 
> org.codehaus.groovy:groovy-all:pom:1.7.4 from/to mirror 
> (http://xxx..xxx/nexus/content/groups/public): Failed to transfer file: 
> http://xxx..xxx/nexus/content/groups/public/org/codehaus/groovy/groovy-all/1.7.4/groovy-all-1.7.4.pom.
>  Return code is: 418 , ReasonPhrase:Artifact is in Tencent Blacklist! Please 
> update to the safe version, more information: 
> http://xxx..xxx/?tab=blackList.
> {code}
> Recently, when compiling HBase on a new machine, I got the above error. 
> Since the machine could not reach the external network, we went through our 
> internal Maven repository, but org.codehaus.groovy:groovy-all:jar:1.7.4 had 
> been added to the blacklist and could not be downloaded. In detail, 
> org.codehaus.groovy:groovy-all:jar:1.7.4 is marked as vulnerable by 
> [CVE-2015-3253|https://www.cvedetails.com/cve/CVE-2015-3253], so we should 
> upgrade the version.
> {code:xml}
>   <groupId>org.codehaus.mojo</groupId>
>   <artifactId>findbugs-maven-plugin</artifactId>
>   <version>3.0.0</version>
>   <configuration>
>     <excludeFilterFile>${project.basedir}/../dev-support/findbugs-exclude.xml</excludeFilterFile>
>   </configuration>
> {code}
> Looking at the commit history, findbugs-maven-plugin was upgraded to 3.0.4 in 
> HBASE-18264, but one place was missed and is still using version 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609544#comment-16609544
 ] 

Andrew Purtell edited comment on HBASE-21162 at 9/10/18 5:28 PM:
-

bq. Since  BoundedByteBufferPool is created just once in RpcServer, it 
shouldn't leak 40 gigs of native memory.

We had a leak that went away when we disabled the RPC reservoir, so 
experimental observation counters your assertion. 

The change I pointed to is suspicious because in that change I replaced 
existing accounting with something using atomics, and now we have an 
experimentally confirmed leak from that code. 



was (Author: apurtell):
> Since  BoundedByteBufferPool is created just once in RpcServer, it shouldn't 
> leak 40 gigs of native memory.

We had a leak that went away when we disabled the RPC reservoir, so 
experimental observation counters your assertion. 

The change I pointed to is suspicious because in that change I replaced 
existing accounting with something using atomics, and now we have an 
experimentally confirmed leak from that code. 


> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609544#comment-16609544
 ] 

Andrew Purtell commented on HBASE-21162:


> Since  BoundedByteBufferPool is created just once in RpcServer, it shouldn't 
> leak 40 gigs of native memory.

We had a leak that went away when we disabled the RPC reservoir, so 
experimental observation counters your assertion. 

The change I pointed to is suspicious because in that change I replaced 
existing accounting with something using atomics, and now we have an 
experimentally confirmed leak from that code. 


> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609549#comment-16609549
 ] 

Andrew Purtell commented on HBASE-21162:


Last patch resolved the findbugs nit and tests are all green, so I am going to 
go ahead and commit this unless there is an objection. It is a Critical 
priority problem in branch-1.4 and branch-1. 

> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609544#comment-16609544
 ] 

Andrew Purtell edited comment on HBASE-21162 at 9/10/18 5:30 PM:
-

bq. Since  BoundedByteBufferPool is created just once in RpcServer, it 
shouldn't leak 40 gigs of native memory.

We had a leak that went away when we disabled the RPC reservoir, so 
experimental observation counters your assertion. 

The change I pointed to is suspicious because in that change I replaced 
existing accounting with something using atomics, and now we have an 
experimentally confirmed leak from that code. 

[~shahrs87] We aren't leaking instances of the class boundedbytebufferpool, we 
are leaking direct buffers from the pool itself. Hope that helps your 
understanding of the issue.


was (Author: apurtell):
bq. Since  BoundedByteBufferPool is created just once in RpcServer, it 
shouldn't leak 40 gigs of native memory.

We had a leak that went away when we disabled the RPC reservoir, so 
experimental observation counters your assertion. 

The change I pointed to is suspicious because in that change I replaced 
existing accounting with something using atomics, and now we have an 
experimentally confirmed leak from that code. 


> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21102) ServerCrashProcedure should select target server where no other replicas exist for the current region

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609555#comment-16609555
 ] 

Hudson commented on HBASE-21102:


Results for branch branch-2
[build #1229 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1229/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1229//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1229//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1229//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ServerCrashProcedure should select target server where no other replicas 
> exist for the current region
> -
>
> Key: HBASE-21102
> URL: https://issues.apache.org/jira/browse/HBASE-21102
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 3.0.0, 2.2.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
> Attachments: HBASE-21102_1.patch, HBASE-21102_2.patch, 
> HBASE-21102_3.patch, HBASE-21102_4.patch, HBASE-21102_initial.patch
>
>
> Currently, when a server hosting region replicas crashes, there is no 
> guarantee that the target server chosen for the replica region assignment 
> holds no other replica of the region being assigned. Today the assignment is 
> done randomly, and the LB later comes along, identifies these cases, and does 
> a MOVE for such regions. It would be better to identify target servers so 
> that, at a minimum, replicas are not colocated.
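
As an illustration of the last sentence of the description, a small hedged 
sketch of the kind of candidate filtering involved; the class and method names 
are invented here and this is not the logic from the attached patches.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import org.apache.hadoop.hbase.ServerName;

/**
 * Illustrative only (not the HBASE-21102 patch): keep candidate servers that
 * host no other replica of the region being assigned, falling back to the full
 * list when colocation is unavoidable (more replicas than servers).
 */
final class ReplicaAwarePlacementSketch {
  static List<ServerName> filterCandidates(List<ServerName> onlineServers,
      Set<ServerName> serversHostingOtherReplicas) {
    List<ServerName> candidates = new ArrayList<>();
    for (ServerName sn : onlineServers) {
      if (!serversHostingOtherReplicas.contains(sn)) {
        candidates.add(sn);
      }
    }
    return candidates.isEmpty() ? onlineServers : candidates;
  }
}
{code}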



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-10 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609598#comment-16609598
 ] 

stack commented on HBASE-20952:
---

Reading the doc. (Neither comments nor copy/paste are allowed -- please fix... and 
add a reference and the author here.) The study of other logging systems resulted 
in "...no significant influence on HBase WAL API design." though above Josh says 
"I can say that a significant portion of the direction was strongly influenced by 
Apache DistributedLog".

Then we have a listing of the classes involved in writing the WAL, with stuff 
like this:

{{WAL implements WALFileLengthProvider}}

Nothing on "What was the reasoning behind this API?"[Josh] or what it is -- its 
the WAL we have already? (Why even have a WALFileLengthProvider and not just 
add a length method on the WAL Interface? Does Replication just need lengths to 
work? If so, discussion?).

Then come the 'design considerations'. #1 is not a design consideration but a note 
that code has been refactored. #2 is that we should be able to choose the WAL via 
Configuration. #3 is aspirational: abstract classes should not be tied to an fs 
implementation. And #4 is a note on a WAL metadata capability, a concept first 
mentioned here but unexplained/unjustified until later.

I see no discussion of 'region' entity in here.







> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in a part). We need to also consider other methods 
> which were "bolted" on such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609624#comment-16609624
 ] 

Ted Yu commented on HBASE-20952:


Thanks for the helpful comments.

bq. comments nor copy/paste allowed

The Google Doc interface changed recently. I have given everyone the "comment" 
permission.

Let me study / think about every point raised above.

Will update the doc.

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in a part). We need to also consider other methods 
> which were "bolted" on such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-10 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609627#comment-16609627
 ] 

Mingliang Liu commented on HBASE-21173:
---

+1 (non-binding) 

Thanks for updating the patch, [~andrewcheng]. 

> Remove the duplicate HRegion#close in TestHRegion
> -
>
> Key: HBASE-21173
> URL: https://issues.apache.org/jira/browse/HBASE-21173
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21173.master.001.patch, 
> HBASE-21173.master.002.patch
>
>
>  After HBASE-21138, some test methods still have a duplicate 
> HRegion#close. So this issue is opened to remove the duplicate close.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609644#comment-16609644
 ] 

Zach York commented on HBASE-21098:
---

Anybody have any further comments? [~liuml07] [~yuzhih...@gmail.com]

Otherwise we can get this merged in.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.
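
For readers wanting the shape of the change: conceptually, the snapshot is 
staged on a faster filesystem and only the completed snapshot is moved under the 
S3 root. A hedged hbase-site.xml sketch follows; the property name and paths are 
assumptions made here for illustration, so check the committed patch and docs 
for the exact key.

{code:xml}
<!-- Illustrative only: the property name and paths below are assumptions, not
     taken from the patch. The idea is to stage snapshot manifests on HDFS while
     hbase.rootdir points at S3. -->
<property>
  <name>hbase.snapshot.working.dir</name>
  <value>hdfs://namenode:8020/hbase-staging/.snapshot-working</value>
</property>
{code}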



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609646#comment-16609646
 ] 

Mingliang Liu commented on HBASE-21098:
---

(again) +1 (non-binding) as my previous comments have been addressed. Thanks 
[~zyork]!

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Rushabh S Shah (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609651#comment-16609651
 ] 

Rushabh S Shah commented on HBASE-21162:


{quote}[~shahrs87] We aren't leaking instances of the class 
boundedbytebufferpool, we are leaking direct buffers from the pool itself. Hope 
that helps your understanding of the issue.
{quote}
Thank you [~apurtell] for elaborating. +1 non-binding.

> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 
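For operators hitting the same symptom, the mitigation described above maps to a 
single hbase-site.xml entry (the property name is the one quoted in this report; 
keeping it enabled but allocating the reservoir on-heap is the other option 
discussed here):

{code:xml}
<!-- Workaround from the incident above: disable the server-side IPC reservoir. -->
<property>
  <name>hbase.ipc.server.reservoir.enabled</name>
  <value>false</value>
</property>
{code}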



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21176) HBase 2.0.1, several inconsistency issues

2018-09-10 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609736#comment-16609736
 ] 

stack commented on HBASE-21176:
---

Mind asking up on the dev list [~oleh2020].

Thanks.

> HBase 2.0.1, several inconsistency issues
> -
>
> Key: HBASE-21176
> URL: https://issues.apache.org/jira/browse/HBASE-21176
> Project: HBase
>  Issue Type: Bug
>Reporter: Oleg Galitskiy
>Priority: Major
>
> Faced with several inconsistency issues in HBase 2.0.1:
> {code}
> ERROR: Region \{ meta => null, hdfs => 
> hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
>  deployed => 
> hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f.,
>  replicaId => 0 } not in META, but deployed on 
> hbase-region,16020,1536493017073
> ...
> ERROR: hbase:namespace has no state in meta
> ERROR: table1 has no state in meta
> ERROR: table2 has no state in meta
> 2018-09-09 21:40:04,155 INFO [main] util.HBaseFsck: Handling overlap merges 
> in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
> ERROR: There is a hole in the region chain between and . You need to create a 
> new .regioninfo and region dir in hdfs to plug the hole.
> ERROR: Found inconsistency in table test3
> {code}
> But in the 2.0.x HBase versions the options _-repair, -fix, -fixHdfsHoles, etc._
> were deprecated.
> How can I fix it without these options?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21176) HBase 2.0.1, several inconsistency issues

2018-09-10 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-21176.
---
Resolution: Not A Bug

> HBase 2.0.1, several inconsistency issues
> -
>
> Key: HBASE-21176
> URL: https://issues.apache.org/jira/browse/HBASE-21176
> Project: HBase
>  Issue Type: Bug
>Reporter: Oleg Galitskiy
>Priority: Major
>
> Faced with several inconsistency issues in HBase 2.0.1:
> {code}
> ERROR: Region \{ meta => null, hdfs => 
> hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
>  deployed => 
> hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f.,
>  replicaId => 0 } not in META, but deployed on 
> hbase-region,16020,1536493017073
> ...
> ERROR: hbase:namespace has no state in meta
> ERROR: table1 has no state in meta
> ERROR: table2 has no state in meta
> 2018-09-09 21:40:04,155 INFO [main] util.HBaseFsck: Handling overlap merges 
> in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
> ERROR: There is a hole in the region chain between and . You need to create a 
> new .regioninfo and region dir in hdfs to plug the hole.
> ERROR: Found inconsistency in table test3
> {code}
> But in the 2.0.x HBase versions the options _-repair, -fix, -fixHdfsHoles, etc._
> were deprecated.
> How can I fix it without these options?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21176) HBase 2.0.1, several inconsistency issues

2018-09-10 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609736#comment-16609736
 ] 

stack edited comment on HBASE-21176 at 9/10/18 8:18 PM:


Mind asking up on the dev list [~oleh2020]. Sorry, meant the user list.

Thanks.


was (Author: stack):
Mind asking up on the dev list [~oleh2020].

Thanks.

> HBase 2.0.1, several inconsistency issues
> -
>
> Key: HBASE-21176
> URL: https://issues.apache.org/jira/browse/HBASE-21176
> Project: HBase
>  Issue Type: Bug
>Reporter: Oleg Galitskiy
>Priority: Major
>
> Faced with several inconsistency issues in HBase 2.0.1:
> {code}
> ERROR: Region \{ meta => null, hdfs => 
> hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
>  deployed => 
> hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f.,
>  replicaId => 0 } not in META, but deployed on 
> hbase-region,16020,1536493017073
> ...
> ERROR: hbase:namespace has no state in meta
> ERROR: table1 has no state in meta
> ERROR: table2 has no state in meta
> 2018-09-09 21:40:04,155 INFO [main] util.HBaseFsck: Handling overlap merges 
> in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
> ERROR: There is a hole in the region chain between and . You need to create a 
> new .regioninfo and region dir in hdfs to plug the hole.
> ERROR: Found inconsistency in table test3
> {code}
> But in the 2.0.x HBase versions the options _-repair, -fix, -fixHdfsHoles, etc._
> were deprecated.
> How can I fix it without these options?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21176) HBase 2.0.1, several inconsistency issues

2018-09-10 Thread Oleg Galitskiy (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609760#comment-16609760
 ] 

Oleg Galitskiy commented on HBASE-21176:


[~stack], sorry, I'm not sure I understand; where can I ask about it? Thanks.

> HBase 2.0.1, several inconsistency issues
> -
>
> Key: HBASE-21176
> URL: https://issues.apache.org/jira/browse/HBASE-21176
> Project: HBase
>  Issue Type: Bug
>Reporter: Oleg Galitskiy
>Priority: Major
>
> Faced with several inconsistency issues in HBase 2.0.1:
> {code}
> ERROR: Region \{ meta => null, hdfs => 
> hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
>  deployed => 
> hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f.,
>  replicaId => 0 } not in META, but deployed on 
> hbase-region,16020,1536493017073
> ...
> ERROR: hbase:namespace has no state in meta
> ERROR: table1 has no state in meta
> ERROR: table2 has no state in meta
> 2018-09-09 21:40:04,155 INFO [main] util.HBaseFsck: Handling overlap merges 
> in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
> ERROR: There is a hole in the region chain between and . You need to create a 
> new .regioninfo and region dir in hdfs to plug the hole.
> ERROR: Found inconsistency in table test3
> {code}
> But in the 2.0.x HBase versions the options _-repair, -fix, -fixHdfsHoles, etc._
> were deprecated.
> How can I fix it without these options?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HBASE-21176) HBase 2.0.1, several inconsistency issues

2018-09-10 Thread Oleg Galitskiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Galitskiy updated HBASE-21176:
---
Comment: was deleted

(was: [~stack], sorry, not sure that I understand, where I can ask about it? 
Thanks.)

> HBase 2.0.1, several inconsistency issues
> -
>
> Key: HBASE-21176
> URL: https://issues.apache.org/jira/browse/HBASE-21176
> Project: HBase
>  Issue Type: Bug
>Reporter: Oleg Galitskiy
>Priority: Major
>
> Faced with several inconsistency issues in HBase 2.0.1:
> {code}
> ERROR: Region \{ meta => null, hdfs => 
> hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
>  deployed => 
> hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f.,
>  replicaId => 0 } not in META, but deployed on 
> hbase-region,16020,1536493017073
> ...
> ERROR: hbase:namespace has no state in meta
> ERROR: table1 has no state in meta
> ERROR: table2 has no state in meta
> 2018-09-09 21:40:04,155 INFO [main] util.HBaseFsck: Handling overlap merges 
> in parallel. set hbasefsck.overlap.merge.parallel to false to run serially.
> ERROR: There is a hole in the region chain between and . You need to create a 
> new .regioninfo and region dir in hdfs to plug the hole.
> ERROR: Found inconsistency in table test3
> {code}
> But in the 2.0.x HBase versions the options _-repair, -fix, -fixHdfsHoles, etc._
> were deprecated.
> How can I fix it without these options?
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20704) Sometimes some compacted storefiles are not archived on region close

2018-09-10 Thread Francis Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Liu updated HBASE-20704:

Attachment: HBASE-20704.007.patch

> Sometimes some compacted storefiles are not archived on region close
> 
>
> Key: HBASE-20704
> URL: https://issues.apache.org/jira/browse/HBASE-20704
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0
>Reporter: Francis Liu
>Assignee: Francis Liu
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-20704.001.patch, HBASE-20704.002.patch, 
> HBASE-20704.003.patch, HBASE-20704.004.draft.patch, HBASE-20704.005.patch, 
> HBASE-20704.006.patch, HBASE-20704.007.patch
>
>
> During region close, compacted files which have not yet been archived by the 
> discharger are archived as part of the region closing process. It is 
> important that these files are wholly archived to ensure data consistency; 
> otherwise a storefile containing delete tombstones could be archived while older 
> storefiles containing cells that were supposed to be deleted are left 
> unarchived, thereby undeleting those cells. 
> On region close a compacted storefile is skipped from archiving if it has 
> read references (i.e. open scanners). This behavior is correct when the 
> discharger chore runs, but on region close consistency is of course more 
> important, so we should add a special case to ignore any references on the 
> storefile and go ahead and archive it. 
> Attached patch contains a unit test that reproduces the problem and the 
> proposed fix.
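The special case described above can be sketched as follows; the names 
(compactedFiles, hasReferences, archive) are illustrative, not the actual 
HStore internals:

{code:java}
// Sketch only, assuming hypothetical helpers; the real code lives in the store's
// compacted-file discharger path.
void removeCompactedFiles(boolean regionClosing) throws IOException {
  for (StoreFile sf : compactedFiles) {
    // Normal discharger runs skip files that still have open readers.
    if (!regionClosing && sf.hasReferences()) {
      continue;
    }
    // On region close, archive unconditionally so a tombstone-bearing file and the
    // older files it masks are moved together, preserving delete semantics.
    archive(sf);
  }
}
{code}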



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20704) Sometimes some compacted storefiles are not archived on region close

2018-09-10 Thread Francis Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609791#comment-16609791
 ] 

Francis Liu commented on HBASE-20704:
-

Uploading exactly the same file with a bumped-up rev number to kick off buildbot

> Sometimes some compacted storefiles are not archived on region close
> 
>
> Key: HBASE-20704
> URL: https://issues.apache.org/jira/browse/HBASE-20704
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0
>Reporter: Francis Liu
>Assignee: Francis Liu
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-20704.001.patch, HBASE-20704.002.patch, 
> HBASE-20704.003.patch, HBASE-20704.004.draft.patch, HBASE-20704.005.patch, 
> HBASE-20704.006.patch, HBASE-20704.007.patch
>
>
> During region close, compacted files which have not yet been archived by the 
> discharger are archived as part of the region closing process. It is 
> important that these files are wholly archived to ensure data consistency; 
> otherwise a storefile containing delete tombstones could be archived while older 
> storefiles containing cells that were supposed to be deleted are left 
> unarchived, thereby undeleting those cells. 
> On region close a compacted storefile is skipped from archiving if it has 
> read references (i.e. open scanners). This behavior is correct when the 
> discharger chore runs, but on region close consistency is of course more 
> important, so we should add a special case to ignore any references on the 
> storefile and go ahead and archive it. 
> Attached patch contains a unit test that reproduces the problem and the 
> proposed fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-09-10 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609851#comment-16609851
 ] 

Zach York commented on HBASE-20734:
---

[~reidchan] Thanks for the review, sorry I'm slow updating.
{quote}avoid unnecessary style changes like below:
{quote}
Would you prefer I submit a separate patch for style changes? This doesn't 
follow Java camel case conventions.

 
{quote}Don't we add two new references? I'm not sure about the ref count; what if 
{regionDir} and {walFS} never get initialized, which may not happen in the real 
world, but it is a logical problem.{quote}
Yes, two references were added... I changed this just to fix the TestHeapSize 
test, but it should be 53. The reason these are lazily initialized is that 
getting the values can result in an IOException, and I didn't want to change the 
constructor of HRegion to throw IOException since it is so widely used. In the 
real world this should not be an issue, as you mention. What is your suggestion 
for this?

 

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.branch-1.002.patch, HBASE-20734.branch-1.003.patch, 
> HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, 
> HBASE-20734.master.003.patch, HBASE-20734.master.004.patch, 
> HBASE-20734.master.005.patch, HBASE-20734.master.006.patch, 
> HBASE-20734.master.007.patch, HBASE-20734.master.008.patch, 
> HBASE-20734.master.009.patch, HBASE-20734.master.010.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find a proper (hopefully backward-compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.
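For context, a configuration where this matters is one with the root directory on 
S3 and the WAL directory on HDFS; a sketch of such an hbase-site.xml (paths are 
examples only):

{code:xml}
<property>
  <name>hbase.rootdir</name>
  <value>s3a://my-hbase-bucket/hbase</value>
</property>
<property>
  <name>hbase.wal.dir</name>
  <value>hdfs://namenode:8020/hbase-wal</value>
</property>
{code}

With the change discussed here, the recovered.edits written during log splitting 
would live under hbase.wal.dir as well, instead of under the (slower) rootdir.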



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-09-10 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-20734:
--
Attachment: HBASE-20734.master.011.patch

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.branch-1.002.patch, HBASE-20734.branch-1.003.patch, 
> HBASE-20734.master.001.patch, HBASE-20734.master.002.patch, 
> HBASE-20734.master.003.patch, HBASE-20734.master.004.patch, 
> HBASE-20734.master.005.patch, HBASE-20734.master.006.patch, 
> HBASE-20734.master.007.patch, HBASE-20734.master.008.patch, 
> HBASE-20734.master.009.patch, HBASE-20734.master.010.patch, 
> HBASE-20734.master.011.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find a proper (hopefully backward-compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-10 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609869#comment-16609869
 ] 

Ted Yu commented on HBASE-21173:


+1

> Remove the duplicate HRegion#close in TestHRegion
> -
>
> Key: HBASE-21173
> URL: https://issues.apache.org/jira/browse/HBASE-21173
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21173.master.001.patch, 
> HBASE-21173.master.002.patch
>
>
>  After HBASE-21138, some test methods still have a duplicate 
> HRegion#close. So open this issue to remove the duplicate close.
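Roughly, the cleanup is to rely on a single close in the test's tearDown rather 
than closing again inside each test method; a sketch (the helper name is an 
assumption, check TestHRegion for the exact shape):

{code:java}
@After
public void tearDown() throws Exception {
  if (region != null) {
    // Close the region (and its WAL) exactly once per test.
    HBaseTestingUtility.closeRegionAndWAL(region);
    region = null;
  }
}
{code}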



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-09-10 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609868#comment-16609868
 ] 

Josh Elser commented on HBASE-20952:


bq. the study of other logging systems resulted in "...no significant influence 
on HBase WAL API design." though above Josh says " I can say that a significant 
portion of the direction was strongly influenced by Apache DistributedLog".

I figure I should handle this one directly: to be honest, I shouldn't have 
dropped this comment here in HBase. That one is relevant for the Ratis 
LogService API. In that lengthy design doc that I put up and worked with a 
number of you on, it was a primary goal to design the HBase WAL API *only* to 
what HBase needs. This was to avoid the trap of trading one hard-dependency 
(HDFS) for another.

There was lots of chatter over on RATIS-272 (for those who want to be a part of 
that), and we got an initial API from that. The first wag at how we think the 
LogService should be implemented mimics the structure of how DistributedLog 
does things, which ultimately pushes us towards an API which looks like DL. 
Granted, I think what we have is more clean than DL, but that's where my 
comment came from.

I'd be remiss if I didn't at least acknowledge: of course we want to be aware of 
what general distributed log (the concept, not the project) APIs look like 
(otherwise, how do we know whether what we're building would work generically). 
However, we still want to approach this initially by understanding what the 
requirements are inside of HBase, and then finding a happy medium between the 
systems we know we want to support.

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
> Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in a part). We need to also consider other methods 
> which were "bolted" on such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}} should also be looked at to use WAL-APIs only).
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.
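To make the discussion concrete, a deliberately minimal strawman of the kind of 
surface being debated (names and shapes are purely illustrative, not an API 
proposed on this issue):

{code:java}
// Strawman only: durable append/sync primitives plus a tailing reader for
// replication, per the requirements listed above.
interface WriteAheadLogApi extends Closeable {
  long append(WALEdit edit) throws IOException;      // returns a sequence id
  void sync(long sequenceId) throws IOException;     // durability barrier
  WalTailer openTailer(long fromSequenceId) throws IOException; // for replication
}
{code}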



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609885#comment-16609885
 ] 

Hudson commented on HBASE-21158:


Results for branch branch-2.0
[build #796 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/796/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/796//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/796//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/796//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch, 
> HBASE-21158.master.004.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:testcol1')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
>  testrow             column=f:testcol1, timestamp=1536218550827, value=testvalue1
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when the row contains an empty qualifier 
> column, the empty qualifier cell is always returned when using QualifierFilter.
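For readers reproducing this from the Java client rather than the shell, a 
minimal sketch of the second scan (assumes an existing Connection and the 
standard org.apache.hadoop.hbase.client / filter imports):

{code:java}
try (Table table = connection.getTable(TableName.valueOf("testTable"));
     ResultScanner scanner = table.getScanner(new Scan().setFilter(
         new QualifierFilter(CompareOperator.EQUAL,
             new BinaryComparator(Bytes.toBytes("testcol1")))))) {
  for (Result result : scanner) {
    for (Cell cell : result.listCells()) {
      // Before the fix, the cell with the empty qualifier "f:" also shows up here.
      System.out.println(Bytes.toString(CellUtil.cloneQualifier(cell)) + " = "
          + Bytes.toString(CellUtil.cloneValue(cell)));
    }
  }
}
{code}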



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-10 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-21173:
---
Priority: Minor  (was: Major)

> Remove the duplicate HRegion#close in TestHRegion
> -
>
> Key: HBASE-21173
> URL: https://issues.apache.org/jira/browse/HBASE-21173
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Minor
> Attachments: HBASE-21173.master.001.patch, 
> HBASE-21173.master.002.patch
>
>
>  After HBASE-21138, some test methods still have a duplicate 
> HRegion#close. So open this issue to remove the duplicate close.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609891#comment-16609891
 ] 

Hudson commented on HBASE-21158:


Results for branch branch-2
[build #1230 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1230/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1230//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1230//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1230//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch, 
> HBASE-21158.master.004.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:testcol1')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
>  testrow             column=f:testcol1, timestamp=1536218550827, value=testvalue1
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when the row contains an empty qualifier 
> column, the empty qualifier cell is always returned when using QualifierFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramie Raufdeen reassigned HBASE-19418:
--

Assignee: Ramie Raufdeen

> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 
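The shape of the change is to read the delay range from the Configuration instead 
of the hard-coded constant; a sketch with a made-up property key (the key and 
default actually chosen by the patch may differ):

{code:java}
// Hypothetical property name, for illustration only.
int rangeOfDelayMs = conf.getInt(
    "hbase.regionserver.periodicflusher.rangeofdelay",
    5 * 60 * 1000); // falls back to the current fixed 5 minute RANGE_OF_DELAY
{code}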



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609903#comment-16609903
 ] 

Hudson commented on HBASE-21158:


Results for branch branch-2.1
[build #306 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/306/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/306//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/306//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/306//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch, 
> HBASE-21158.master.004.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:testcol1')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
>  testrow             column=f:testcol1, timestamp=1536218550827, value=testvalue1
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when the row contains an empty qualifier 
> column, the empty qualifier cell is always returned when using QualifierFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609922#comment-16609922
 ] 

Zach York commented on HBASE-21098:
---

Sorry for missing your earlier +1 Mingliang!

I'll be pushing this tomorrow if nobody has any additional comments.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21158) Empty qualifier cell is always returned when using QualifierFilter

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609923#comment-16609923
 ] 

Andrew Purtell commented on HBASE-21158:


+1 from me

> Empty qualifier cell is always returned when using QualifierFilter
> --
>
> Key: HBASE-21158
> URL: https://issues.apache.org/jira/browse/HBASE-21158
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21158.master.001.patch, 
> HBASE-21158.master.002.patch, HBASE-21158.master.003.patch, 
> HBASE-21158.master.004.patch
>
>
> {code:xml}
> hbase(main):002:0> put 'testTable','testrow','f:testcol1','testvalue1'
> 0 row(s) in 0.0040 seconds
> hbase(main):003:0> put 'testTable','testrow','f:','testvalue2'
> 0 row(s) in 0.0070 seconds
> # get row with empty column f:, result is correct.
> hbase(main):004:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
> 1 row(s) in 0.0460 seconds
> # get row with column f:testcol1, result is incorrect.
> hbase(main):005:0> scan 'testTable',{FILTER => "QualifierFilter (=, 'binary:testcol1')"}
> ROW                  COLUMN+CELL
>  testrow             column=f:, timestamp=1536218563581, value=testvalue2
>  testrow             column=f:testcol1, timestamp=1536218550827, value=testvalue1
> 1 row(s) in 0.0070 seconds
> {code}
> As shown in the operations above, when the row contains an empty qualifier 
> column, the empty qualifier cell is always returned when using QualifierFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609925#comment-16609925
 ] 

Andrew Purtell commented on HBASE-21098:


[~zyork] Are you planning to commit this to branch-1 as well? Looks like it 
could go in to the next minor, 1.5. 

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609930#comment-16609930
 ] 

Zach York commented on HBASE-21098:
---

[~apurtell] I'll see how difficult the port is; Tyler mentioned earlier that he 
was going to be unavailable starting today. If it's too difficult, I'll create 
a follow-up Jira to backport.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609938#comment-16609938
 ] 

Andrew Purtell edited comment on HBASE-21098 at 9/11/18 12:10 AM:
--

Ok, ping me and I'll give it a shot if you need a hand. I have an interest in 
supporting S3 use cases well with branch-1.
Edit: That applies generally, so feel free to at-mention


was (Author: apurtell):
Ok, ping me and I'll give it a shot if you need a hand. I have an interest in 
supporting S3 use cases well with branch-1.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-09-10 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609938#comment-16609938
 ] 

Andrew Purtell commented on HBASE-21098:


Ok, ping me and I'll give it a shot if you need a hand. I have an interest in 
supporting S3 use cases well with branch-1.

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Priority: Major
> Attachments: HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used to make a point in 
> time recovery. To do this, HBase creates a manifest of all the files in all 
> of the Regions so that those files can be referenced again when a user 
> restores a snapshot. With HBase's S3 storage mode, developers can store their 
> data off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient in some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> with S3, a rename is a separate copy and then a delete of every file which is 
> no longer atomic and, in fact, quite costly. In addition, puts and deletes on 
> S3 have latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate into a single manifest. When 
> HBase on S3 users have a significant amount of regions, puts, deletes, and 
> renames (the final commit stage of the snapshot) become the bottleneck 
> causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-09-10 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-20734:
--
Attachment: HBASE-20734.branch-1.004.patch

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.branch-1.002.patch, HBASE-20734.branch-1.003.patch, 
> HBASE-20734.branch-1.004.patch, HBASE-20734.master.001.patch, 
> HBASE-20734.master.002.patch, HBASE-20734.master.003.patch, 
> HBASE-20734.master.004.patch, HBASE-20734.master.005.patch, 
> HBASE-20734.master.006.patch, HBASE-20734.master.007.patch, 
> HBASE-20734.master.008.patch, HBASE-20734.master.009.patch, 
> HBASE-20734.master.010.patch, HBASE-20734.master.011.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find a proper (hopefully backward-compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-09-10 Thread Zach York (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zach York updated HBASE-20734:
--
Attachment: HBASE-20734.master.012.patch

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch, 
> HBASE-20734.branch-1.002.patch, HBASE-20734.branch-1.003.patch, 
> HBASE-20734.branch-1.004.patch, HBASE-20734.master.001.patch, 
> HBASE-20734.master.002.patch, HBASE-20734.master.003.patch, 
> HBASE-20734.master.004.patch, HBASE-20734.master.005.patch, 
> HBASE-20734.master.006.patch, HBASE-20734.master.007.patch, 
> HBASE-20734.master.008.patch, HBASE-20734.master.009.patch, 
> HBASE-20734.master.010.patch, HBASE-20734.master.011.patch, 
> HBASE-20734.master.012.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance when hbase.wal.dir is configured to be on different (fast) media 
> than hbase rootdir w.r.t. recovered edits since recovered edits directory is 
> currently under rootdir.
> Such setup may not result in fast recovery when there is region server 
> failover.
> This issue is to find a proper (hopefully backward-compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21162) Revert suspicious change to BoundedByteBufferPool and disable use of direct buffers for IPC reservoir by default

2018-09-10 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21162:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Revert suspicious change to BoundedByteBufferPool and disable use of direct 
> buffers for IPC reservoir by default
> 
>
> Key: HBASE-21162
> URL: https://issues.apache.org/jira/browse/HBASE-21162
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.7
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Critical
> Fix For: 1.5.0, 1.4.8
>
> Attachments: HBASE-21162-branch-1.patch, HBASE-21162-branch-1.patch, 
> HBASE-21162-branch-1.patch
>
>
> We had a production incident where we traced the issue to a direct buffer 
> leak. On a hunch we tried setting hbase.ipc.server.reservoir.enabled = false 
> and after that no native memory leak could be observed in any regionserver 
> process under the triggering load. 
> On HBASE-19239 (Fix findbugs and error-prone issues) I made a change to 
> BoundedByteBufferPool that is suspicious given this finding. It was committed 
> to branch-1.4 and branch-1. I'm going to revert this change. 
> In addition the allocation of direct memory for the server RPC reservoir is a 
> bit problematic in that tracing native memory or direct buffer leaks to a 
> particular class or compilation unit is difficult, so I also propose 
> allocating the reservoir on the heap by default instead. Should there be a 
> leak it is much easier to do an analysis of a heap dump with familiar tools 
> to find it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21181) Use the same filesystem for wal archive directory and wal directory

2018-09-10 Thread Tak Lon (Stephen) Wu (JIRA)
Tak Lon (Stephen) Wu created HBASE-21181:


 Summary: Use the same filesystem for wal archive directory and wal 
directory
 Key: HBASE-21181
 URL: https://issues.apache.org/jira/browse/HBASE-21181
 Project: HBase
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1
Reporter: Tak Lon (Stephen) Wu
Assignee: Tak Lon (Stephen) Wu


When {{hbase.wal.dir}} is set to any filesystem other than the one used by 
rootDir, e.g. {{hbase.wal.dir}} set to {{hdfs://something/wal}} and 
{{hbase.rootdir}} set to {{s3://something}}, then before this change the WAL 
archive directory ({{walArchiveDir}}) cannot be created, which fails the 
WALProcedureStore on the HMaster.

The issue is that the WAL archive directory was assumed to be collocated with 
{{hbase.rootdir}}, so a subdirectory is created under it; this logic needs to be 
updated.
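The fix direction can be sketched as resolving the archive directory against the 
WAL filesystem rather than against rootDir (simplified; the actual constant and 
helper names in HBase may differ):

{code:java}
// Simplified sketch: derive oldWALs from hbase.wal.dir's filesystem, not rootDir's.
Path walDir = new Path(conf.get("hbase.wal.dir", conf.get("hbase.rootdir")));
FileSystem walFs = walDir.getFileSystem(conf);
Path walArchiveDir = new Path(walDir, "oldWALs");
if (!walFs.exists(walArchiveDir) && !walFs.mkdirs(walArchiveDir)) {
  throw new IOException("Unable to create WAL archive directory " + walArchiveDir);
}
{code}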



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-09-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16609988#comment-16609988
 ] 

Hadoop QA commented on HBASE-20734:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
1s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} The patch hbase-common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} hbase-server: The patch generated 0 new + 365 
unchanged - 5 fixed = 365 total (was 370) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 24s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
22s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}118m 
25s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939176/HBASE-20734.master.011.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 15b2b3a9057d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit

[jira] [Updated] (HBASE-21181) Use the same filesystem for wal archive directory and wal directory

2018-09-10 Thread Tak Lon (Stephen) Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tak Lon (Stephen) Wu updated HBASE-21181:
-
Attachment: HBASE-21181.master.001.patch

> Use the same filesystem for wal archive directory and wal directory
> ---
>
> Key: HBASE-21181
> URL: https://issues.apache.org/jira/browse/HBASE-21181
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Major
> Attachments: HBASE-21181.master.001.patch
>
>
> When {{hbase.wal.dir}} is set to a filesystem other than the one backing 
> {{hbase.rootdir}} (e.g. {{hbase.wal.dir}} set to {{hdfs://something/wal}} and 
> {{hbase.rootdir}} set to {{s3://something}}), the WAL archive directory 
> ({{walArchiveDir}}) cannot be created before this change, which fails the 
> WALProcedureStore on the HMaster.
> The issue is that the WAL archive directory was assumed to be collocated with 
> {{hbase.rootdir}}, so it is created as a subdirectory under it; this logic 
> needs to be updated.
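
A minimal sketch of the described failure and its likely fix, assuming the change is to resolve {{walArchiveDir}} against the filesystem backing {{hbase.wal.dir}} rather than {{hbase.rootdir}}; the class name, fallback logic, and archive subdirectory handling below are illustrative assumptions, not the attached patch.

{code:java}
// Sketch only: assumes the fix resolves the archive directory against the
// filesystem of hbase.wal.dir instead of hbase.rootdir. Class name and
// fallback logic are illustrative, not HBASE-21181.master.001.patch.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalArchiveDirResolver {

  public static Path resolveWalArchiveDir(Configuration conf) throws IOException {
    // Fall back to hbase.rootdir when no dedicated WAL directory is configured.
    Path walRoot = new Path(conf.get("hbase.wal.dir", conf.get("hbase.rootdir")));
    // Use the filesystem backing the WAL directory (e.g. HDFS), not the one
    // backing hbase.rootdir (e.g. S3), so the archive subdirectory can be created.
    FileSystem walFs = walRoot.getFileSystem(conf);
    Path walArchiveDir = new Path(walRoot, "oldWALs");
    if (!walFs.exists(walArchiveDir) && !walFs.mkdirs(walArchiveDir)) {
      throw new IOException("Could not create WAL archive directory " + walArchiveDir);
    }
    return walArchiveDir;
  }
}
{code}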



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19418 started by Ramie Raufdeen.
--
> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
> Attachments: patch.diff
>
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 
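
A minimal sketch of what making RANGE_OF_DELAY configurable could look like, assuming the value is read from the RegionServer {{Configuration}} when the flusher chore is constructed; the config key and class wiring are assumptions for illustration, not the attached {{patch.diff}}.

{code:java}
// Sketch only: the config key below is an assumption; the attached patch.diff
// may name it differently or wire the value elsewhere.
import java.util.concurrent.ThreadLocalRandom;

import org.apache.hadoop.conf.Configuration;

public class PeriodicFlushDelay {
  // The hardcoded value the issue complains about: 5 minutes, in milliseconds.
  private static final int DEFAULT_RANGE_OF_DELAY_MS = 5 * 60 * 1000;

  private final int rangeOfDelayMs;

  public PeriodicFlushDelay(Configuration conf) {
    // Let operators with many regions and column families widen the window
    // over which periodic flushes are spread.
    this.rangeOfDelayMs = conf.getInt(
        "hbase.regionserver.periodicflusher.rangeofdelay.ms", // assumed key
        DEFAULT_RANGE_OF_DELAY_MS);
  }

  /** Random jitter in [0, rangeOfDelayMs) applied before each periodic flush. */
  public long nextDelayMs() {
    return ThreadLocalRandom.current().nextLong(rangeOfDelayMs);
  }
}
{code}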



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramie Raufdeen updated HBASE-19418:
---
Attachment: patch.diff

> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
> Attachments: patch.diff
>
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work stopped] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19418 stopped by Ramie Raufdeen.
--
> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
> Attachments: patch.diff
>
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19418 started by Ramie Raufdeen.
--
> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
> Attachments: patch.diff
>
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramie Raufdeen updated HBASE-19418:
---
Attachment: patch.diff
Status: Patch Available  (was: In Progress)

> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
> Attachments: patch.diff
>
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19418) RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.

2018-09-10 Thread Ramie Raufdeen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramie Raufdeen updated HBASE-19418:
---
Attachment: (was: patch.diff)

> RANGE_OF_DELAY in PeriodicMemstoreFlusher should be configurable.
> -
>
> Key: HBASE-19418
> URL: https://issues.apache.org/jira/browse/HBASE-19418
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Ramie Raufdeen
>Priority: Major
> Attachments: patch.diff
>
>
> When RSs have a LOT of regions and CFs, flushing everything within 5 minutes 
> is not always doable. It might be interesting to be able to increase the 
> RANGE_OF_DELAY. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

