[jira] [Updated] (HBASE-17615) Use nonce and procedure v2 for add/remove replication peer

2017-11-30 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17615:
-
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-15867

> Use nonce and procedure v2 for add/remove replication peer
> --
>
> Key: HBASE-17615
> URL: https://issues.apache.org/jira/browse/HBASE-17615
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18862) backport HBASE-15109 to branch-1.1,branch-1.2,branch-1.3

2017-11-30 Thread Yechao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274075#comment-16274075
 ] 

Yechao Chen commented on HBASE-18862:
-

bq. branch-1.1 is EOL. unscheduling.

branch-1.1 is EOL?

http://mirrors.ocf.berkeley.edu/apache/hbase/

The 1.2.x series is the current stable release line, it supersedes earlier 
release lines (the 1.1.x line is still seeing a regular cadence of bug fix 
releases for those who are not easily able to update)

> backport HBASE-15109 to branch-1.1,branch-1.2,branch-1.3
> 
>
> Key: HBASE-18862
> URL: https://issues.apache.org/jira/browse/HBASE-18862
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6, 1.1.12
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Critical
> Fix For: 1.3.2, 1.4.1, 1.2.8
>
> Attachments: HBASE-18862-branch-1.1-v1.patch, 
> HBASE-18862-branch-1.1.patch, HBASE-18862-branch-1.2-v1.patch, 
> HBASE-18862-branch-1.2.patch, HBASE-18862-branch-1.3-v1.patch, 
> HBASE-18862-branch-1.3.patch, HBASE-18862-branch-1.patch
>
>
> HBASE-15109 should apply to  branch-1.1,branch-1.2,branch-1.3 also.





[jira] [Updated] (HBASE-19397) Design procedures for ReplicationManager to notify peer change event from master

2017-11-30 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-19397:
-
Description: 
After we store peer state and peer queue information in an HBase table, an RS 
can no longer track peer config changes by adding a watcher znode.

So we need to design procedures for ReplicationManager to notify RSs of peer 
change events. The replication RPC interfaces that may be implemented as 
procedures are the following: 

{code}
1. addReplicationPeer
2. removeReplicationPeer
3. enableReplicationPeer
4. disableReplicationPeer
5. updateReplicationPeerConfig
{code}

BTW, RS states will still be stored in ZooKeeper, so when an RS crashes, the 
tracker that triggers transferring the crashed RS's queues will still be a 
ZooKeeper tracker; we do NOT need to implement that with procedures.

As we will release 2.0 in the next few weeks, and HBASE-15867 cannot be 
resolved before the release, I'd prefer to create a new feature branch for 
HBASE-15867. 





  was:
After we store peer states / peer queues information into hbase table,   RS can 
not tracker peer config change by adding watcher znode.   

So we need design procedures for ReplicationManager to notify peer change 
event.   the replication rpc interfaces which may be implemented by procedures 
are following: 

{code}
1. addReplicationPeer
2. removeReplicationPeer
3. enableReplicationPeer
4. disableReplicationPeer
5. updateReplicationPeerConfig
{code}

BTW,  our RS states will still be store in zookeeper,  so when RS crash, the 
tracker which will trigger to transfer queues of crashed RS will still be a 
Zookeeper Tracker.  we need NOT implement that by  procedures.  

As we will  release 2.0 in next weeks,  and the HBASE-15867 can not be resolved 
before the release,  so I'd prefer to create a new feature branch for 
HBASE-15867. 






> Design  procedures for ReplicationManager to notify peer change event from 
> master
> -
>
> Key: HBASE-19397
> URL: https://issues.apache.org/jira/browse/HBASE-19397
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>
> After we store peer state and peer queue information in an HBase table, an 
> RS can no longer track peer config changes by adding a watcher znode.
> So we need to design procedures for ReplicationManager to notify RSs of peer 
> change events. The replication RPC interfaces that may be implemented as 
> procedures are the following: 
> {code}
> 1. addReplicationPeer
> 2. removeReplicationPeer
> 3. enableReplicationPeer
> 4. disableReplicationPeer
> 5. updateReplicationPeerConfig
> {code}
> BTW, RS states will still be stored in ZooKeeper, so when an RS crashes, the 
> tracker that triggers transferring the crashed RS's queues will still be a 
> ZooKeeper tracker; we do NOT need to implement that with procedures.
> As we will release 2.0 in the next few weeks, and HBASE-15867 cannot be 
> resolved before the release, I'd prefer to create a new feature branch for 
> HBASE-15867. 





[jira] [Updated] (HBASE-19394) Issue on the publication feature of RS status with multicast (hbase.status.published) in multi-homed env

2017-11-30 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-19394:
-
Component/s: Client

> Issue on the publication feature of RS status with multicast 
> (hbase.status.published) in multi-homed env
> --
>
> Key: HBASE-19394
> URL: https://issues.apache.org/jira/browse/HBASE-19394
> Project: HBase
>  Issue Type: Bug
>  Components: Client, master
>Reporter: Toshihiro Suzuki
>
> Currently, when the publication feature is enabled 
> (hbase.status.published=true), the first network interface found is used:
> https://github.com/apache/hbase/blob/2e8bd0036dbdf3a99786e5531495d8d4cb51b86c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java#L268-L275
> This won't work when the host has multiple network interfaces and one that is 
> unreachable from the other nodes is selected. The interface used for 
> communication between cluster nodes should be configurable.





[jira] [Updated] (HBASE-19396) Fix flaky test TestHTableMultiplexerFlushCache

2017-11-30 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19396:
---
Status: Patch Available  (was: Open)

> Fix flaky test TestHTableMultiplexerFlushCache
> --
>
> Key: HBASE-19396
> URL: https://issues.apache.org/jira/browse/HBASE-19396
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.5.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HBASE-19396.branch-1.001.patch
>
>
> [INFO] Running org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
> [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.67 
> s <<< FAILURE! - in 
> org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
> [ERROR] 
> testOnRegionMove(org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache)
>   Time elapsed: 4.644 s  <<< FAILURE!
> java.lang.AssertionError: Did not find a new RegionServer to use
>   at 
> org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache.testOnRegionMove(TestHTableMultiplexerFlushCache.java:160)





[jira] [Updated] (HBASE-19396) Fix flaky test TestHTableMultiplexerFlushCache

2017-11-30 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19396:
---
Attachment: HBASE-19396.branch-1.001.patch

> Fix flaky test TestHTableMultiplexerFlushCache
> --
>
> Key: HBASE-19396
> URL: https://issues.apache.org/jira/browse/HBASE-19396
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.5.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HBASE-19396.branch-1.001.patch
>
>
> [INFO] Running org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
> [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.67 
> s <<< FAILURE! - in 
> org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
> [ERROR] 
> testOnRegionMove(org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache)
>   Time elapsed: 4.644 s  <<< FAILURE!
> java.lang.AssertionError: Did not find a new RegionServer to use
>   at 
> org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache.testOnRegionMove(TestHTableMultiplexerFlushCache.java:160)





[jira] [Created] (HBASE-19397) Design procedures for ReplicationManager to notify peer change event from master

2017-11-30 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-19397:


 Summary: Design  procedures for ReplicationManager to notify peer 
change event from master
 Key: HBASE-19397
 URL: https://issues.apache.org/jira/browse/HBASE-19397
 Project: HBase
  Issue Type: Sub-task
Reporter: Zheng Hu
Assignee: Zheng Hu


After we store peer state and peer queue information in an HBase table, an RS 
can no longer track peer config changes by adding a watcher znode.

So we need to design procedures for ReplicationManager to notify RSs of peer 
change events. The replication RPC interfaces that may be implemented as 
procedures are the following: 

{code}
1. addReplicationPeer
2. removeReplicationPeer
3. enableReplicationPeer
4. disableReplicationPeer
5. updateReplicationPeerConfig
{code}

BTW, RS states will still be stored in ZooKeeper, so when an RS crashes, the 
tracker that triggers transferring the crashed RS's queues will still be a 
ZooKeeper tracker; we do NOT need to implement that with procedures.

As we will release 2.0 in the next few weeks, and HBASE-15867 cannot be 
resolved before the release, I'd prefer to create a new feature branch for 
HBASE-15867. 
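The flow such a peer-change procedure might follow can be sketched as a small state machine driven by the master: persist the change, refresh every RS, then finish. All class, state, and step names below are illustrative assumptions, not the real procedure-v2 API:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a master-side peer-modification flow modeled as
// a state machine in the spirit of procedure v2. Names are hypothetical.
public class PeerProcedureSketch {
    public enum State { UPDATE_STORAGE, REFRESH_ON_RS, POST_PEER_MODIFICATION, DONE }

    private State state = State.UPDATE_STORAGE;

    // Drives the state machine to completion for the given region servers.
    public State run(List<String> regionServers) {
        while (state != State.DONE) {
            switch (state) {
                case UPDATE_STORAGE:
                    // 1. Persist the new peer state/config (e.g. in an HBase table).
                    state = State.REFRESH_ON_RS;
                    break;
                case REFRESH_ON_RS:
                    // 2. Notify every RS of the peer change; a real procedure
                    //    would dispatch one sub-procedure per RS and wait.
                    for (String rs : regionServers) {
                        System.out.println("refresh peer on " + rs);
                    }
                    state = State.POST_PEER_MODIFICATION;
                    break;
                default:
                    // 3. Finish up (enable/disable the peer, release locks).
                    state = State.DONE;
                    break;
            }
        }
        return state;
    }

    public static void main(String[] args) {
        new PeerProcedureSketch().run(Arrays.asList("rs1", "rs2"));
    }
}
```

Each of the five RPCs listed above would be one such procedure, differing only in what the storage-update and post-modification steps do.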









[jira] [Commented] (HBASE-19394) Issue on the publication feature of RS status with multicast (hbase.status.published) in multi-homed env

2017-11-30 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274068#comment-16274068
 ] 

Toshihiro Suzuki commented on HBASE-19394:
--

Also, the interface used on the client side should be configurable:
https://github.com/apache/hbase/blob/2e8bd0036dbdf3a99786e5531495d8d4cb51b86c/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java#L222


> Issue on the publication feature of RS status with multicast 
> (hbase.status.published) in multi-homed env
> --
>
> Key: HBASE-19394
> URL: https://issues.apache.org/jira/browse/HBASE-19394
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Toshihiro Suzuki
>
> Currently, when the publication feature is enabled 
> (hbase.status.published=true), the first network interface found is used:
> https://github.com/apache/hbase/blob/2e8bd0036dbdf3a99786e5531495d8d4cb51b86c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java#L268-L275
> This won't work when the host has multiple network interfaces and one that is 
> unreachable from the other nodes is selected. The interface used for 
> communication between cluster nodes should be configurable.





[jira] [Created] (HBASE-19396) Fix flaky test TestHTableMultiplexerFlushCache

2017-11-30 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-19396:
--

 Summary: Fix flaky test TestHTableMultiplexerFlushCache
 Key: HBASE-19396
 URL: https://issues.apache.org/jira/browse/HBASE-19396
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.5.0
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang
Priority: Minor


[INFO] Running org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
[ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.67 s 
<<< FAILURE! - in org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
[ERROR] 
testOnRegionMove(org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache)
  Time elapsed: 4.644 s  <<< FAILURE!
java.lang.AssertionError: Did not find a new RegionServer to use
at 
org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache.testOnRegionMove(TestHTableMultiplexerFlushCache.java:160)





[jira] [Commented] (HBASE-19395) [branch-1] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE

2017-11-30 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274060#comment-16274060
 ] 

Guanghao Zhang commented on HBASE-19395:


FYI [~stack] [~apurtell]

> [branch-1] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails 
> with NPE
> --
>
> Key: HBASE-19395
> URL: https://issues.apache.org/jira/browse/HBASE-19395
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Guanghao Zhang
>
> [INFO] Running 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 50.388 s <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> [ERROR] 
> testMasterOpsWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 8.903 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.test(TestEndToEndSplitTransaction.java:239)
>   at 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.testMasterOpsWhileSplitting(TestEndToEndSplitTransaction.java:148)





[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: (was: HBASE-19340-branch-1.2.batch)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex',SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it printed the following, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb may be missing the handling for this 
> argument: 
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?
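A minimal sketch of the kind of handling admin.rb could add, mirroring the MAX_FILESIZE/READONLY pattern quoted above. The Hash-based htd stand-in, the method name, and the FLUSH_POLICY line are illustrative assumptions, not the actual shell code:

```ruby
# Hypothetical stand-in for the admin.rb argument handling: consume
# SPLIT_POLICY (and FLUSH_POLICY) instead of silently ignoring them.
# 'htd' here is a plain Hash, not the real HTableDescriptor.
SPLIT_POLICY = 'SPLIT_POLICY'
FLUSH_POLICY = 'FLUSH_POLICY'

def apply_alter_args(htd, arg)
  # Existing pattern (paraphrased): known keys are deleted from 'arg'
  # as they are applied to the descriptor.
  htd[:max_filesize] = arg.delete('MAX_FILESIZE') if arg['MAX_FILESIZE']
  # Proposed addition: do the same for the policy attributes.
  htd[:split_policy] = arg.delete(SPLIT_POLICY) if arg[SPLIT_POLICY]
  htd[:flush_policy] = arg.delete(FLUSH_POLICY) if arg[FLUSH_POLICY]
  # Whatever is left in 'arg' is what the shell reports as
  # "Unknown argument ignored: ...".
  arg.keys
end
```

With handling like this, SPLIT_POLICY would no longer fall through to the "Unknown argument ignored" path.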





[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: HBASE-19340-branch-1.2.batch

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex',SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it printed the following, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb may be missing the handling for this 
> argument: 
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?





[jira] [Updated] (HBASE-15970) Move Replication Peers into an HBase table too

2017-11-30 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-15970:
-
Attachment: HBASE-15970.v3.patch

> Move Replication Peers into an HBase table too
> --
>
> Key: HBASE-15970
> URL: https://issues.apache.org/jira/browse/HBASE-15970
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Zheng Hu
> Attachments: HBASE-15970.v1.patch, HBASE-15970.v2.patch, 
> HBASE-15970.v3.patch
>
>
> Currently ReplicationQueuesHBaseTableImpl relies on ReplicationStateZkImpl to 
> track information about the available replication peers (used during 
> claimQueues). We can also move this into an HBase table instead of relying on 
> ZooKeeper.





[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Status: Open  (was: Patch Available)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex',SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it printed the following, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb may be missing the handling for this 
> argument: 
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?





[jira] [Commented] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274054#comment-16274054
 ] 

Anoop Sam John commented on HBASE-19386:


bq.// java.nio.Bits.unaligned() wrongly returns false on ppc (JDK-8165231),
We had this. For the JDK bug you refer to here, can you link it here too, 
please? Also reference it in a comment where you made the fix, so it is easy 
later to read and understand why the hard-coded checks are there.

> HBase UnsafeAvailChecker returns false on Arm64
> ---
>
> Key: HBASE-19386
> URL: https://issues.apache.org/jira/browse/HBASE-19386
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Assignee: Yuqi Gu
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19386.patch
>
>
> ARMv8 (Arm64) supports unaligned access, but UnsafeAvailChecker returns 
> false due to a JDK bug.
> The false return from UnsafeAvailChecker also causes failures in the HBase 
> unit tests FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, and 
> TestFuzzyRowAndColumnRangeFilter. 
> Enable Arm64 unaligned support by providing a hard-coded workaround for the 
> JDK bug.
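A minimal sketch of what such a hard-coded workaround could look like. This is not the actual UnsafeAvailChecker code, and the architecture list shown is an assumption about which archs the check would cover:

```java
// Illustrative sketch of the hard-coded workaround described above (not the
// actual UnsafeAvailChecker source): when java.nio.Bits.unaligned() wrongly
// reports false (JDK-8165231), fall back to an architecture allow-list.
public class UnalignedCheckSketch {
    // 'bitsUnaligned' stands in for the result of java.nio.Bits.unaligned().
    public static boolean isUnalignedCapable(String osArch, boolean bitsUnaligned) {
        if (bitsUnaligned) {
            return true;
        }
        // Hard-coded archs known to support unaligned access even though
        // Bits.unaligned() returns false for them (hypothetical list).
        return osArch.matches("aarch64|ppc64|ppc64le");
    }

    public static void main(String[] args) {
        System.out.println(
            isUnalignedCapable(System.getProperty("os.arch"), false));
    }
}
```

Anoop's point above is that the JDK bug ID should sit next to this allow-list in a comment, so the reason for the hard-coded check stays readable.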





[jira] [Created] (HBASE-19395) [branch-1] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE

2017-11-30 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-19395:
--

 Summary: [branch-1] 
TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE
 Key: HBASE-19395
 URL: https://issues.apache.org/jira/browse/HBASE-19395
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.5.0
Reporter: Guanghao Zhang


[INFO] Running org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 50.388 
s <<< FAILURE! - in 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
[ERROR] 
testMasterOpsWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
  Time elapsed: 8.903 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.test(TestEndToEndSplitTransaction.java:239)
at 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.testMasterOpsWhileSplitting(TestEndToEndSplitTransaction.java:148)





[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Status: Patch Available  (was: Open)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex',SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it printed the following, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb may be missing the handling for this 
> argument: 
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?





[jira] [Commented] (HBASE-15970) Move Replication Peers into an HBase table too

2017-11-30 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274057#comment-16274057
 ] 

Zheng Hu commented on HBASE-15970:
--

patch.v3: Fix checkstyle and re-trigger Hadoop QA. 

> Move Replication Peers into an HBase table too
> --
>
> Key: HBASE-15970
> URL: https://issues.apache.org/jira/browse/HBASE-15970
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Zheng Hu
> Attachments: HBASE-15970.v1.patch, HBASE-15970.v2.patch, 
> HBASE-15970.v3.patch
>
>
> Currently ReplicationQueuesHBaseTableImpl relies on ReplicationStateZkImpl to 
> track information about the available replication peers (used during 
> claimQueues). We can also move this into an HBase table instead of relying on 
> ZooKeeper.





[jira] [Created] (HBASE-19394) Issue on the publication feature of RS status with multicast (hbase.status.published) in multi-homed env

2017-11-30 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created HBASE-19394:


 Summary: Issue on the publication feature of RS status with 
multicast (hbase.status.published) in multi-homed env
 Key: HBASE-19394
 URL: https://issues.apache.org/jira/browse/HBASE-19394
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Toshihiro Suzuki


Currently, when the publication feature is enabled 
(hbase.status.published=true), the first network interface found is used:
https://github.com/apache/hbase/blob/2e8bd0036dbdf3a99786e5531495d8d4cb51b86c/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java#L268-L275

This won't work when the host has multiple network interfaces and one that is 
unreachable from the other nodes is selected. The interface used for 
communication between cluster nodes should be configurable.
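One way to make this configurable is to look the interface up by an operator-configured name first and only fall back to scanning. A minimal sketch, assuming a hypothetical hbase.status.multicast.ni.name property (not an existing HBase setting); only the java.net calls are real APIs:

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

// Sketch of a configurable interface lookup for the status publisher.
// The property name is hypothetical; the java.net APIs are real.
public class StatusInterfaceSketch {
    public static NetworkInterface choose(String configuredName)
            throws SocketException {
        if (configuredName != null) {
            // Prefer the operator-configured interface, if it exists.
            NetworkInterface ni = NetworkInterface.getByName(configuredName);
            if (ni != null) {
                return ni;
            }
        }
        // Fallback: roughly the current behavior, i.e. the first
        // multicast-capable interface found while scanning.
        for (NetworkInterface ni
                : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (ni.supportsMulticast()) {
                return ni;
            }
        }
        return null;
    }

    public static void main(String[] args) throws SocketException {
        NetworkInterface ni =
            choose(System.getProperty("hbase.status.multicast.ni.name"));
        System.out.println(ni == null ? "none" : ni.getName());
    }
}
```

The same lookup-by-name idea applies to the client-side ClusterStatusListener mentioned in the comments.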






[jira] [Commented] (HBASE-18467) nightly job needs to comment on jira

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274039#comment-16274039
 ] 

Hadoop QA commented on HBASE-18467:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 45s{color} 
| {color:red} HBASE-18467 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18467 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900154/HBASE-18467.1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10169/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> nightly job needs to comment on jira
> 
>
> Key: HBASE-18467
> URL: https://issues.apache.org/jira/browse/HBASE-18467
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-18467.0.WIP.patch, HBASE-18467.0.patch, 
> HBASE-18467.1.patch, HBASE-18467.1.patch
>
>
> follow on from HBASE-18147, need a post action that pings all newly-committed 
> jiras with result of the branch build





[jira] [Commented] (HBASE-19366) Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region scanner to read data

2017-11-30 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274034#comment-16274034
 ] 

Guanghao Zhang commented on HBASE-19366:


The findbugs warning is not related.

> Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region 
> scanner to read data
> --
>
> Key: HBASE-19366
> URL: https://issues.apache.org/jira/browse/HBASE-19366
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: stack
>Assignee: Guanghao Zhang
> Fix For: 1.4.1, 1.5.0
>
> Attachments: HBASE-19035.branch-1.2.001.patch, 
> HBASE-19366.branch-1.001.patch, HBASE-19366.branch-1.001.patch, 
> HBASE-19366.branch-1.001.patch, HBASE-19366.branch-1.3.001.patch
>
>
> Making subissue to backport the parent issue to branch-1. I'll attach first 
> attempt at a backport. It is failing in TestRegionServerMetrics in an assert.
> Making a new issue because time has elapsed since parent went into master and 
> branch-1 and I want to resolve the parent. Thanks. FYI [~zghaobac] If you've 
> input, just say sir and I can take another look.





[jira] [Commented] (HBASE-19366) Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region scanner to read data

2017-11-30 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274033#comment-16274033
 ] 

Guanghao Zhang commented on HBASE-19366:


I tried the UT on branch-1 without the patch. The UT still failed.

[INFO] Running org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.835 
s <<< FAILURE! - in 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
[ERROR] 
testMasterOpsWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
  Time elapsed: 9.383 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.test(TestEndToEndSplitTransaction.java:239)
at 
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction.testMasterOpsWhileSplitting(TestEndToEndSplitTransaction.java:148)

[INFO] Running org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
[ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.215 
s <<< FAILURE! - in 
org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
[ERROR] 
testOnRegionMove(org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache)
  Time elapsed: 2.644 s  <<< FAILURE!
java.lang.AssertionError: Did not find a new RegionServer to use
at 
org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache.testOnRegionMove(TestHTableMultiplexerFlushCache.java:160)

Let me take a look at whether there are JIRA issues filed to handle these.

> Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region 
> scanner to read data
> --
>
> Key: HBASE-19366
> URL: https://issues.apache.org/jira/browse/HBASE-19366
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: stack
>Assignee: Guanghao Zhang
> Fix For: 1.4.1, 1.5.0
>
> Attachments: HBASE-19035.branch-1.2.001.patch, 
> HBASE-19366.branch-1.001.patch, HBASE-19366.branch-1.001.patch, 
> HBASE-19366.branch-1.001.patch, HBASE-19366.branch-1.3.001.patch
>
>
> Making subissue to backport the parent issue to branch-1. I'll attach first 
> attempt at a backport. It is failing in TestRegionServerMetrics in an assert.
> Making a new issue because time has elapsed since parent went into master and 
> branch-1 and I want to resolve the parent. Thanks. FYI [~zghaobac] If you've 
> input, just say sir and I can take another look.





[jira] [Commented] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274031#comment-16274031
 ] 

Anoop Sam John commented on HBASE-19384:


If bypass() is called on an ObserverContext within a bypassable CP hook, should 
we just stop calling the remaining CP hooks? The meaning of bypass would then 
become the old complete plus the old bypass (bypassing the core logic).
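The proposed semantics can be sketched with a toy model (hypothetical code, not the real HBase Coprocessor API): if the chain short-circuits at the first hook that calls bypass(), a later coprocessor returning null can no longer clobber the bypassed result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Toy model of the two bypass semantics under discussion. "Context" and
// "runChain" are stand-ins for illustration only.
public class BypassModel {
  public static class Context {
    public boolean bypassed;
    public void bypass() { bypassed = true; }
  }

  // Run the hook chain; with short-circuiting, stop at the first bypass.
  public static String runChain(List<Function<Context, String>> hooks,
                                boolean shortCircuitOnBypass) {
    Context ctx = new Context();
    String result = null;
    for (Function<Context, String> hook : hooks) {
      result = hook.apply(ctx);
      if (shortCircuitOnBypass && ctx.bypassed) {
        break; // proposed semantics: remaining CP hooks are not called
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<Function<Context, String>> hooks = new ArrayList<>();
    hooks.add(ctx -> { ctx.bypass(); return "result-from-first-cp"; });
    hooks.add(ctx -> null); // a second CP with no preAppend implementation

    // Current behavior: the second hook's null overrides the bypassed result.
    System.out.println(runChain(hooks, false)); // prints: null
    // Proposed behavior: the chain stops at the bypass, the result survives.
    System.out.println(runChain(hooks, true));  // prints: result-from-first-cp
  }
}
```

This matches the failure mode reported for HBASE-19384: Phoenix's preAppend result is replaced with null by a later coprocessor even though bypass was requested.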

> Results returned by preAppend hook in a coprocessor are replaced with null 
> from other coprocessor even on bypass
> 
>
> Key: HBASE-19384
> URL: https://issues.apache.org/jira/browse/HBASE-19384
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
>
> Phoenix adding multiple coprocessors for a table and one of them has 
> preAppend and preIncrement implementation and bypass the operations by 
> returning the results. But the other coprocessors which doesn't have any 
> implementation returning null and the results returned by previous 
> coprocessor are override by null and always going with default implementation 
> of append and increment operations. But it's not the case with old versions 
> and works fine on bypass.





[jira] [Updated] (HBASE-18467) nightly job needs to comment on jira

2017-11-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18467:
--
Attachment: HBASE-18467.1.patch

Retry

> nightly job needs to comment on jira
> 
>
> Key: HBASE-18467
> URL: https://issues.apache.org/jira/browse/HBASE-18467
> Project: HBase
>  Issue Type: Improvement
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HBASE-18467.0.WIP.patch, HBASE-18467.0.patch, 
> HBASE-18467.1.patch, HBASE-18467.1.patch
>
>
> follow on from HBASE-18147, need a post action that pings all newly-committed 
> jiras with result of the branch build





[jira] [Commented] (HBASE-19204) branch-1.2 times out and is taking 6-7 hours to complete

2017-11-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274027#comment-16274027
 ] 

stack commented on HBASE-19204:
---

Resolving. branch-1 doesn't do crazy timing out anymore. Last night the 
branch-1.2 nightly built w/o a test failure in ~4 hours total. Thanks for 
coming by, [~xiaochen]. Thanks for figuring out the likely hdfs hang culprit, 
[~chia7712].

> branch-1.2 times out and is taking 6-7 hours to complete
> 
>
> Key: HBASE-19204
> URL: https://issues.apache.org/jira/browse/HBASE-19204
> Project: HBase
>  Issue Type: Umbrella
>  Components: test
>Reporter: stack
>Assignee: stack
> Attachments: 19024.branch-1.2.004.patch, 
> HBASE-19024.branch-1.2.002.patch, HBASE-19024.branch-1.2.002.patch, 
> HBASE-19024.branch-1.2.003.patch, HBASE-19204.branch-1.2.001.patch, 
> HBASE-19204.branch-1.2.002.patch, HBASE-19204.branch-1.2.003.patch, 
> HBASE-19204.branch-1.2.004.patch, HBASE-19204.branch-1.2.005.patch, 
> HBASE-19204.branch-1.2.005.patch, HBASE-19204.branch-1.2.005.patch, 
> HBASE-19204.branch-1.2.005.patch, HBASE-19204.branch-1.2.006.patch, 
> HBASE-19204.branch-1.2.007.patch
>
>
> Sean has been looking at tooling and infra. This umbrella is about looking 
> at the actual tests. For example, running locally on a dedicated machine, I picked a 
> random test, TestPerColumnFamilyFlush. In my test run, it wrote 16M lines. It 
> seems to be having zk issues but it is catching interrupts and ignoring them 
> ([~carp84] fixed this in later versions over in HBASE-18441).
> Let me try and do some fixup under this umbrella so we can get a 1.2.7 out 
> the door.





[jira] [Updated] (HBASE-19392) TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master

2017-11-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19392:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-beta-1
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks [~huaxiang]. I noticed that last night's 
branch-2 build failed because this test failed. Thanks for the fix.

> TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master
> -
>
> Key: HBASE-19392
> URL: https://issues.apache.org/jira/browse/HBASE-19392
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.0-alpha-4
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19392-master-v001.patch
>
>
> Please see the flakey test list.
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html
> client.TestReplicaWithCluster 96.7% (29 / 30) 29 / 0 / 0  
> show/hide





[jira] [Commented] (HBASE-17745) Support short circuit connection for master services

2017-11-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274019#comment-16274019
 ] 

Anoop Sam John commented on HBASE-17745:


Shall we open a JIRA task and make this change? Then we can handle 
MasterKeepAliveConnection also.

> Support short circuit connection for master services
> 
>
> Key: HBASE-17745
> URL: https://issues.apache.org/jira/browse/HBASE-17745
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-17745.patch, HBASE-17745.v2.patch, 
> HBASE-17745.v2.trival.patch, HBASE-17745.v2.trival.patch, HBASE-17745.v3.patch
>
>
> As titled, now we have short circuit connection, but no short circuit for 
> master services, and we propose to support it in this JIRA.





[jira] [Commented] (HBASE-19125) TestReplicator is flaky

2017-11-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274014#comment-16274014
 ] 

stack commented on HBASE-19125:
---

Hopefully this is fixed by HBASE-19385

> TestReplicator is flaky
> ---
>
> Key: HBASE-19125
> URL: https://issues.apache.org/jira/browse/HBASE-19125
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 1.4.1, 1.5.0
>
>
> TestReplicator fails now and again. I had a look at the test. This is 
> something I contributed a while back but looking at it again it needs a 
> different approach. I'm going to disable it for now until this issue is 
> resolved. 





[jira] [Resolved] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-19385.
---
  Resolution: Fixed
Hadoop Flags: Reviewed

Pushed to branch-1.4 too at [~apurtell]'s request. Left the test disabled. 
Resolving.

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-19385.branch-1.3.001.patch
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is an IllegalMonitorStateException and indeed, the locking around 
> the latch is unsafe. After fixing this, I can't get it to fail locally anymore.





[jira] [Updated] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19385:
--
Fix Version/s: (was: 1.4.1)
   1.4.0

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-19385.branch-1.3.001.patch
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is an IllegalMonitorStateException and indeed, the locking around 
> the latch is unsafe. After fixing this, I can't get it to fail locally anymore.





[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274008#comment-16274008
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
When we switch away from restore-via-snapshot and have proper transactions, 
does that mean this extra table will go away?
{quote}

Yes, but you need to understand that proper Tx management is a hard task in 
this case. It is even harder than classic Tx management. A DB Tx gets rolled 
back automatically in case of a collision (updates to the same record), but we 
would have to merge these updates correctly, because backup sessions always 
update shared records. Is it worth doing? Only an Admin can run backups, and 
what is the use case where an Admin starts two sessions in parallel when they 
could be run serially?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the meta 
> table is small; it should usually fit in a single region.
> # When an operation fails on the server side, we handle the failure by cleaning 
> up partial data in the backup destination, followed by restoring the backup 
> meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for example), 
> the next time the user tries create/merge/delete they will see an error message 
> that the system is in an inconsistent state and repair is required; they will 
> need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObserver's), we introduce a small table ONLY to keep a listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when the 
> system performs an automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care about 
> the consistency of this table, because bulk load is an idempotent operation and 
> can be repeated after failure. Partially written data in the second table does not 
> affect the BackupHFileCleaner plugin, because this data (the list of bulk loaded 
> files) corresponds to files which have not yet been loaded successfully and, 
> hence, are not visible to the system. 
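The snapshot/restore steps above can be sketched as a tiny model (hypothetical Java, not HBase code; the class and method names here are illustrative only): copy the small meta state before the operation begins, and restore that copy on failure so partial updates are discarded.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of rollback-via-snapshot for a small key/value "meta table".
public class MetaSnapshot {
  private Map<String, String> meta = new HashMap<>();
  private Map<String, String> snapshot;

  public void begin() {
    snapshot = new HashMap<>(meta); // cheap: the meta table is small
  }

  public void put(String key, String value) {
    meta.put(key, value);
  }

  public void commit() {
    snapshot = null; // keep the new state
  }

  public void rollback() {
    meta = snapshot; // restore the pre-operation state
    snapshot = null;
  }

  public Map<String, String> state() {
    return meta;
  }

  public static void main(String[] args) {
    MetaSnapshot m = new MetaSnapshot();
    m.put("backup:1", "COMPLETE");
    m.begin();
    m.put("backup:2", "RUNNING"); // partial update from a failing session
    m.rollback();
    System.out.println(m.state().containsKey("backup:2")); // prints: false
    System.out.println(m.state().containsKey("backup:1")); // prints: true
  }
}
```

Note the limitation discussed in the comments: any writes made between begin() and rollback() are lost wholesale, which is why bulk-load references go to a separate, idempotently rebuildable table.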





[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273991#comment-16273991
 ] 

Mike Drob commented on HBASE-19336:
---

Once you have ruby installed, {{gem install rubocop}} and then you should be 
able to run {{rubocop}} from your shell. QA Bot also provides a link to the 
output of the analysis in the table - for example 
https://builds.apache.org/job/PreCommit-HBASE-Build/10137/artifact/patchprocess/diff-patch-rubocop.txt

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign tables within a namespace from one group to 
> another by writing all the table names in the move_tables_rsgroup command. Allowing 
> all tables within a specified namespace to be assigned by only writing the 
> namespace name is useful.
> Usage is as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}





[jira] [Commented] (HBASE-18862) backport HBASE-15109 to branch-1.1,branch-1.2,branch-1.3

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273979#comment-16273979
 ] 

Hadoop QA commented on HBASE-18862:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
52s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
33s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:dca6535 |
| JIRA Issue 

[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273974#comment-16273974
 ] 

xinxin fan commented on HBASE-19336:


I found a rubocop plugin; it may be helpful.

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign tables within a namespace from one group to 
> another by writing all the table names in the move_tables_rsgroup command. Allowing 
> all tables within a specified namespace to be assigned by only writing the 
> namespace name is useful.
> Usage is as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}





[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273973#comment-16273973
 ] 

Guanghao Zhang commented on HBASE-19336:


bq. would you help to tell me how to run rubocop at local server? 
[~busbey] [~tedyu] [~appy] [~mdrob] Any ideas about this?

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign tables within a namespace from one group to 
> another by writing all the table names in the move_tables_rsgroup command. Allowing 
> all tables within a specified namespace to be assigned by only writing the 
> namespace name is useful.
> Usage is as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}





[jira] [Issue Comment Deleted] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-19336:
---
Comment: was deleted

(was: would you help to tell me how to run rubocop at local server? thanks, 
[~zghaobac])

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign tables within a namespace from one group to 
> another by writing all the table names in the move_tables_rsgroup command. Allowing 
> all tables within a specified namespace to be assigned by only writing the 
> namespace name is useful.
> Usage is as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}





[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273961#comment-16273961
 ] 

xinxin fan commented on HBASE-19336:


Would you help tell me how to run rubocop on a local server? Thanks, 
[~zghaobac]

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign tables within a namespace from one group to 
> another by writing all the table names in the move_tables_rsgroup command. Allowing 
> all tables within a specified namespace to be assigned by only writing the 
> namespace name is useful.
> Usage is as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}





[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273958#comment-16273958
 ] 

Mike Drob commented on HBASE-17852:
---

When we switch away from restore-via-snapshot and have proper transactions, 
does that mean this extra table will go away?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the meta 
> table is small; it should usually fit in a single region.
> # When an operation fails on the server side, we handle the failure by cleaning 
> up partial data in the backup destination, followed by restoring the backup 
> meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for example), 
> the next time the user tries create/merge/delete they will see an error message 
> that the system is in an inconsistent state and repair is required; they will 
> need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObserver's), we introduce a small table ONLY to keep a listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when the 
> system performs an automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care about 
> the consistency of this table, because bulk load is an idempotent operation and 
> can be repeated after failure. Partially written data in the second table does not 
> affect the BackupHFileCleaner plugin, because this data (the list of bulk loaded 
> files) corresponds to files which have not yet been loaded successfully and, 
> hence, are not visible to the system. 





[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273955#comment-16273955
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
There were concerns above on cross RS rpc to write the paths, I was trying to 
think of easiest way of avoiding that. How about returning the map as part of 
response here and then issue rpc to master from client side. It's easy and 
safer to retry from client side if remote resource isn't available.
I'd suggest going extra step, an easy one though - collect all paths on client 
side and do single put request. That'll give two benefits:
Will make it transactional incremental backup
If put fails repeatedly, you can either fail bulk load altogether, or throw 
error to user telling that these bulk loaded files failed to backup and that 
only full backup will include them.
{quote}

I will think about this and get back to you shortly, [~appy]. Thanks for the 
suggestion.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the meta 
> table is small; it should usually fit in a single region.
> # When an operation fails on the server side, we handle the failure by cleaning 
> up partial data in the backup destination, followed by restoring the backup 
> meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for example), 
> the next time the user tries create/merge/delete they will see an error message 
> that the system is in an inconsistent state and repair is required; they will 
> need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObserver's), we introduce a small table ONLY to keep a listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when the 
> system performs an automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care about 
> the consistency of this table, because bulk load is an idempotent operation and 
> can be repeated after failure. Partially written data in the second table does not 
> affect the BackupHFileCleaner plugin, because this data (the list of bulk loaded 
> files) corresponds to files which have not yet been loaded successfully and, 
> hence, are not visible to the system. 





[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273953#comment-16273953
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 12/1/17 5:22 AM:


{quote}
And to avoid full backup failures from affecting incremental backups (due to 
snapshot restore), you are putting bulk loaded paths data in a separate table, 
right?
{quote}

Yes, you are right.

{quote}
What happens if during an ongoing backup, I create some backup sets, but then 
the backup fails? Will snapshot restore remove my backup sets?
{quote}

Yes. Any modifications to the backup meta table made *during* a backup 
create/merge/delete session that *fails* will be lost. That is a current 
limitation. As a simple workaround, updates (*backup set operations only*) to 
the backup meta table can be disabled during these sessions. 


was (Author: vrodionov):
{quote}
What happens if during an ongoing backup, i create some backup sets, but then 
the backup fails? Snapshot restore will remove my backup sets?
{quote}

Yes. Any modifications to backup meta table *during* backup create/merge/delete 
session, which *fails* will be lost. It is the limitation currently. As a 
simple workaround, any updates (*backup sets operations only*) to backup meta 
table can be disabled during these sessions. 

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (the backup system table). This procedure is lightweight because 
> the meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries to create/merge/delete they will see an 
> error message that the system is in an inconsistent state and that repair is 
> required; they will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObservers), we introduce a small table used ONLY to keep the listing of 
> bulk loaded files. All backup observers work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs an automatic rollback, some data written by backup 
> observers during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and hence are not visible to the system.
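The rollback-via-snapshot protocol described in the list above can be sketched as a minimal in-memory simulation. This is Python for brevity only; the class and function names (`BackupMetaTable`, `run_backup_operation`) are hypothetical stand-ins, not the actual HBase Java implementation:

```python
import copy

class BackupMetaTable:
    """In-memory stand-in for the backup system table (hypothetical class)."""
    def __init__(self):
        self.rows = {}

    def snapshot(self):
        # Step 1: take a lightweight snapshot before create/delete/merge starts.
        return copy.deepcopy(self.rows)

    def restore(self, snap):
        # Step 2: on server-side failure, roll back to the snapshot.
        self.rows = copy.deepcopy(snap)

def run_backup_operation(meta, mutate, fail=False):
    """Run one backup operation under the rollback-via-snapshot protocol."""
    snap = meta.snapshot()
    try:
        mutate(meta)
        if fail:
            raise RuntimeError("simulated server-side failure")
    except RuntimeError:
        # Clean up partial data, then restore the meta table from the snapshot.
        meta.restore(snap)
        return False
    return True

meta = BackupMetaTable()
meta.rows["backup:1"] = "COMPLETE"
ok = run_backup_operation(
    meta, lambda m: m.rows.update({"backup:2": "RUNNING"}), fail=True)
print(ok, sorted(meta.rows))   # False ['backup:1'] -- partial write rolled back
```

The separate bulk-load table in items 4 and 5 exists precisely because it is *excluded* from this rollback, so observer writes survive a restore.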





[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273953#comment-16273953
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

{quote}
What happens if during an ongoing backup, I create some backup sets, but then 
the backup fails? Will snapshot restore remove my backup sets?
{quote}

Yes. Any modifications to the backup meta table made *during* a backup 
create/merge/delete session that *fails* will be lost. That is a current 
limitation. As a simple workaround, updates (*backup set operations only*) to 
the backup meta table can be disabled during these sessions. 
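The workaround described above — refusing backup-set updates while a session is in flight — can be sketched with a minimal guard. The session flag and class shape are assumptions for illustration, not the actual HBase implementation:

```python
class BackupMeta:
    """Hypothetical sketch of a meta table guarding backup-set mutations."""
    def __init__(self):
        self.session_active = False   # set while create/merge/delete runs
        self.backup_sets = {}

    def update_backup_set(self, name, tables):
        if self.session_active:
            # Workaround: refuse backup-set updates during a session so a
            # snapshot rollback cannot silently discard them.
            raise RuntimeError("backup session in progress; retry later")
        self.backup_sets[name] = list(tables)

m = BackupMeta()
m.session_active = True
try:
    m.update_backup_set("set1", ["t1"])
    blocked = False
except RuntimeError:
    blocked = True
m.session_active = False
m.update_backup_set("set1", ["t1"])   # succeeds once the session ends
print(blocked, m.backup_sets)   # True {'set1': ['t1']}
```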

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (the backup system table). This procedure is lightweight because 
> the meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries to create/merge/delete they will see an 
> error message that the system is in an inconsistent state and that repair is 
> required; they will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObservers), we introduce a small table used ONLY to keep the listing of 
> bulk loaded files. All backup observers work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs an automatic rollback, some data written by backup 
> observers during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and hence are not visible to the system.





[jira] [Commented] (HBASE-17441) precommit test "hadoopcheck" not properly testing Hadoop 3 profile

2017-11-30 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273933#comment-16273933
 ] 

Nick Dimiduk commented on HBASE-17441:
--

It seems the commit was accurate. Please disregard.

> precommit test "hadoopcheck" not properly testing Hadoop 3 profile
> --
>
> Key: HBASE-17441
> URL: https://issues.apache.org/jira/browse/HBASE-17441
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-17441.0.patch, HBASE-17441.001.branch-2.patch, 
> HBASE-17441.002.branch-2.patch
>
>
> HBASE-14061 made a change that caused building against hadoop 3 to fail, but 
> the hadoopcheck precommit test gave the change a +1.





[jira] [Commented] (HBASE-19326) Remove decommissioned servers from rsgroup

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273932#comment-16273932
 ] 

Hudson commented on HBASE-19326:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4147 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4147/])
HBASE-19326 Remove decommissioned servers from rsgroup (stack: rev 
cc3f804b07213f5e60e6ce775d7b4795eada448a)
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdmin.java
* (add) hbase-shell/src/main/ruby/shell/commands/remove_servers_rsgroup.rb
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroups.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminClient.java
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/TestRSGroupsBase.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminEndpoint.java
* (edit) hbase-shell/src/main/ruby/shell.rb
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManager.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupAdminServer.java
* (edit) 
hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupInfoManagerImpl.java
* (edit) hbase-rsgroup/src/main/protobuf/RSGroupAdmin.proto
* (edit) hbase-shell/src/main/ruby/hbase/rsgroup_admin.rb
* (edit) 
hbase-rsgroup/src/test/java/org/apache/hadoop/hbase/rsgroup/VerifyingRSGroupAdminClient.java


> Remove decommissioned servers from rsgroup
> --
>
> Key: HBASE-19326
> URL: https://issues.apache.org/jira/browse/HBASE-19326
> Project: HBase
>  Issue Type: New Feature
>  Components: rsgroup
>Affects Versions: 3.0.0, 2.0.0-beta-2
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 1.4.1, 1.5.0, 2.0.0-beta-1
>
> Attachments: HBASE-19326.master.001.patch, 
> HBASE-19326.master.002.patch, HBASE-19326.master.003.patch, 
> HBASE-19326.master.004.patch, HBASE-19326.master.005.patch
>
>
> In HBASE-18131, we added an hbase shell command {{clear_deadservers}} to 
> clear the dead server list in ServerManager.
> But rsgroup still contains these dead servers, so we should also remove dead 
> servers from the group information.





[jira] [Updated] (HBASE-18613) Race condition between master restart and test code when restoring distributed cluster after integration test

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-18613:
-
Fix Version/s: (was: 1.1.13)
   (was: 1.2.7)
   (was: 1.3.2)
   (was: 1.4.0)
   (was: 2.0.0)

Removing fixVersions from ticket closed as invalid.

> Race condition between master restart and test code when restoring 
> distributed cluster after integration test
> -
>
> Key: HBASE-18613
> URL: https://issues.apache.org/jira/browse/HBASE-18613
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
>
> Noticed the following in some internal testing (line numbers likely are 
> skewed)
> {noformat}
> 2017-08-16 21:20:25,557| 2017-08-16 21:20:25,553 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master connection
> 2017-08-16 21:20:25,557| com.google.protobuf.ServiceException: 
> org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to 
> master1.domain.com/10.0.2.131:16000 failed on local exception: 
> org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to 
> master1.domain.com/10.0.2.131:16000 is closing. Call id=581, waitTime=1
> 2017-08-16 21:20:25,557| at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
> 2017-08-16 21:20:25,558| at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> 2017-08-16 21:20:25,560| at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:62739)
> 2017-08-16 21:20:25,560| at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1448)
> 2017-08-16 21:20:25,561| at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManag
> er.java:2124)
> 2017-08-16 21:20:25,561| at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1712)
> 2017-08-16 21:20:25,562| at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getMaster(ConnectionManager.java:1701)
> 2017-08-16 21:20:25,562| at 
> org.apache.hadoop.hbase.DistributedHBaseCluster.getMasterAdminService(DistributedHBaseCluster.java:153)
> 2017-08-16 21:20:25,563| at 
> org.apache.hadoop.hbase.DistributedHBaseCluster.waitForActiveAndReadyMaster(DistributedHBaseCluster.java:184)
> 2017-08-16 21:20:25,563| at 
> org.apache.hadoop.hbase.HBaseCluster.waitForActiveAndReadyMaster(HBaseCluster.java:204)
> 2017-08-16 21:20:25,563| at 
> org.apache.hadoop.hbase.DistributedHBaseCluster.restoreMasters(DistributedHBaseCluster.java:278)
> 2017-08-16 21:20:25,563| at 
> org.apache.hadoop.hbase.DistributedHBaseCluster.restoreClusterStatus(DistributedHBaseCluster.java:239)
> 2017-08-16 21:20:25,563| at 
> org.apache.hadoop.hbase.HBaseCluster.restoreInitialStatus(HBaseCluster.java:235)
> 2017-08-16 21:20:25,564| at 
> org.apache.hadoop.hbase.IntegrationTestingUtility.restoreCluster(IntegrationTestingUtility.java:99)
> 2017-08-16 21:20:25,564| at 
> org.apache.hadoop.hbase.IntegrationTestBase.cleanUpCluster(IntegrationTestBase.java:200)
> 2017-08-16 21:20:25,564| at 
> org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover.cleanUpCluster(IntegrationTestDDLMasterFailover.java:146)
> 2017-08-16 21:20:25,564| at 
> org.apache.hadoop.hbase.IntegrationTestBase.cleanUp(IntegrationTestBase.java:140)
> 2017-08-16 21:20:25,564| at 
> org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:125)
> 2017-08-16 21:20:25,565| at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2017-08-16 21:20:25,565| at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 2017-08-16 21:20:25,565| at 
> org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover.main(IntegrationTestDDLMasterFailover.java:832)
> 2017-08-16 21:20:25,566| Caused by: 
> org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to 
> master1.domain.com/10.0.2.131:16000 failed on local exception: 
> org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to 
> master1.domain.com/10.0.2.131:16000 is closing. Call id=581, waitTime=1
> 2017-08-16 21:20:25,566| at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1258)
> 2017-08-16 21:20:25,566| at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1229)
> 2017-08-16 

[jira] [Commented] (HBASE-17441) precommit test "hadoopcheck" not properly testing Hadoop 3 profile

2017-11-30 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273929#comment-16273929
 ] 

Nick Dimiduk commented on HBASE-17441:
--

I find the following in the branch-1.1 commit history; it mentions this ticket 
but the subject and commit messages do not match. Please confirm.

{noformat}
ea67aca * HBASE-17441 Fix invalid quoting around hadoop-3 build in yetus 
personality
{noformat}

> precommit test "hadoopcheck" not properly testing Hadoop 3 profile
> --
>
> Key: HBASE-17441
> URL: https://issues.apache.org/jira/browse/HBASE-17441
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-17441.0.patch, HBASE-17441.001.branch-2.patch, 
> HBASE-17441.002.branch-2.patch
>
>
> HBASE-14061 made a change that caused building against hadoop 3 to fail, but 
> the hadoopcheck precommit test gave the change a +1.





[jira] [Commented] (HBASE-19366) Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region scanner to read data

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273927#comment-16273927
 ] 

Hadoop QA commented on HBASE-19366:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
26s{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
32s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 12s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HBASE-19388) Incorrect value is being set for Compaction Pressure in RegionLoadStats object inside HRegion class

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273918#comment-16273918
 ] 

Hudson commented on HBASE-19388:


FAILURE: Integrated in Jenkins build HBase-1.4 #1038 (See 
[https://builds.apache.org/job/HBase-1.4/1038/])
HBASE-19388 - Incorrect value is being set for Compaction Pressure in 
(apurtell: rev f8e6a56e1e231ae55db6a34279f979cc7399bed0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Incorrect value is being set for Compaction Pressure in RegionLoadStats 
> object inside HRegion class
> ---
>
> Key: HBASE-19388
> URL: https://issues.apache.org/jira/browse/HBASE-19388
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Harshal Jain
>Assignee: Harshal Jain
>Priority: Minor
> Fix For: 1.4.0, 1.3.2, 1.2.7, 2.0.0-beta-1
>
> Attachments: master.patch
>
>
> An incorrect value for compaction pressure is being set in the 
> RegionLoadStats object in the HRegion class. This is happening because of an 
> incorrect typecast from double to int. Logging this JIRA to fix it.
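The class of bug described — truncating a fractional pressure value by casting to int before scaling — can be illustrated with a minimal sketch. The scaling factor of 100 is an assumption for illustration; see HRegion.java in the attached patch for the real code:

```python
def pressure_to_stat_buggy(compaction_pressure):
    # Buggy: casting to int BEFORE scaling truncates any 0.0 <= p < 1.0 to 0,
    # so the reported stat loses all pressure information.
    return int(compaction_pressure) * 100

def pressure_to_stat_fixed(compaction_pressure):
    # Fixed: scale first, then truncate to an integer percentage.
    return int(compaction_pressure * 100)

print(pressure_to_stat_buggy(0.75))  # 0
print(pressure_to_stat_fixed(0.75))  # 75
```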





[jira] [Commented] (HBASE-19393) HTTP 413 FULL head while accessing HBase UI using SSL.

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273919#comment-16273919
 ] 

Hudson commented on HBASE-19393:


FAILURE: Integrated in Jenkins build HBase-1.4 #1038 (See 
[https://builds.apache.org/job/HBase-1.4/1038/])
HBASE-19393 HTTP 413 FULL head while accessing HBase UI using SSL. (apurtell: 
rev 21eb8ba6dd6c20c5e9d92ae0cec1e15243f2f4ab)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java


> HTTP 413 FULL head while accessing HBase UI using SSL. 
> ---
>
> Key: HBASE-19393
> URL: https://issues.apache.org/jira/browse/HBASE-19393
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.4.0
> Environment: SSL enabled for UI/REST. 
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13
>
> Attachments: HBASE-19393-branch-1.patch, HBASE-19393.patch
>
>
> For REST/UI we use a 64 KB header buffer size instead of the Jetty default of 
> 6 KB (?). But it turns out that we set it only for the _http_ protocol, not 
> for _https_. So if SSL is enabled it's quite easy to hit an HTTP 413 error. 
> Not relevant to branch-2 nor master because it's fixed there by HBASE-12894
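The per-connector nature of the bug (the 64 KB buffer applied only to the http connector, leaving https at the default) can be illustrated with a small simulation. The 6 KB default and the lookup-by-scheme model are assumptions for illustration, not the actual Jetty API:

```python
DEFAULT_HEADER_BUF = 6 * 1024    # assumed Jetty-era default header buffer
HBASE_HEADER_BUF = 64 * 1024     # the larger buffer HBase configures

def handle_request(header_bytes, connector_buf_sizes, scheme):
    # A server returns HTTP 413 when the request headers exceed the buffer
    # configured on the connector that accepted the request.
    limit = connector_buf_sizes.get(scheme, DEFAULT_HEADER_BUF)
    return 200 if header_bytes <= limit else 413

# The bug: only the http connector received the 64 KB buffer, so the very
# same request succeeds over plain http but fails over https.
bufs = {"http": HBASE_HEADER_BUF}   # "https" missing -> default applies
print(handle_request(10_000, bufs, "http"))    # 200
print(handle_request(10_000, bufs, "https"))   # 413
```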





[jira] [Commented] (HBASE-17885) Backport HBASE-15871 to branch-1

2017-11-30 Thread sunghan.suh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273914#comment-16273914
 ] 

sunghan.suh commented on HBASE-17885:
-

Can I know when this patch is going to be applied to 1.1.x?

> Backport HBASE-15871 to branch-1
> 
>
> Key: HBASE-17885
> URL: https://issues.apache.org/jira/browse/HBASE-17885
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.3.1, 1.2.5, 1.1.8
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.3.2, 1.4.1, 1.2.8
>
>
> Will try to rebase the branch-1 patch at the earliest. Hope the fix versions 
> are correct.





[jira] [Updated] (HBASE-18955) HBase client queries stale hbase:meta location with half-dead RegionServer

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-18955:
-
Fix Version/s: (was: 1.1.13)

branch-1.1 is EOL. Unscheduling.

> HBase client queries stale hbase:meta location with half-dead RegionServer
> --
>
> Key: HBASE-18955
> URL: https://issues.apache.org/jira/browse/HBASE-18955
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.1.12
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Attachments: HBASE-18995.001.branch-1.1.patch
>
>
> Have been investigating a case with [~tedyu] where, when a RegionServer 
> becomes "hung" (for no specific reason -- not the point), the client becomes 
> stuck trying to talk to this RegionServer, never exiting. This was eventually 
> tracked down to HBASE-15645. However, in testing the fix, I found that there 
> is an additional problem which only affects branch-1.1.
> When the RegionServer in the "half-dead" state is also hosting meta, the 
> hbase clients (both the one trying to read data and the client in the Master 
> trying to read meta in SSH) get stuck repeatedly trying to read meta from the 
> old location after meta has been reassigned.
> The general test outline goes like this:
> * Start at least 2 regionservers
> * Load some data into a table ({{hbase pe}} is great)
> * Find a region that is hosted by the same RS that is hosting meta
> * {{kill -SIGSTOP}} that RS hosting the user region and meta
> * Issue a {{get}} in the hbase-shell trying to read from that user region
> The expectation is that the ZK lock will expire for the STOP'ed RS, meta will 
> be reassigned, then the user regions will be reassigned, then the client will 
> get the result of the get without seeing an error (as long as this happens 
> within the hbase.client.operation.timeout value, of course).
> We see this happening on HBase 1.2.4 and 1.3.2-SNAPSHOT, but, on 
> 1.1.13-SNAPSHOT, the Master gets up to re-assigning meta, but then gets stuck 
> trying to read meta from the STOP'ed RS instead of where it re-assigned it. 
> Because of this, the regions stay in transition until the master is restarted 
> or the STOP'ed RS is CONT'ed. My best guess is that when the RS sees the 
> {{SIGCONT}}, it immediately begins stopping which is enough to kick the 
> client into refreshing the region location cache.





[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273901#comment-16273901
 ] 

Appy edited comment on HBASE-17852 at 12/1/17 4:00 AM:
---

Few questions:
Pardon me if my high-level analysis of the design is off. Is the following a 
correct description of the current design?
Start a bulk load from the client -> each RS gets its RPC to prepare and then 
does the actual bulk load -> internally, when the bulk load is done, 
BackupObserver#postBulkLoadHFile writes the paths to the backup table.
And to avoid full backup failures from affecting incremental backups (due to 
snapshot restore), you are putting the bulk loaded paths in a separate table, 
right?


There were concerns above about the cross-RS RPC used to write the paths, and I 
was trying to think of the easiest way of avoiding it. How about returning the 
[map as part of the response 
here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L2251]
 and then issuing the RPC to the master from the client side? It's easy and 
safer to retry from the client side if a remote resource isn't available.
I'd suggest going an extra step, an easy one though: collect all paths on the 
client side and do a single put request. That'll give two benefits:
- It will make the incremental backup transactional
- If the put fails repeatedly, you can either fail the bulk load altogether, or 
throw an error telling the user that these bulk loaded files failed to back up 
and that only a full backup will include them. 


What happens if during an ongoing backup, I create some backup sets, but then 
the backup fails? Will snapshot restore remove my backup sets?
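The client-side collection with a single put suggested above can be sketched as follows. This is a minimal simulation; the response shape and the `meta_put` callback are hypothetical stand-ins for the real RPC plumbing:

```python
def bulk_load_with_single_put(rs_responses, meta_put):
    """Collect the bulk-loaded file paths returned in each RS RPC response,
    then record them in the backup meta table with ONE put (all-or-nothing)."""
    all_paths = []
    for resp in rs_responses:
        all_paths.extend(resp["loaded_paths"])
    try:
        meta_put(all_paths)          # the single, transactional write
        return "RECORDED"
    except IOError:
        # Repeated failure: either fail the bulk load altogether, or warn the
        # user that these files will only be covered by the next full backup.
        return "WARN_FULL_BACKUP_ONLY"

recorded = []
status = bulk_load_with_single_put(
    [{"loaded_paths": ["/hfile/a"]}, {"loaded_paths": ["/hfile/b"]}],
    recorded.extend,   # stand-in for the put against the backup meta table
)
print(status, recorded)   # RECORDED ['/hfile/a', '/hfile/b']
```

The single put is what makes the bookkeeping atomic: either all paths from this bulk load are recorded, or none are.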


was (Author: appy):
Few questions:
Pardon me if my high level analysis of design is off. Is following correct 
description of current design?
Start bulkload from client -> each RS gets its RPC for prepare and then do the 
actual bulkload --> Internally when bulk load is 
done,BackupObserver#postBulkLoadHFile writes paths to backup table.
And to avoid full backup failures from affecting incremental backups (due to 
snapshot restore), you are putting bulk loaded paths data in a separate table, 
right?


There were concerns above on cross RS rpc to write the paths, I was trying to 
think of easiest way of avoiding that. How about returning the [map as part of 
response 
here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L2251]
 and then issue rpc to master from client side. It's easy and safer to retry 
from client side if remote resource isn't available.
I'd suggest going extra step, an easy one though -  collect all paths on client 
side and do single put request. That'll give two benefits:
- Will make it transactional incremental backup
- If put fails repeatedly, you can either fail bulk load altogether, or throw 
error to user telling that these bulk loaded files failed to backup and that 
only full backup will include them. 

What happens if during an ongoing backup, i create some backup sets, but then 
the backup fails? Snapshot restore will remove my backup sets?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (the backup system table). This procedure is lightweight because 
> the meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries to create/merge/delete they will see an 
> error message that the system is in an inconsistent state and that repair is 
> required; they will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObservers), we introduce a small table used ONLY to keep the listing of 
> bulk loaded files. All backup observers work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs an automatic rollback, some data 

[jira] [Updated] (HBASE-18891) Upgrade netty-all jar

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-18891:
-
Fix Version/s: (was: 1.1.13)

branch-1.1 is EOL. unscheduling.

> Upgrade netty-all jar
> -
>
> Key: HBASE-18891
> URL: https://issues.apache.org/jira/browse/HBASE-18891
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.3.2, 1.4.1, 1.2.8
>
> Attachments: HBASE-18891.001.branch-1.3.patch, 
> HBASE-18891.002.branch-1.3.patch, HBASE-18891.002.branch-1.3.patch
>
>
> Upgrade netty-all jar to 4.0.37.Final version to fix latest vulnerabilities 
> reported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19389) Limit concurrency of put with dense (hundreds) columns to prevent write handler exhaustion

2017-11-30 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-19389:
--
Summary: Limit concurrency of put with dense (hundreds) columns to prevent 
write handler exhaustion  (was: RS's handlers are all busy when writing many 
columns (more than 1000 columns) )

> Limit concurrency of put with dense (hundreds) columns to prevent write 
> handler exhaustion
> 
>
> Key: HBASE-19389
> URL: https://issues.apache.org/jira/browse/HBASE-19389
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 2.0.0
> Environment: 2000+ Region Servers
> PCI-E ssd
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0, 3.0.0
>
> Attachments: CSLM-concurrent-write.png, metrics-1.png, ycsb-result.png
>
>
> In a large cluster with a large number of clients, we found that the RS's 
> handlers are sometimes all busy. After investigation, we found the root cause 
> is the CSLM, e.g. heavy load in its compare function. We reviewed the related 
> WALs and found that many columns (more than 1000) were being written at that 
> time.





[jira] [Updated] (HBASE-17513) Thrift Server 1 uses different QOP settings than RPC and Thrift Server 2 and can easily be misconfigured so there is no encryption when the operator expects it.

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17513:
-
Fix Version/s: (was: 1.1.13)

branch-1.1 is EOL. unscheduling.

> Thrift Server 1 uses different QOP settings than RPC and Thrift Server 2 and 
> can easily be misconfigured so there is no encryption when the operator 
> expects it.
> 
>
> Key: HBASE-17513
> URL: https://issues.apache.org/jira/browse/HBASE-17513
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, security, Thrift, Usability
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 0.98.15, 1.0.3, 1.1.3
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.3.2, 1.4.1, 1.2.8
>
>
> As of HBASE-14400 the setting {{hbase.thrift.security.qop}} was unified to 
> behave the same as the general HBase RPC protection. However, this only 
> happened for the Thrift2 server. The Thrift server found in the thrift 
> package (aka Thrift Server 1) still hard codes the old configs of 'auth', 
> 'auth-int', and 'auth-conf'.
> Additionally, these Quality of Protection (qop) settings are used only by the 
> SASL transport. If a user configures the HBase Thrift Server to make use of 
> the HTTP transport (to enable doAs proxying e.g. for Hue) then a QOP setting 
> of 'privacy' or 'auth-conf' won't get them encryption as expected.
> We should
> 1) update {{hbase-thrift/src/main/.../thrift/ThriftServerRunner}} to rely on 
> {{SaslUtil}} to use the same 'authentication', 'integrity', 'privacy' configs 
> in a backward compatible way
> 2) also have ThriftServerRunner warn when both {{hbase.thrift.security.qop}} 
> and {{hbase.regionserver.thrift.http}} are set, since the latter will cause 
> the former to be ignored. (users should be directed to 
> {{hbase.thrift.ssl.enabled}} and related configs to ensure their transport is 
> encrypted when using the HTTP transport.)





[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273901#comment-16273901
 ] 

Appy edited comment on HBASE-17852 at 12/1/17 3:59 AM:
---

Few questions:
Pardon me if my high-level analysis of the design is off. Is the following a 
correct description of the current design?
Start a bulkload from the client -> each RS gets its RPC for prepare and then 
does the actual bulkload --> internally, when the bulk load is done, 
BackupObserver#postBulkLoadHFile writes the paths to the backup table.
And to avoid full backup failures from affecting incremental backups (due to 
snapshot restore), you are putting the bulk loaded paths in a separate table, 
right?


There were concerns above about the cross-RS RPC to write the paths; I was 
trying to think of the easiest way of avoiding that. How about returning the 
[map as part of the response 
here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L2251]
 and then issuing the RPC to the master from the client side? It's easy and 
safer to retry from the client side if the remote resource isn't available.
I'd suggest going an extra step, an easy one though: collect all paths on the 
client side and do a single put request. That'll give two benefits:
- It will make the incremental backup transactional.
- If the put fails repeatedly, you can either fail the bulk load altogether, or 
throw an error to the user telling them that these bulk loaded files failed to 
back up and that only a full backup will include them.

What happens if, during an ongoing backup, I create some backup sets but then 
the backup fails? Will the snapshot restore remove my backup sets?


was (Author: appy):
Few questions:
Pardon me if my high-level analysis of the design is off. Is the following a 
correct description of the current design?
Start a bulkload from the client -> each RS gets its RPC for prepare and then 
does the actual bulkload --> internally, when the bulk load is done, 
BackupObserver#postBulkLoadHFile writes the paths to the backup table.
And to avoid full backup failures from affecting incremental backups (due to 
snapshot restore), you are putting the bulk loaded paths in a separate table, 
right?

There were concerns above about the cross-RS RPC to write the paths; I was 
trying to think of the easiest way of avoiding that. How about returning the 
[map as part of the response 
here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L2251]
 and then issuing the RPC to the master from the client side? It's easy and 
safer to retry from the client side if the remote resource isn't available.
I'd suggest going an extra step, an easy one though: collect all paths on the 
client side and do a single put request. That'll give two benefits:
- It will make the incremental backup transactional.
- If the put fails repeatedly, you can either fail the bulk load altogether, or 
throw an error to the user telling them that these bulk loaded files failed to 
back up and that only a full backup will include them.

What happens if, during an ongoing backup, I create some backup sets but then 
the backup fails? Will the snapshot restore remove my backup sets?

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries a create/merge/delete, he (she) will 
> see an error message that the system is in an inconsistent state and repair is 
> required; he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (backup client and 
> BackupObserver's), we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data 

[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273901#comment-16273901
 ] 

Appy commented on HBASE-17852:
--

Few questions:
Pardon me if my high-level analysis of the design is off. Is the following a 
correct description of the current design?
Start a bulkload from the client -> each RS gets its RPC for prepare and then 
does the actual bulkload --> internally, when the bulk load is done, 
BackupObserver#postBulkLoadHFile writes the paths to the backup table.
And to avoid full backup failures from affecting incremental backups (due to 
snapshot restore), you are putting the bulk loaded paths in a separate table, 
right?

There were concerns above about the cross-RS RPC to write the paths; I was 
trying to think of the easiest way of avoiding that. How about returning the 
[map as part of the response 
here|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L2251]
 and then issuing the RPC to the master from the client side? It's easy and 
safer to retry from the client side if the remote resource isn't available.
I'd suggest going an extra step, an easy one though: collect all paths on the 
client side and do a single put request. That'll give two benefits:
- It will make the incremental backup transactional.
- If the put fails repeatedly, you can either fail the bulk load altogether, or 
throw an error to the user telling them that these bulk loaded files failed to 
back up and that only a full backup will include them.

What happens if, during an ongoing backup, I create some backup sets but then 
the backup fails? Will the snapshot restore remove my backup sets?
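The collect-paths-on-client idea above can be sketched as below. This is a minimal, self-contained simulation: the response and put shapes (lists and a map) are stand-ins, not the HBase backup API, and `recordBulkLoads` is a hypothetical name.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the suggestion: each RS returns its bulk-loaded file paths in the
// RPC response, the client aggregates them, and a single put records them all.
// Names and shapes are illustrative stand-ins, not the HBase backup API.
public class SinglePutSketch {

    // Aggregate per-RS responses and record them with one write. Because there
    // is exactly one put, either all paths are recorded or none are.
    public static Map<String, List<String>> recordBulkLoads(
            String backupId, List<List<String>> rsResponses) {
        List<String> allPaths = new ArrayList<>();
        for (List<String> response : rsResponses) {
            allPaths.addAll(response);
        }
        Map<String, List<String>> backupTable = new HashMap<>();
        backupTable.put(backupId, allPaths); // the single put request
        return backupTable;
    }

    public static void main(String[] args) {
        List<List<String>> responses = Arrays.asList(
            Arrays.asList("hdfs:///bulk/f1", "hdfs:///bulk/f2"),
            Arrays.asList("hdfs:///bulk/f3"));
        Map<String, List<String>> table = recordBulkLoads("backup_001", responses);
        System.out.println(table.get("backup_001").size()); // prints 3
    }
}
```

If the single put fails repeatedly, the caller sees that no paths were recorded and can fail the bulk load or warn the user, as described above.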

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries a create/merge/delete, he (she) will 
> see an error message that the system is in an inconsistent state and repair is 
> required; he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (backup client and 
> BackupObserver's), we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not been 
> loaded successfully yet and, hence, are not visible to the system 
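The rollback-via-snapshot flow in the description can be sketched roughly as below. This is a simulation under stated assumptions: a plain map stands in for the backup meta-table, and the class and method names are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of rollback-via-snapshot: snapshot the (small) backup meta-table
// before a create/delete/merge, and restore the snapshot if the operation
// fails on the server side. The map stands in for the meta-table.
public class RollbackViaSnapshotSketch {

    static final Map<String, String> metaTable = new HashMap<>();

    // Run a backup operation with snapshot-based rollback on failure.
    public static void runWithRollback(Runnable operation) {
        Map<String, String> snapshot = new HashMap<>(metaTable); // lightweight: table is small
        try {
            operation.run();
        } catch (RuntimeException serverSideFailure) {
            metaTable.clear();          // clean up partial data...
            metaTable.putAll(snapshot); // ...and restore the meta-table state
        }
    }

    public static void main(String[] args) {
        metaTable.put("backup_001", "COMPLETE");
        runWithRollback(() -> {
            metaTable.put("backup_002", "RUNNING"); // partial write
            throw new RuntimeException("simulated server-side failure");
        });
        System.out.println(metaTable.containsKey("backup_002")); // prints false
    }
}
```

Note how keeping bulk-load references out of `metaTable` (in the separate table from point 4 of the description) means a rollback like this cannot erase them.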





[jira] [Updated] (HBASE-18862) backport HBASE-15109 to branch-1.1,branch-1.2,branch-1.3

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-18862:
-
Fix Version/s: (was: 1.1.13)

branch-1.1 is EOL. unscheduling.

> backport HBASE-15109 to branch-1.1,branch-1.2,branch-1.3
> 
>
> Key: HBASE-18862
> URL: https://issues.apache.org/jira/browse/HBASE-18862
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6, 1.1.12
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Critical
> Fix For: 1.3.2, 1.4.1, 1.2.8
>
> Attachments: HBASE-18862-branch-1.1-v1.patch, 
> HBASE-18862-branch-1.1.patch, HBASE-18862-branch-1.2-v1.patch, 
> HBASE-18862-branch-1.2.patch, HBASE-18862-branch-1.3-v1.patch, 
> HBASE-18862-branch-1.3.patch, HBASE-18862-branch-1.patch
>
>
> HBASE-15109 should apply to  branch-1.1,branch-1.2,branch-1.3 also.





[jira] [Updated] (HBASE-14610) IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14610:
-
Fix Version/s: (was: 1.1.13)

> IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client
> --
>
> Key: HBASE-14610
> URL: https://issues.apache.org/jira/browse/HBASE-14610
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.4.1, 1.5.0, 1.2.8
>
> Attachments: output
>
>
> HBASE-14535 introduces an IT to simulate a running cluster with RPC servers 
> and RPC clients doing requests against the servers. 
> It passes with the sync client, but fails with async client. Probably we need 
> to take a look. 





[jira] [Updated] (HBASE-14391) Empty regionserver WAL will never be deleted although the coresponding regionserver has been stale

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14391:
-
Fix Version/s: (was: 1.1.13)

> Empty regionserver WAL will never be deleted although the coresponding 
> regionserver has been stale
> --
>
> Key: HBASE-14391
> URL: https://issues.apache.org/jira/browse/HBASE-14391
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.2
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.4.1, 1.5.0, 1.2.8
>
> Attachments: HBASE-14391-master-v3.patch, 
> HBASE_14391_master_v4.patch, HBASE_14391_trunk_v1.patch, 
> HBASE_14391_trunk_v2.patch, WALs-leftover-dir.txt
>
>
> When I restarted the hbase cluster, in which there was little data, I found 
> there are two directories for one host with different timestamps, which 
> indicates that the old regionserver WAL directory was not deleted.
> FSHLog#989
> {code}
>  @Override
>   public void close() throws IOException {
> shutdown();
> final FileStatus[] files = getFiles();
> if (null != files && 0 != files.length) {
>   for (FileStatus file : files) {
> Path p = getWALArchivePath(this.fullPathArchiveDir, file.getPath());
> // Tell our listeners that a log is going to be archived.
> if (!this.listeners.isEmpty()) {
>   for (WALActionsListener i : this.listeners) {
> i.preLogArchive(file.getPath(), p);
>   }
> }
> if (!FSUtils.renameAndSetModifyTime(fs, file.getPath(), p)) {
>   throw new IOException("Unable to rename " + file.getPath() + " to " 
> + p);
> }
> // Tell our listeners that a log was archived.
> if (!this.listeners.isEmpty()) {
>   for (WALActionsListener i : this.listeners) {
> i.postLogArchive(file.getPath(), p);
>   }
> }
>   }
>   LOG.debug("Moved " + files.length + " WAL file(s) to " +
> FSUtils.getPath(this.fullPathArchiveDir));
> }
> LOG.info("Closed WAL: " + toString());
>   }
> {code}
> When a regionserver is stopped, its hlog is archived, so the wal/regionserver 
> directory is empty in hdfs.
> MasterFileSystem#252
> {code}
> if (curLogFiles == null || curLogFiles.length == 0) {
> // Empty log folder. No recovery needed
> continue;
>   }
> {code}
> The regionserver directory will not be split, which makes sense. But it will 
> not be deleted either.
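The missing cleanup step could look roughly like the sketch below. It uses plain `java.nio.file` as a stand-in for the HDFS `FileSystem` API, and the method name is hypothetical, not from the patch.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the missing step: when a stale regionserver's WAL folder is found
// empty (no recovery needed), delete the leftover directory instead of only
// skipping it. Plain java.nio stands in for the HDFS FileSystem API here.
public class EmptyWalDirCleanup {

    // Returns true if the directory was empty and has been deleted.
    public static boolean cleanupIfEmpty(Path walDir) throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(walDir)) {
            if (entries.iterator().hasNext()) {
                return false; // logs present: leave the dir for log splitting
            }
        }
        Files.delete(walDir); // empty log folder: no recovery needed, remove it
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rs-wal-");
        System.out.println(cleanupIfEmpty(dir)); // prints true
        System.out.println(Files.exists(dir));   // prints false
    }
}
```

In the real code path this check would sit next to the "Empty log folder. No recovery needed" branch in MasterFileSystem quoted above.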





[jira] [Updated] (HBASE-17885) Backport HBASE-15871 to branch-1

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-17885:
-
Fix Version/s: (was: 1.1.13)

> Backport HBASE-15871 to branch-1
> 
>
> Key: HBASE-17885
> URL: https://issues.apache.org/jira/browse/HBASE-17885
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.3.1, 1.2.5, 1.1.8
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.3.2, 1.4.1, 1.2.8
>
>
> Will try to rebase the branch-1 patch at the earliest. Hope the fix versions 
> are correct.





[jira] [Updated] (HBASE-18648) Update release checksum generation instructions

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-18648:
-
Fix Version/s: (was: 1.1.13)

> Update release checksum generation instructions
> ---
>
> Key: HBASE-18648
> URL: https://issues.apache.org/jira/browse/HBASE-18648
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.4.1, 1.5.0, 1.2.8
>
>
> [Apache policy on release 
> checksums|http://www.apache.org/dev/release-distribution#sigs-and-sums] has 
> been updated. Adapt our existing documentation and {{make_rc.sh}} script to 
> conform to the new guidelines.





[jira] [Updated] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2017-11-30 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14223:
-
Fix Version/s: (was: 1.1.13)

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.4.1, 1.5.0, 1.2.8
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch, 
> hbase-14223_v3-master.patch
>
>
> When an RS opens meta, and later closes it, the WAL(FSHlog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise if RS aborts, WAL for 
> meta is not cleaned. It is also not split (which is correct) since master 
> determines that the RS no longer hosts meta at the time of RS abort. 
> From a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper 
> as os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285
> ...
> 2015-06-05 03:15:17,302 INFO  
> [RS_CLOSE_META-os-enis-dal-test-jun-4-5:16020-0] regionserver.HRegion: Closed 
> hbase:meta,,1.1588230740
> {code}
> In between, a WAL is created: 
> {code}
> 2015-06-05 03:15:11,707 INFO  
> [RS_OPEN_META-os-enis-dal-test-jun-4-5:16020-0-MetaLogRoller] wal.FSHLog: 
> Rolled WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
>  with entries=385, filesize=196.88 KB; new WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> When CM killed the region server later master did not see these WAL files: 
> {code}
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:46,075 
> INFO  [MASTER_SERVER_OPERATIONS-os-enis-dal-test-jun-4-3:16000-0] 
> master.SplitLogManager: started splitting 2 logs in 
> [hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting]
>  for [os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285]
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:47,300 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
>  to 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/oldWALs/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:50,497 
> INFO  [main-EventThread] wal.WALSplitter: Archived 

[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly from the hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273897#comment-16273897
 ] 

zhaoyuan commented on HBASE-19340:
--

Let me check out the failed test cases.

> SPLIT_POLICY and FLUSH_POLICY can't be set directly from the hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> (version 1.2.6). As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following information, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb might be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug. Is it?
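The argument-handling pattern the report points at (recognized keys are popped off the argument hash; leftovers are reported as "Unknown argument ignored") can be sketched as below. Adding SPLIT_POLICY to the recognized set is the hypothetical fix, shown as a self-contained simulation rather than the actual admin.rb patch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the shell's argument handling: recognized keys are consumed from
// the argument map; anything left over triggers "Unknown argument ignored".
// Including SPLIT_POLICY in the recognized keys is the hypothetical fix.
public class AlterArgsSketch {

    static final String[] RECOGNIZED =
        {"MAX_FILESIZE", "READONLY", "SPLIT_POLICY", "FLUSH_POLICY"};

    // Apply recognized args to the table attributes; return the unknown ones.
    public static List<String> applyArgs(Map<String, String> tableAttrs,
                                         Map<String, String> args) {
        for (String key : RECOGNIZED) {
            if (args.containsKey(key)) {
                tableAttrs.put(key, args.remove(key)); // consume, like arg.delete
            }
        }
        return new ArrayList<>(args.keySet()); // reported as "Unknown argument ignored"
    }

    public static void main(String[] argv) {
        Map<String, String> attrs = new HashMap<>();
        Map<String, String> args = new HashMap<>();
        args.put("SPLIT_POLICY",
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy");
        System.out.println(applyArgs(attrs, args).isEmpty()); // prints true
    }
}
```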





[jira] [Commented] (HBASE-19393) HTTP 413 FULL head while accessing HBase UI using SSL.

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273886#comment-16273886
 ] 

Hudson commented on HBASE-19393:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #2030 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/2030/])
HBASE-19393 HTTP 413 FULL head while accessing HBase UI using SSL. (apurtell: 
rev 926021447f033c6211c8201ca8309dcc2c2f3c54)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java


> HTTP 413 FULL head while accessing HBase UI using SSL. 
> ---
>
> Key: HBASE-19393
> URL: https://issues.apache.org/jira/browse/HBASE-19393
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.4.0
> Environment: SSL enabled for UI/REST. 
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13
>
> Attachments: HBASE-19393-branch-1.patch, HBASE-19393.patch
>
>
> For REST/UI we use a 64KB header buffer size instead of the Jetty default of 
> 6KB (?). But it turns out that we set it only for the _http_ protocol, not 
> for _https_. So if SSL is enabled it's quite easy to hit an HTTP 413 error. 
> Not relevant to branch-2 or master because it's fixed there by HBASE-12894.





[jira] [Updated] (HBASE-19389) RS's handlers are all busy when writing many columns (more than 1000 columns)

2017-11-30 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-19389:
--
   Priority: Major  (was: Minor)
Component/s: (was: hbase)
 Performance

This is something we found and resolved before Singles' Day, and it is now 
running in production, JFYI. Please let us know if you have any comments on the 
design of the patch, or any better ideas. Thanks.

[~chancelq] please wait another day for comments on the design, then upload the 
patch to [review-board|https://reviews.apache.org/dashboard/] for easier 
review. Thanks.

> RS's handlers are all busy when writing many columns (more than 1000 columns) 
> --
>
> Key: HBASE-19389
> URL: https://issues.apache.org/jira/browse/HBASE-19389
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 2.0.0
> Environment: 2000+ Region Servers
> PCI-E ssd
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0, 3.0.0
>
> Attachments: CSLM-concurrent-write.png, metrics-1.png, ycsb-result.png
>
>
> In a large cluster with a large number of clients, we found that the RS's 
> handlers are sometimes all busy. After investigation, we found the root cause 
> is the CSLM, e.g. heavy load in its compare function. We reviewed the related 
> WALs and found that many columns (more than 1000) were being written at that 
> time.
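The renamed summary ("limit concurrency of puts with dense columns") suggests a mitigation along the lines sketched below. This is a guess at the shape of such a limiter, not the attached patch; the threshold and permit count are made-up values.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of one possible mitigation: puts carrying very many columns draw
// from their own small concurrency budget, so dense puts cannot occupy every
// RPC handler at once. Threshold and permit count are made-up values.
public class DensePutLimiter {

    static final int DENSE_COLUMN_THRESHOLD = 1000;
    static final Semaphore densePermits = new Semaphore(2);
    static final AtomicInteger rejected = new AtomicInteger();

    // Returns true if the put ran; false if a dense put was rejected because
    // the dense-put budget is exhausted (caller may retry or queue instead).
    public static boolean tryPut(int columnCount, Runnable doPut) {
        if (columnCount < DENSE_COLUMN_THRESHOLD) {
            doPut.run(); // normal puts are not limited
            return true;
        }
        if (!densePermits.tryAcquire()) {
            rejected.incrementAndGet();
            return false; // budget exhausted: fail fast instead of tying up a handler
        }
        try {
            doPut.run();
            return true;
        } finally {
            densePermits.release();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryPut(10, () -> {}));   // prints true
        System.out.println(tryPut(5000, () -> {})); // prints true (permit available)
    }
}
```

Failing fast on dense puts keeps handlers free for the many cheap requests, which matches the symptom described (all handlers busy on CSLM compares).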





[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly from the hbase shell

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273881#comment-16273881
 ] 

Hadoop QA commented on HBASE-19340:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m  
8s{color} | {color:red} The patch generated 17 new + 904 unchanged - 9 fixed = 
921 total (was 913) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 11 new + 794 unchanged - 1 fixed = 
805 total (was 795) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Updated] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19344:
--
Affects Version/s: (was: 2.0.0-beta-1)
Fix Version/s: 3.0.0

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Chance Li
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-beta-1
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> HBASE-19344-v1.patch, HBASE-19344.patch, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> Currently the netty #IO thread and the asyncWAL consumer thread are the same 
> thread.
> Improvement proposal:
> 1. Split them into two separate threads.
> 2. Have all multiWAL instances share the netty #IO thread pool. 
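The split described in the proposal can be sketched with plain java.util.concurrent primitives. This is a toy model only, not the actual netty or FanOutOneBlockAsyncDFSOutput code; the class and method names below are invented for illustration:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedIoPoolSketch {
  /**
   * Each "WAL" consumer thread only enqueues work; a single shared pool
   * performs the (simulated) IO, mirroring point 2 of the proposal.
   * Returns the number of appends that went through the shared pool.
   */
  public static int flushAll(int numWals, int appendsPerWal) {
    ExecutorService sharedIoPool = Executors.newFixedThreadPool(2); // shared by all WALs
    ExecutorService walConsumers = Executors.newFixedThreadPool(numWals);
    AtomicInteger flushed = new AtomicInteger();
    CountDownLatch done = new CountDownLatch(numWals * appendsPerWal);
    for (int w = 0; w < numWals; w++) {
      walConsumers.submit(() -> {
        for (int i = 0; i < appendsPerWal; i++) {
          // The consumer never blocks on IO itself; it hands off to the pool.
          sharedIoPool.submit(() -> {
            flushed.incrementAndGet(); // stand-in for the dfs/netty write
            done.countDown();
          });
        }
      });
    }
    try {
      done.await(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    walConsumers.shutdown();
    sharedIoPool.shutdown();
    return flushed.get();
  }
}
```

Whether a dedicated IO pool actually helps depends on contention between the consumer and the event loop; the YCSB results attached to this issue are the real measurement.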



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19344:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 2.0.0)
   2.0.0-beta-1
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks all for reviewing.

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Duo Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> HBASE-19344-v1.patch, HBASE-19344.patch, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> Currently the netty #IO thread and the asyncWAL consumer thread are the same 
> thread.
> Improvement proposal:
> 1. Split them into two separate threads.
> 2. Have all multiWAL instances share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19258) IntegrationTest for Backup and Restore

2017-11-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273869#comment-16273869
 ] 

Vladimir Rodionov commented on HBASE-19258:
---

Yes, working on it now.

> IntegrationTest for Backup and Restore
> --
>
> Key: HBASE-19258
> URL: https://issues.apache.org/jira/browse/HBASE-19258
> Project: HBase
>  Issue Type: Test
>  Components: integration tests
>Reporter: Josh Elser
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> See chatter at https://docs.google.com/document/d/1xbPlLKjOcPq2LDqjbSkF6uND
> AG0mzgOxek6P3POLeMc/edit?usp=sharing
> We need to get an IntegrationTest in place for backup and restore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19393) HTTP 413 FULL head while accessing HBase UI using SSL.

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273866#comment-16273866
 ] 

Hadoop QA commented on HBASE-19393:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
23s{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-1 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 57s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273856#comment-16273856
 ] 

Mike Drob commented on HBASE-17852:
---

Hmm... I don't think we can publish backup/restore without HBASE-16391 in a 2.0 
release. I'd like to have confidence that the feature is rock solid before 
telling users that it's ok to use; parallel operations seem like a major 
shortcoming to me.

Maybe this isn't the right JIRA to discuss this, apologies for stepping into 
the crossfire here. I left a few comments on the RB, will continue to look 
after reading more of the general design.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The design approach, rollback-via-snapshot, is implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (the backup system table). This procedure is lightweight because 
> the meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries create/merge/delete, he (she) will see 
> an error message that the system is in an inconsistent state and repair is 
> required; he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> the BackupObservers), we introduce a small table ONLY to keep the listing of 
> bulk loaded files. All backup observers will work only with this new table. 
> The reason: in case of a failure during backup create/delete/merge/restore, 
> when the system performs an automatic rollback, some data written by backup 
> observers during the failed operation could be lost. This is what we try to 
> avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after a failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and, hence, are not visible to the system. 
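The rollback-via-snapshot pattern in the description can be illustrated with a toy model, where a plain Map stands in for the backup system table. This is a sketch of the idea only; the real code snapshots and restores the table through the HBase snapshot machinery, and all names below are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotRollbackSketch {
  /**
   * Runs a (simulated) backup operation against the meta "table", restoring
   * the pre-operation snapshot if the operation fails mid-way.
   */
  public static Map<String, String> runWithRollback(Map<String, String> meta,
                                                    boolean operationFails) {
    Map<String, String> snapshot = new HashMap<>(meta); // step 1: cheap snapshot
    try {
      meta.put("backup:in-progress", "true"); // partial state written mid-operation
      if (operationFails) {
        throw new RuntimeException("simulated server-side failure");
      }
      meta.remove("backup:in-progress"); // operation completed cleanly
      return meta;
    } catch (RuntimeException e) {
      return snapshot; // step 2: roll back to the pre-operation state
    }
  }
}
```

A failed run yields the state exactly as it was before the operation started, so no partial writes survive into the next operation.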



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19392) TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master

2017-11-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273857#comment-16273857
 ] 

Ted Yu commented on HBASE-19392:


lgtm

> TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master
> -
>
> Key: HBASE-19392
> URL: https://issues.apache.org/jira/browse/HBASE-19392
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.0.0-alpha-4
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-19392-master-v001.patch
>
>
> Please see the flakey test list.
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html
> client.TestReplicaWithCluster 96.7% (29 / 30) 29 / 0 / 0  
> show/hide



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: HBASE-19340-branch-1.2.batch

> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following output, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb might be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug, is it?
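The failure mode the reporter describes can be sketched as a dispatch table: admin.rb applies only the alter arguments it has a handler for, and anything else is dropped with "Unknown argument ignored". The toy model below (invented names; not the actual shell code) shows the shape of the fix, which is to register SPLIT_POLICY as a known key:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class AlterArgsSketch {
  // Keys the dispatcher knows how to apply; mirrors the if-ladder in admin.rb.
  static final Set<String> KNOWN = new HashSet<>(Arrays.asList(
      "MAX_FILESIZE", "READONLY",
      "SPLIT_POLICY"  // the missing entry: without it the shell prints
                      // "Unknown argument ignored: SPLIT_POLICY"
  ));

  /** Returns the subset of alter arguments that would reach the descriptor. */
  public static Map<String, String> apply(Map<String, String> args) {
    Map<String, String> applied = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : args.entrySet()) {
      if (KNOWN.contains(e.getKey())) {
        applied.put(e.getKey(), e.getValue());
      } else {
        System.out.println("Unknown argument ignored: " + e.getKey());
      }
    }
    return applied;
  }
}
```

Until the shell is fixed, the same setting can be applied on 1.2.x through the Java client: HTableDescriptor#setRegionSplitPolicyClassName followed by Admin#modifyTable.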



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Status: Patch Available  (was: Open)

> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340-branch-1.2.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following output, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb might be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug, is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Status: Open  (was: Patch Available)

> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following output, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb might be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug, is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: (was: HBASE-19340.batch)

> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following output, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb might be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug, is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19392) TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273849#comment-16273849
 ] 

Hadoop QA commented on HBASE-19392:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
53s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
51s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
52m 43s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 
45s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19392 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900114/HBASE-19392-master-v001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3e7788f80bca 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c64546aa31 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10162/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10162/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master
> -
>
>   

[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273848#comment-16273848
 ] 

xinxin fan commented on HBASE-19336:


I see that, I will fix the warnings later. :)

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign the tables within a namespace from one group 
> to another by writing all the table names in the move_tables_rsgroup command. 
> Allowing all tables within a specified namespace to be assigned by writing 
> only the namespace name is useful.
> Usage as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19258) IntegrationTest for Backup and Restore

2017-11-30 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273846#comment-16273846
 ] 

Mike Drob commented on HBASE-19258:
---

[~vrodionov] - is this still happening?

> IntegrationTest for Backup and Restore
> --
>
> Key: HBASE-19258
> URL: https://issues.apache.org/jira/browse/HBASE-19258
> Project: HBase
>  Issue Type: Test
>  Components: integration tests
>Reporter: Josh Elser
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
>
> See chatter at https://docs.google.com/document/d/1xbPlLKjOcPq2LDqjbSkF6uND
> AG0mzgOxek6P3POLeMc/edit?usp=sharing
> We need to get an IntegrationTest in place for backup and restore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273847#comment-16273847
 ] 

Hadoop QA commented on HBASE-19340:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m  
5s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HBASE-19340 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:e77c578 |
| JIRA Issue | HBASE-19340 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900132/HBASE-19340.batch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10166/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> SPLIT_POLICY and FLUSH_POLICY cann't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster, which 
> runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console: 
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following output, which confused me: 
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb might be missing the handling for this 
> argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug, is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273844#comment-16273844
 ] 

Hadoop QA commented on HBASE-18233:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
40s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
45s{color} | {color:red} hbase-server in branch-1.4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} hbase-server: The patch generated 0 new + 333 
unchanged - 2 fixed = 333 total (was 335) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
41m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 18s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
|   | hadoop.hbase.client.TestHTableMultiplexerFlushCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HBASE-19092) Make Tag IA.LimitedPrivate and expose for CPs

2017-11-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273842#comment-16273842
 ] 

Ted Yu commented on HBASE-19092:


Ram:
Can you update the release notes?

There is a typo, e.g. RegionCoprocessorEnvironment

Thanks

> Make Tag IA.LimitedPrivate and expose for CPs
> -
>
> Key: HBASE-19092
> URL: https://issues.apache.org/jira/browse/HBASE-19092
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-beta-1
>
> Attachments: HBASE-19092-branch-2.patch, 
> HBASE-19092-branch-2_5.patch, HBASE-19092-branch-2_5.patch, 
> HBASE-19092.branch-2.0.02.patch, HBASE-19092_001-branch-2.patch, 
> HBASE-19092_001.patch, HBASE-19092_002-branch-2.patch, HBASE-19092_002.patch, 
> HBASE-19092_004.patch, HBASE-19092_005.patch, HBASE-19092_005_branch_2.patch, 
> HBASE-19092_3.patch, HBASE-19092_4.patch
>
>
> We need to make Tag LimitedPrivate, as some use cases, such as the timeline 
> server, are trying to use tags. The same topic was discussed in dev@ and also 
> in HBASE-18995.
> Shall we target this for beta1? cc [~saint@gmail.com].
> Once we do this, all related Util methods and APIs should also move to 
> LimitedPrivate Util classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Status: Patch Available  (was: Open)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?
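The pattern the reporter points at can be sketched in plain Ruby. This is a hypothetical, simplified stand-in for admin.rb's argument handling, not the actual HBase shell code: the hash `htd` stands in for the Java HTableDescriptor, and `update_htd_from_arg` is an invented name.

```ruby
SPLIT_POLICY = 'SPLIT_POLICY'
MAX_FILESIZE = 'MAX_FILESIZE'

def update_htd_from_arg(htd, arg)
  # Each recognized key is deleted from arg as it is applied...
  htd[:max_filesize] = Integer(arg.delete(MAX_FILESIZE)) if arg[MAX_FILESIZE]
  # The kind of line the reporter suggests is missing: consume
  # SPLIT_POLICY the same way, so it no longer falls through as an
  # unknown argument.
  htd[:split_policy] = arg.delete(SPLIT_POLICY) if arg[SPLIT_POLICY]
  # ...anything still left in arg afterwards is reported as ignored.
  arg.each_key { |k| puts "Unknown argument ignored: #{k}" }
  htd
end

htd = update_htd_from_arg({}, SPLIT_POLICY =>
  'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy')
puts htd[:split_policy]
```

Without the SPLIT_POLICY line, the key survives to the final loop and produces exactly the "Unknown argument ignored" message quoted above.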



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Status: Open  (was: Patch Available)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-30 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273836#comment-16273836
 ] 

Rajeshbabu Chintaguntla commented on HBASE-19384:
-

[~elserj]
bq. Rajeshbabu Chintaguntla any chance you could share the specifics on how the 
coprocessors are set up in which you see this? Or, if you really want to go the 
extra mile, distill it down to a generic test case?
Sure, I will write a test case for this.
[~stack]
bq. So you have multiple coprocessors stacked on a region. One intercepts 
preAppend (or preIncrement) to return its own result instead. Are you saying 
that this result is overwritten by the null that subsequent coprocessors return?
Correct. 
bq. Is the problem our removal of 'complete'? i.e. HBASE-19123 Purge 'complete' 
support from Coprocesor Observers ? Thanks.
Yes, it's because of the removal of 'complete': we are no longer able to skip 
running subsequent coprocessors that have no implementation of the preAppend 
or preIncrement hooks.
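The behavior being discussed can be sketched in plain Ruby. This is a hypothetical model of the hook chaining, not HBase's actual coprocessor host code; the function names and the lambdas are invented for illustration.

```ruby
# Naive chaining: every coprocessor's preAppend return value is kept,
# so a later nil from a no-op coprocessor clobbers an earlier result.
def chain_naive(coprocessors, op)
  result = nil
  coprocessors.each { |cp| result = cp.call(op) }
  result
end

# 'complete'-style chaining: the first non-nil result wins and the
# remaining coprocessors are skipped.
def chain_short_circuit(coprocessors, op)
  coprocessors.each do |cp|
    r = cp.call(op)
    return r unless r.nil?
  end
  nil
end

bypassing = ->(op) { 'result-from-phoenix' }  # intercepts and bypasses
noop      = ->(op) { nil }                    # no preAppend implementation

chain_naive([bypassing, noop], :append)          # => nil (the bug)
chain_short_circuit([bypassing, noop], :append)  # => "result-from-phoenix"
```

Under this model, removing the short-circuit reproduces the reported symptom: the bypassing coprocessor's result is replaced by nil and the default append/increment path runs.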

> Results returned by preAppend hook in a coprocessor are replaced with null 
> from other coprocessor even on bypass
> 
>
> Key: HBASE-19384
> URL: https://issues.apache.org/jira/browse/HBASE-19384
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
>
> Phoenix adds multiple coprocessors to a table, and one of them implements 
> preAppend and preIncrement and bypasses the operations by returning results. 
> But the other coprocessors, which don't have any implementation, return 
> null, so the results returned by the previous coprocessor are overridden by 
> null and the default implementations of the append and increment operations 
> always run. This was not the case with old versions, which worked fine on 
> bypass.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-30 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273835#comment-16273835
 ] 

Guanghao Zhang commented on HBASE-19336:


[~xinxin fan] It seems the rubocop result still has warnings that were 
introduced by the patch...

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign tables within a namespace from one group to 
> another by writing all table names in the move_tables_rsgroup command. 
> Allowing users to assign all tables within a specified namespace by writing 
> only the namespace name would be useful.
> Usage as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: (was: HBASE-19340.branch-1.2.v0.batch)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: HBASE-19340.batch

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: (was: HBASE-19340.branch-1.v0.batch)

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.branch-1.2.v0.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18626) Handle the incompatible change about the replication TableCFs' config

2017-11-30 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273827#comment-16273827
 ] 

Guanghao Zhang commented on HBASE-18626:


Ping [~apurtell] for reviewing.

> Handle the incompatible change about the replication TableCFs' config
> -
>
> Key: HBASE-18626
> URL: https://issues.apache.org/jira/browse/HBASE-18626
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-18626.branch-1.001.patch
>
>
> Regarding compatibility, there is one incompatible change in the replication 
> TableCFs' config. The old config is a string that concatenates the list of 
> tables and column families in the format "table1:cf1,cf2;table2:cfA,cfB" in 
> ZooKeeper for the table-cf to replication peer mapping. When parsing the 
> config, it uses ":" to split the string, so if the table name includes a 
> namespace, the result will be wrong (see HBASE-11386). This has been a 
> problem since we started supporting namespaces (0.98). So HBASE-11393 (and 
> HBASE-16653) changed it to a PB object. When rolling-upgrading a cluster, 
> you need to roll the master first, and the master will try to translate the 
> string config to a PB object. But there are two problems:
> 1. Permissions. The replication client can write to ZooKeeper directly, so 
> the znode may have a different owner, and the master may not have write 
> permission for the znode. It may then fail to translate the old table-cfs 
> string to the new PB object. See HBASE-16938.
> 2. We usually keep compatibility between old clients and a new server, but 
> an old replication client may write a string config to the znode directly, 
> which the new server can't parse.
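The parsing ambiguity described above can be illustrated with a rough, pure-Ruby sketch of the old string format (hypothetical code, not the actual HBase parser; `parse_table_cfs` is an invented name):

```ruby
# Naive parser for the old "table1:cf1,cf2;table2:cfA,cfB" format,
# splitting each entry on the first ':'.
def parse_table_cfs(config)
  config.split(';').map do |entry|
    table, cfs = entry.split(':', 2)
    [table, cfs ? cfs.split(',') : []]
  end.to_h
end

# Works for plain table names:
parse_table_cfs('table1:cf1,cf2;table2:cfA,cfB')
# => {"table1"=>["cf1", "cf2"], "table2"=>["cfA", "cfB"]}

# Breaks for a namespace-qualified name, where the first ':' separates
# namespace from table, not table from column families:
parse_table_cfs('ns1:table1:cf1')
# => {"ns1"=>["table1:cf1"]} -- the namespace is mistaken for the table
```

This is the ambiguity that motivated moving the mapping to a PB object, and why the master has to translate old string configs during a rolling upgrade.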



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyuan updated HBASE-19340:
-
Attachment: HBASE-19340.branch-1.2.v0.batch

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.branch-1.2.v0.batch, 
> HBASE-19340.branch-1.v0.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19366) Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region scanner to read data

2017-11-30 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19366:
---
Attachment: HBASE-19366.branch-1.001.patch

Retry.

> Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region 
> scanner to read data
> --
>
> Key: HBASE-19366
> URL: https://issues.apache.org/jira/browse/HBASE-19366
> Project: HBase
>  Issue Type: Sub-task
>  Components: metrics
>Reporter: stack
>Assignee: Guanghao Zhang
> Fix For: 1.4.1, 1.5.0
>
> Attachments: HBASE-19035.branch-1.2.001.patch, 
> HBASE-19366.branch-1.001.patch, HBASE-19366.branch-1.001.patch, 
> HBASE-19366.branch-1.001.patch, HBASE-19366.branch-1.3.001.patch
>
>
> Making a subissue to backport the parent issue to branch-1. I'll attach a 
> first attempt at a backport; it is failing an assert in 
> TestRegionServerMetrics.
> Making a new issue because time has elapsed since the parent went into 
> master and branch-1 and I want to resolve the parent. Thanks. FYI 
> [~zghaobac], if you've input, just say so and I can take another look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell

2017-11-30 Thread zhaoyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273826#comment-16273826
 ] 

zhaoyuan commented on HBASE-19340:
--

Maybe I should rename the patch.

> SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
> ---
>
> Key: HBASE-19340
> URL: https://issues.apache.org/jira/browse/HBASE-19340
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: zhaoyuan
>Assignee: zhaoyuan
> Fix For: 1.2.8
>
> Attachments: HBASE-19340.branch-1.v0.batch
>
>
> Recently I wanted to alter the split policy for a table on my cluster 
> running version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the 
> HTable, so I ran the command below in the hbase shell console:
> alter 'tablex', SPLIT_POLICY => 
> 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'
> However, it gave the following message, which confused me:
> Unknown argument ignored: SPLIT_POLICY
> Updating all regions with the new schema...
> So I checked the source code; admin.rb appears to be missing the handling 
> for this argument:
> htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if 
> arg[MAX_FILESIZE]
> htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
> ...
> So I think it may be a bug; is it?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19285) Add per-table latency histograms

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273821#comment-16273821
 ] 

Hudson commented on HBASE-19285:


FAILURE: Integrated in Jenkins build HBase-1.4 #1037 (See 
[https://builds.apache.org/job/HBase-1.4/1037/])
HBASE-19285 Implements table-level latency histograms (elserj: rev 
c1a1c97e842a6f1fcedbc51f315be01ca9150953)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (add) 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
* (edit) 
hbase-hadoop-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelper.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
* (add) 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatencies.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegionServer.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsTableLatencies.java
* (edit) 
hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/test/MetricsAssertHelperImpl.java
* (add) 
hbase-hadoop2-compat/src/main/resources/META-INF/services/org.apache.hadoop.hbase.regionserver.MetricsTableLatencies
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerTableMetrics.java


> Add per-table latency histograms
> 
>
> Key: HBASE-19285
> URL: https://issues.apache.org/jira/browse/HBASE-19285
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Clay B.
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.3
>
> Attachments: HBASE-19285.001.branch-1.3.patch, 
> HBASE-19285.002.branch-1.3.patch, HBASE-19285.003.branch-1.3.patch, 
> HBaseTableLatencyMetrics.png
>
>
> HBASE-17017 removed the per-region latency histograms (e.g. Get, Put, Scan at 
> p75, p85, etc)
> HBASE-15518 added some per-table metrics, but not the latency histograms.
> Given the previous conversations, it seems like these per-table aggregations 
> weren't intentionally omitted, just never re-implemented after the 
> per-region removal. They're some really nice out-of-the-box metrics we can 
> provide to our users/admins as long as it's not detrimental.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273814#comment-16273814
 ] 

stack commented on HBASE-19384:
---

[~chrajeshbab...@gmail.com] Thanks for filing the issue.

So you have multiple coprocessors stacked on a region. One intercepts preAppend 
(or preIncrement) to return its own result instead. Are you saying that this 
result is overwritten by the null that subsequent coprocessors return?

Is the problem our removal of 'complete'? i.e. HBASE-19123 Purge 'complete' 
support from Coprocesor Observers ?  Thanks.

> Results returned by preAppend hook in a coprocessor are replaced with null 
> from other coprocessor even on bypass
> 
>
> Key: HBASE-19384
> URL: https://issues.apache.org/jira/browse/HBASE-19384
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
>
> Phoenix adds multiple coprocessors to a table, and one of them implements 
> preAppend and preIncrement and bypasses the operations by returning results. 
> But the other coprocessors, which don't have any implementation, return 
> null, so the results returned by the previous coprocessor are overridden by 
> null and the default implementations of the append and increment operations 
> always run. This was not the case with old versions, which worked fine on 
> bypass.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19392) TestReplicaWithCluster#testReplicaGetWithPrimaryAndMetaDown failure in master

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273813#comment-16273813
 ] 

Hadoop QA commented on HBASE-19392:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
30s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
32s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
52m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 18s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas |
|   | hadoop.hbase.client.TestMobSnapshotFromClient |
|   | hadoop.hbase.client.TestSnapshotMetadata |
|   | hadoop.hbase.client.TestSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19392 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900104/HBASE-19163.master.009.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 11447b9b9ced 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c64546aa31 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10160/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10160/testReport/ |
| modules | C: hbase-server U: 

[jira] [Commented] (HBASE-19393) HTTP 413 FULL head while accessing HBase UI using SSL.

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273812#comment-16273812
 ] 

Hudson commented on HBASE-19393:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1030 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1030/])
HBASE-19393 HTTP 413 FULL head while accessing HBase UI using SSL. (apurtell: 
rev 891db9a8ae289ab7d2b2769d8c53a2960c31b4cc)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java


> HTTP 413 FULL head while accessing HBase UI using SSL. 
> ---
>
> Key: HBASE-19393
> URL: https://issues.apache.org/jira/browse/HBASE-19393
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.4.0
> Environment: SSL enabled for UI/REST. 
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13
>
> Attachments: HBASE-19393-branch-1.patch, HBASE-19393.patch
>
>
> For REST/UI we use a 64 KB header buffer size instead of the Jetty default 
> of 6 KB (?). But it turns out we set it only for the _http_ protocol, not for 
> _https_. So if SSL is enabled it is quite easy to hit an HTTP 413 error. Not 
> relevant to branch-2 or master because it is fixed there by HBASE-12894
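
A Jetty 9-style sketch of the kind of fix described above (hypothetical; the actual patch edits HttpServer.java and branch-1 builds against an older Jetty API): the enlarged request-header buffer must be applied to the HTTPS connector's HttpConfiguration as well, not only to the plain-HTTP one.

```java
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.SecureRequestCustomizer;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.SslConnectionFactory;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class SslHeaderBufferSketch {
    // Hypothetical helper: build an HTTPS connector whose request-header
    // buffer matches the 64 KB used on the plain-HTTP side, so large
    // headers no longer trigger HTTP 413 over SSL.
    static ServerConnector httpsConnector(Server server, SslContextFactory ssl) {
        HttpConfiguration httpsConfig = new HttpConfiguration();
        httpsConfig.setRequestHeaderSize(64 * 1024); // 64 KB, not the small Jetty default
        httpsConfig.addCustomizer(new SecureRequestCustomizer());
        return new ServerConnector(server,
            new SslConnectionFactory(ssl, "http/1.1"),
            new HttpConnectionFactory(httpsConfig));
    }
}
```

This is a server-configuration sketch only; keystore setup and the HBase-specific wiring in HttpServer.java are elided.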



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19384:
--
Priority: Critical  (was: Major)

> Results returned by preAppend hook in a coprocessor are replaced with null 
> from other coprocessor even on bypass
> 
>
> Key: HBASE-19384
> URL: https://issues.apache.org/jira/browse/HBASE-19384
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
>
> Phoenix adds multiple coprocessors for a table, and one of them implements 
> preAppend and preIncrement and bypasses the operations by returning results. 
> But the other coprocessors, which have no implementation of these hooks, 
> return null, so the results returned by the previous coprocessor are 
> overridden by null and we always fall through to the default implementation 
> of the append and increment operations. This was not the case in older 
> versions, where bypass worked fine.
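
A minimal sketch of the interaction being reported (hypothetical class and return value; assumes the HBase 2.0 RegionObserver API): observer A bypasses the core append and supplies a Result, while an observer that leaves the default hook in place returns null, and the bug is that this null replaces A's Result instead of being ignored.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;

// Observer A: short-circuits the append, returning its own Result.
public class BypassingObserver implements RegionObserver {
    @Override
    public Result preAppend(ObserverContext<RegionCoprocessorEnvironment> ctx,
            Append append) throws IOException {
        ctx.bypass();                 // skip the region's own append
        return Result.EMPTY_RESULT;   // result the client should receive
    }
}

// Observer B overrides nothing: the interface's default preAppend returns
// null. Per the report, that null clobbers A's Result when both are loaded.
class PassiveObserver implements RegionObserver {
}
```

The fix would be for the hook chain to keep the last non-null Result rather than unconditionally taking each coprocessor's return value.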





[jira] [Updated] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19384:
--
Fix Version/s: (was: 3.0.0)

> Results returned by preAppend hook in a coprocessor are replaced with null 
> from other coprocessor even on bypass
> 
>
> Key: HBASE-19384
> URL: https://issues.apache.org/jira/browse/HBASE-19384
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0-beta-1
>
>
> Phoenix adds multiple coprocessors for a table, and one of them implements 
> preAppend and preIncrement and bypasses the operations by returning results. 
> But the other coprocessors, which have no implementation of these hooks, 
> return null, so the results returned by the previous coprocessor are 
> overridden by null and we always fall through to the default implementation 
> of the append and increment operations. This was not the case in older 
> versions, where bypass worked fine.





[jira] [Commented] (HBASE-19393) HTTP 413 FULL head while accessing HBase UI using SSL.

2017-11-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273811#comment-16273811
 ] 

Hudson commented on HBASE-19393:


FAILURE: Integrated in Jenkins build HBase-1.1-JDK7 #1945 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1945/])
HBASE-19393 HTTP 413 FULL head while accessing HBase UI using SSL. (apurtell: 
rev 926021447f033c6211c8201ca8309dcc2c2f3c54)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java


> HTTP 413 FULL head while accessing HBase UI using SSL. 
> ---
>
> Key: HBASE-19393
> URL: https://issues.apache.org/jira/browse/HBASE-19393
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.4.0
> Environment: SSL enabled for UI/REST. 
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 1.4.0, 1.3.2, 1.2.7, 1.1.13
>
> Attachments: HBASE-19393-branch-1.patch, HBASE-19393.patch
>
>
> For REST/UI we use a 64 KB header buffer size instead of the Jetty default 
> of 6 KB (?). But it turns out we set it only for the _http_ protocol, not for 
> _https_. So if SSL is enabled it is quite easy to hit an HTTP 413 error. Not 
> relevant to branch-2 or master because it is fixed there by HBASE-12894




