[jira] [Commented] (HBASE-18954) Make *CoprocessorHost classes private

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203142#comment-16203142
 ] 

Anoop Sam John commented on HBASE-18954:


Region also has a getCoprocessorHost(). I did not remove it here, since the 
discussion was happening in this jira.  We should get rid of getCoprocessorHost() 
from both Region and Store.

> Make *CoprocessorHost classes private
> -
>
> Key: HBASE-18954
> URL: https://issues.apache.org/jira/browse/HBASE-18954
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Appy
>Assignee: Appy
>  Labels: incompatible
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18954.master.001.patch, 
> HBASE-18954.master.002.patch
>
>
> Move out configuration name constants (into Coprocessor class?) and make Host 
> classes private.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always false

2017-10-13 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18990:
--
Status: Open  (was: Patch Available)

> ServerLoad doesn't override #equals which leads to #equals in ClusterStatus 
> always false
> 
>
> Key: HBASE-18990
> URL: https://issues.apache.org/jira/browse/HBASE-18990
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-18990.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always false

2017-10-13 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18990:
--
Attachment: HBASE-18990.master.002.patch

Implement hashCode() in {{ServerLoad}}
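For illustration of the kind of change being discussed, here is a minimal, simplified stand-in (this is not the actual ServerLoad class, which wraps a protobuf message; the field names are hypothetical) showing why equals() and hashCode() must be overridden together:

```java
import java.util.Objects;

// Hypothetical, simplified stand-in for ServerLoad; field names are illustrative.
class ServerLoadSketch {
  private final int numberOfRequests;
  private final int usedHeapMB;

  ServerLoadSketch(int numberOfRequests, int usedHeapMB) {
    this.numberOfRequests = numberOfRequests;
    this.usedHeapMB = usedHeapMB;
  }

  // Without this override, Object#equals compares references, so two
  // instances built from identical load reports are never "equal" --
  // which is what made ClusterStatus#equals always return false.
  @Override
  public boolean equals(Object other) {
    if (this == other) return true;
    if (!(other instanceof ServerLoadSketch)) return false;
    ServerLoadSketch that = (ServerLoadSketch) other;
    return numberOfRequests == that.numberOfRequests
        && usedHeapMB == that.usedHeapMB;
  }

  // The Object contract requires equal objects to have equal hash codes,
  // so overriding equals() without hashCode() breaks hash-based collections.
  @Override
  public int hashCode() {
    return Objects.hash(numberOfRequests, usedHeapMB);
  }
}
```

This is why the follow-up patch adds hashCode() alongside equals(): the two methods form a single contract.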

> ServerLoad doesn't override #equals which leads to #equals in ClusterStatus 
> always false
> 
>
> Key: HBASE-18990
> URL: https://issues.apache.org/jira/browse/HBASE-18990
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-18990.master.001.patch, 
> HBASE-18990.master.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always false

2017-10-13 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18990:
--
Status: Patch Available  (was: Open)

> ServerLoad doesn't override #equals which leads to #equals in ClusterStatus 
> always false
> 
>
> Key: HBASE-18990
> URL: https://issues.apache.org/jira/browse/HBASE-18990
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-18990.master.001.patch, 
> HBASE-18990.master.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203150#comment-16203150
 ] 

Anoop Sam John commented on HBASE-18127:


I was thinking about making this generic. I think that may be very difficult. 
We call the CP hooks from different layers: sometimes from Region, sometimes 
from RSRpcServices, sometimes at the Store level, etc. The OperationContext 
object would then have to be passed between layers, which would be ugly. As of 
now this patch handles the batch mutation case (and single Put and Delete as 
well, since those use the batch mutation flow). We have 
ObserverContext#getOperationContext(), which is exposed to coprocessors, but in 
most of the CP hooks it will currently return null!

> Enable state to be passed between the region observer coprocessor hook calls
> 
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-18127.master.001.patch, 
> HBASE-18127.master.002.patch, HBASE-18127.master.002.patch, 
> HBASE-18127.master.003.patch, HBASE-18127.master.004.patch, 
> HBASE-18127.master.005.patch, HBASE-18127.master.005.patch, 
> HBASE-18127.master.006.patch
>
>
> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called.
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203151#comment-16203151
 ] 

Anoop Sam John commented on HBASE-18127:


On making it general, can we follow the approach used for getting the calling 
user from the ObserverContext? You can see the flow: the RPC user info is set 
in RpcCallContext. Can we add a CP context getter there? Then whenever an 
ObserverContext instance is created, we could pass it the CP context instance 
obtained from RpcCallContext. That context would be the same as what you have 
now, the OperationContext with a Map.
I feel that explicitly creating the OperationContext object in the flow and 
setting it looks a bit ugly; we would just end up passing that object around 
everywhere. I believe we could try this generic approach without much effort. 
Am I making it clear?
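The mechanism being proposed above can be sketched roughly as follows. This is not the actual HBase API; all names here (CallContextSketch, setAttribute, getAttribute) are hypothetical, standing in for a per-RPC attribute map held in a thread-local call context that hooks at different layers could share:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a per-RPC attribute map in a thread-local "call
// context", so coprocessor hooks invoked at different layers (RSRpcServices,
// Region, Store) can share state without threading an OperationContext
// parameter through every call signature.
class CallContextSketch {
  private static final ThreadLocal<Map<String, Object>> CURRENT_CALL =
      ThreadLocal.withInitial(HashMap::new);

  // An early hook (e.g. postBatchMutate) records that it already did the work.
  static void setAttribute(String key, Object value) {
    CURRENT_CALL.get().put(key, value);
  }

  // A later hook (e.g. postPut) reads the flag and can skip its own work.
  static Object getAttribute(String key) {
    return CURRENT_CALL.get().get(key);
  }

  // Must be cleared when the RPC completes: server threads are pooled, so
  // leftover state would otherwise leak into unrelated calls.
  static void clear() {
    CURRENT_CALL.remove();
  }
}
```

The perf point in the edited comment below follows from this design: the thread-local is already being read when the ObserverContext is built, so piggybacking one more map on it adds essentially no cost.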

> Enable state to be passed between the region observer coprocessor hook calls
> 
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-18127.master.001.patch, 
> HBASE-18127.master.002.patch, HBASE-18127.master.002.patch, 
> HBASE-18127.master.003.patch, HBASE-18127.master.004.patch, 
> HBASE-18127.master.005.patch, HBASE-18127.master.005.patch, 
> HBASE-18127.master.006.patch
>
>
> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called.
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203151#comment-16203151
 ] 

Anoop Sam John edited comment on HBASE-18127 at 10/13/17 7:29 AM:
--

On making it general, can we follow the approach used for getting the calling 
user from the ObserverContext? You can see the flow: the RPC user info is set 
in RpcCallContext. Can we add a CP context getter there? Then whenever an 
ObserverContext instance is created, we could pass it the CP context instance 
obtained from RpcCallContext. That context would be the same as what you have 
now, the OperationContext with a Map.
I feel that explicitly creating the OperationContext object in the flow and 
setting it looks a bit ugly; we would just end up passing that object around 
everywhere. I believe we could try this generic approach without much effort. 
Am I making it clear?
Even now, whenever we create the ObserverContext instance, we end up reading 
the CurrentCall context, which is in a ThreadLocal. So no performance concern 
arises, since we can keep the new context in the same CallContext instance.


was (Author: anoop.hbase):
On making it general, can we follow the approach of getting the callee user 
from the ObserverContext?  U can see the flow.  The rpc user info is set in 
RpcCallContext.  Can we have a CP Conext getter from this?  And whenever the 
ObserverContext instance is been created, we can pass this CP context instance 
obtained from RpcCallContext.   That context will be same as what  u have now , 
the OperationContext  with a Map.  
I feel making the OperationContext  object explicitly in the flow and setting 
it looks bit ugly. Then only we will end up passing that object here and there. 
  I believe with not big effort we can try this generic way.   Am I making it 
clear?

> Enable state to be passed between the region observer coprocessor hook calls
> 
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-18127.master.001.patch, 
> HBASE-18127.master.002.patch, HBASE-18127.master.002.patch, 
> HBASE-18127.master.003.patch, HBASE-18127.master.004.patch, 
> HBASE-18127.master.005.patch, HBASE-18127.master.005.patch, 
> HBASE-18127.master.006.patch
>
>
> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called.
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Wang, Xinglong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Xinglong updated HBASE-18602:
---
Attachment: HBASE-18602-master-v3.patch

Resubmitting the patch to trigger Jenkins.

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch, 
> HBASE-18602-master-v2.patch, HBASE-18602-master-v3.patch
>
>
> While walking through the rsgroup code, I found that no element is ever added 
> to the variable misplacedRegions. This makes the unassign-region code 
> non-functional, and according to my testing, it is actually unnecessary.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
> Map<ServerName, List<HRegionInfo>> existingAssignments)
>   throws HBaseIOException {
> Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
> List<HRegionInfo> misplacedRegions = new LinkedList<>();
> correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new 
> LinkedList<>());
> for (Map.Entry<ServerName, List<HRegionInfo>> assignments : 
> existingAssignments.entrySet()) {
>   ServerName sName = assignments.getKey();
>   correctAssignments.put(sName, new LinkedList<>());
>   List<HRegionInfo> regions = assignments.getValue();
>   for (HRegionInfo region : regions) {
> RSGroupInfo info = null;
> try {
>   info = rsGroupInfoManager.getRSGroup(
>   rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
> } catch (IOException exp) {
>   LOG.debug("RSGroup information null for region of table " + 
> region.getTable(),
>   exp);
> }
> if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>   correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
> } else {
>   correctAssignments.get(sName).add(region);
> }
>   }
> }
> //TODO bulk unassign?
> //unassign misplaced regions, so that they are assigned to correct groups.
> //NOTE: misplacedRegions is never populated above, so this loop is dead code.
> for (HRegionInfo info : misplacedRegions) {
>   try {
> this.masterServices.getAssignmentManager().unassign(info);
>   } catch (IOException e) {
> throw new HBaseIOException(e);
>   }
> }
> return correctAssignments;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19004) master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected

2017-10-13 Thread lujie (JIRA)
lujie created HBASE-19004:
-

 Summary: master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected 
 Key: HBASE-19004
 URL: https://issues.apache.org/jira/browse/HBASE-19004
 Project: HBase
  Issue Type: Bug
Reporter: lujie


When sending the stop regionserver command:

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionServer ephemeral node deleted, processing expiration 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
procedure.ServerCrashProcedure: Start processing crashed 
hadoop11,16020,1507883241942
2017-10-13 16:28:37,689 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
dead splitlog workers [hadoop11,16020,1507883241942]
2017-10-13 16:28:37,693 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting is 
empty dir, no logs to split
2017-10-13 16:28:37,695 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
Started splitting 0 logs in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] for 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,701 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
finished splitting (more than or equal to) 0 bytes in 0 log files in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] in 6ms
2017-10-13 16:28:37,807 WARN  [ProcedureExecutor-4] master.RegionStates: THIS 
SHOULD NOT HAPPEN: unexpected {2aaaf8304f2b09288f528ac0f105cc01 state=OPEN, 
ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:37,923 INFO  [ProcedureExecutor-4] 
procedure.ServerCrashProcedure: Finished processing of crashed 
hadoop11,16020,1507883241942
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203202#comment-16203202
 ] 

Hadoop QA commented on HBASE-18602:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
32s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892002/HBASE-18602-master-v3.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6b49cd652306 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 883c358 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9093/testReport/ |
| modules | C: hbase-rsgroup U: hbase-rsgroup |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9093/console |
| Powered by | Apache Yetus 0.4.0   http://yetus

[jira] [Updated] (HBASE-19004) master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected

2017-10-13 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-19004:
--
Description: 
When I send the stop regionserver command, a log entry in the master confuses 
me: THIS SHOULD NOT HAPPEN: unexpected. I checked the code and found that the 
log is printed in the default branch (e.g. if () {} else if () {} else { 
log.warn() }). I think this condition should be handled explicitly.

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionServer ephemeral node deleted, processing expiration 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
procedure.ServerCrashProcedure: Start processing crashed 
hadoop11,16020,1507883241942
2017-10-13 16:28:37,689 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
dead splitlog workers [hadoop11,16020,1507883241942]
2017-10-13 16:28:37,693 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting is 
empty dir, no logs to split
2017-10-13 16:28:37,695 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
Started splitting 0 logs in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] for 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,701 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
finished splitting (more than or equal to) 0 bytes in 0 log files in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] in 6ms
2017-10-13 16:28:37,807 WARN  [ProcedureExecutor-4] master.RegionStates: THIS 
SHOULD NOT HAPPEN: unexpected {2aaaf8304f2b09288f528ac0f105cc01 state=OPEN, 
ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:37,923 INFO  [ProcedureExecutor-4] 
procedure.ServerCrashProcedure: Finished processing of crashed 
hadoop11,16020,1507883241942
{code}


  was:
When send stop regionserver command 

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionServer ephemeral node deleted, processing expiration 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
procedure.ServerCrashProcedure: Start processing crashed 
hadoop11,16020,1507883241942
2

[jira] [Updated] (HBASE-19004) master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected

2017-10-13 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-19004:
--
Affects Version/s: 1.2.6

> master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected 
> 
>
> Key: HBASE-19004
> URL: https://issues.apache.org/jira/browse/HBASE-19004
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: lujie
>
> When I send the stop regionserver command, a log entry in the master confuses 
> me: THIS SHOULD NOT HAPPEN: unexpected. I checked the code and found that the 
> log is printed in the default branch (e.g. if () {} else if () {} else { 
> log.warn() }). I think this condition should be handled explicitly.
> {code:java}
> 2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
> zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
> ENABLING
> 2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
> Bulk assigning 1 region(s) across 3 server(s), round-robin=true
> 2017-10-13 16:28:28,388 INFO  
> [hadoop11,16000,1507883241250-GeneralBulkAssigner-0] 
> master.AssignmentManager: Assigning 1 region(s) to 
> hadoop11,16020,1507883241942
> 2017-10-13 16:28:28,394 INFO  
> [hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
> Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
> server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
> ts=1507883308394, server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
> Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
> ts=1507883308394, server=hadoop11,16020,1507883241942} to 
> {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
> server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
> Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
> server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
> state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:36,517 INFO  [main-EventThread] 
> zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, 
> processing expiration [hadoop11,16020,1507883241942]
> 2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
> procedure.ServerCrashProcedure: Start processing crashed 
> hadoop11,16020,1507883241942
> 2017-10-13 16:28:37,689 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> dead splitlog workers [hadoop11,16020,1507883241942]
> 2017-10-13 16:28:37,693 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting is 
> empty dir, no logs to split
> 2017-10-13 16:28:37,695 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> Started splitting 0 logs in 
> [hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] for 
> [hadoop11,16020,1507883241942]
> 2017-10-13 16:28:37,701 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> finished splitting (more than or equal to) 0 bytes in 0 log files in 
> [hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] in 
> 6ms
> 2017-10-13 16:28:37,807 WARN  [ProcedureExecutor-4] master.RegionStates: THIS 
> SHOULD NOT HAPPEN: unexpected {2aaaf8304f2b09288f528ac0f105cc01 state=OPEN, 
> ts=1507883309163, server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:37,923 INFO  [ProcedureExecutor-4] 
> procedure.ServerCrashProcedure: Finished processing of crashed 
> hadoop11,16020,1507883241942
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19004) master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected

2017-10-13 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-19004:
--
Priority: Minor  (was: Major)

> master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected 
> 
>
> Key: HBASE-19004
> URL: https://issues.apache.org/jira/browse/HBASE-19004
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: lujie
>Priority: Minor
>
> When I send the stop regionserver command, a log entry in the master confuses 
> me: THIS SHOULD NOT HAPPEN: unexpected. I checked the code and found that the 
> log is printed in the default branch (e.g. if () {} else if () {} else { 
> log.warn() }). I think this condition should be handled explicitly.
> {code:java}
> 2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
> zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
> ENABLING
> 2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
> Bulk assigning 1 region(s) across 3 server(s), round-robin=true
> 2017-10-13 16:28:28,388 INFO  
> [hadoop11,16000,1507883241250-GeneralBulkAssigner-0] 
> master.AssignmentManager: Assigning 1 region(s) to 
> hadoop11,16020,1507883241942
> 2017-10-13 16:28:28,394 INFO  
> [hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
> Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
> server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
> ts=1507883308394, server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
> Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
> ts=1507883308394, server=hadoop11,16020,1507883241942} to 
> {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
> server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
> Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
> server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
> state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:36,517 INFO  [main-EventThread] 
> zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, 
> processing expiration [hadoop11,16020,1507883241942]
> 2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
> procedure.ServerCrashProcedure: Start processing crashed 
> hadoop11,16020,1507883241942
> 2017-10-13 16:28:37,689 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> dead splitlog workers [hadoop11,16020,1507883241942]
> 2017-10-13 16:28:37,693 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting is 
> empty dir, no logs to split
> 2017-10-13 16:28:37,695 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> Started splitting 0 logs in 
> [hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] for 
> [hadoop11,16020,1507883241942]
> 2017-10-13 16:28:37,701 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
> finished splitting (more than or equal to) 0 bytes in 0 log files in 
> [hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] in 
> 6ms
> 2017-10-13 16:28:37,807 WARN  [ProcedureExecutor-4] master.RegionStates: THIS 
> SHOULD NOT HAPPEN: unexpected {2aaaf8304f2b09288f528ac0f105cc01 state=OPEN, 
> ts=1507883309163, server=hadoop11,16020,1507883241942}
> 2017-10-13 16:28:37,923 INFO  [ProcedureExecutor-4] 
> procedure.ServerCrashProcedure: Finished processing of crashed 
> hadoop11,16020,1507883241942
> {code}
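> A hypothetical sketch, not the actual RegionStates source: it only illustrates the if / else-if / else shape the reporter describes, where any state not matched by an earlier branch falls through to the final else and triggers the "THIS SHOULD NOT HAPPEN" warning (the class, method, and return values below are illustrative assumptions):
{code:java}
// Illustrative sketch of the branch structure described in the report.
// Known transitions are handled explicitly; anything else reaches the
// final else, which only logs a warning instead of handling the state.
public class RegionStatesSketch {
    enum State { OFFLINE, PENDING_OPEN, OPENING, OPEN }

    static String handle(State state) {
        if (state == State.OFFLINE) {
            return "assign";
        } else if (state == State.PENDING_OPEN || state == State.OPENING) {
            return "wait";
        } else {
            // Default branch: after a server crash a region can still be
            // OPEN here, which is the case the reporter runs into.
            System.out.println("THIS SHOULD NOT HAPPEN: unexpected " + state);
            return "warn";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(State.OPEN));
    }
}
{code}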



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19004) master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected

2017-10-13 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-19004:
--
Description: 
When I send the stop regionserver command, a log entry in the master confuses me: 
THIS SHOULD NOT HAPPEN: unexpected. Checking the code, I found the log is printed 
in the default branch (e.g. if() {} else if() {} else { log.warn() }).
 I think this condition should be handled

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionServer ephemeral node deleted, processing expiration 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
procedure.ServerCrashProcedure: Start processing crashed 
hadoop11,16020,1507883241942
2017-10-13 16:28:37,689 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
dead splitlog workers [hadoop11,16020,1507883241942]
2017-10-13 16:28:37,693 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting is 
empty dir, no logs to split
2017-10-13 16:28:37,695 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
Started splitting 0 logs in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] for 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,701 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
finished splitting (more than or equal to) 0 bytes in 0 log files in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] in 6ms
2017-10-13 16:28:37,807 WARN  [ProcedureExecutor-4] master.RegionStates: THIS 
SHOULD NOT HAPPEN: unexpected {2aaaf8304f2b09288f528ac0f105cc01 state=OPEN, 
ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:37,923 INFO  [ProcedureExecutor-4] 
procedure.ServerCrashProcedure: Finished processing of crashed 
hadoop11,16020,1507883241942
{code}


  was:
When I send the stop regionserver command, a log entry in the master confuses me: 
THIS SHOULD NOT HAPPEN: unexpected. Checking the code, I found the log is printed 
in the default branch (e.g. if() {} else if() {} else { log.warn() }). I think this 
condition should be handled

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionS


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203280#comment-16203280
 ] 

Hadoop QA commented on HBASE-18233:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
17s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}262m 39s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  3m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}301m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS |
|   | org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer |
|   | org.apache.hadoop.hbase.regionserver.TestCorruptedRegionStoreFile |
|   | org.apache.

[jira] [Updated] (HBASE-19004) master.RegionStates: THIS SHOULD NOT HAPPEN: unexpected

2017-10-13 Thread lujie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lujie updated HBASE-19004:
--
Description: 
When I send the stop regionserver command, a log entry in the master confuses 
me: THIS SHOULD NOT HAPPEN: unexpected. 
I checked the code and found that the log is printed in the fall-through else 
branch (e.g. if() {}
 else if() {}
 else {
log.warn()}).  I think this case should be handled explicitly

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerTracker: 
RegionServer ephemeral node deleted, processing expiration 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,428 INFO  [ProcedureExecutor-2] 
procedure.ServerCrashProcedure: Start processing crashed 
hadoop11,16020,1507883241942
2017-10-13 16:28:37,689 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
dead splitlog workers [hadoop11,16020,1507883241942]
2017-10-13 16:28:37,693 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting is 
empty dir, no logs to split
2017-10-13 16:28:37,695 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
Started splitting 0 logs in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] for 
[hadoop11,16020,1507883241942]
2017-10-13 16:28:37,701 INFO  [ProcedureExecutor-4] master.SplitLogManager: 
finished splitting (more than or equal to) 0 bytes in 0 log files in 
[hdfs://hadoop11:29000/hbase/WALs/hadoop11,16020,1507883241942-splitting] in 6ms
2017-10-13 16:28:37,807 WARN  [ProcedureExecutor-4] master.RegionStates: THIS 
SHOULD NOT HAPPEN: unexpected {2aaaf8304f2b09288f528ac0f105cc01 state=OPEN, 
ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:37,923 INFO  [ProcedureExecutor-4] 
procedure.ServerCrashProcedure: Finished processing of crashed 
hadoop11,16020,1507883241942
{code}


  was:
When I send the stop regionserver command, a log entry in the master confuses 
me: THIS SHOULD NOT HAPPEN: unexpected. 
I checked the code and found that the log is printed in the fall-through else 
branch (e.g. if() {} else if() {} else {log.warn()}).  I think this case should 
be handled explicitly

{code:java}
2017-10-13 16:28:28,366 INFO  [ProcedureExecutor-1] 
zookeeper.ZKTableStateManager: Moving table TestTable state from null to 
ENABLING
2017-10-13 16:28:28,387 INFO  [ProcedureExecutor-1] master.AssignmentManager: 
Bulk assigning 1 region(s) across 3 server(s), round-robin=true
2017-10-13 16:28:28,388 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.AssignmentManager: 
Assigning 1 region(s) to hadoop11,16020,1507883241942
2017-10-13 16:28:28,394 INFO  
[hadoop11,16000,1507883241250-GeneralBulkAssigner-0] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OFFLINE, ts=1507883308388, 
server=null} to {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:28,585 INFO  [AM.ZK.Worker-pool2-t10] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=PENDING_OPEN, 
ts=1507883308394, server=hadoop11,16020,1507883241942} to 
{2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942}
2017-10-13 16:28:29,163 INFO  [AM.ZK.Worker-pool2-t11] master.RegionStates: 
Transition {2aaaf8304f2b09288f528ac0f105cc01 state=OPENING, ts=1507883308585, 
server=hadoop11,16020,1507883241942} to {2aaaf8304f2b09288f528ac0f105cc01 
state=OPEN, ts=1507883309163, server=hadoop11,16020,1507883241942}
2017-10-13 16:28:36,517 INFO  [main-EventThread] zookeeper.RegionServerT

[jira] [Commented] (HBASE-18966) In-memory compaction/merge should update its time range

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203286#comment-16203286
 ] 

Anoop Sam John commented on HBASE-18966:


This patch uses the SYNC or NON_SYNC version of the TRT correctly now. But don't 
we need any change to update the TRT (TR) when the in-memory flush/compaction 
is happening? This matters for the EAGER type, where we may drop some of the 
Cells; the other types just retain all Cells.  Sorry, I could not see any 
related code. Or is that already happening? If so, please change the 
jira subject/desc accordingly.

> In-memory compaction/merge should update its time range
> ---
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update the {{TimeRange}}. This doesn't cause 
> any bugs currently because the {{TimeRange}} is used only for the store-level 
> ts filter, and the default {{TimeRange}} of an {{ImmutableSegment}} created by 
> in-memory compaction/merge has the maximum ts range.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203300#comment-16203300
 ] 

Anoop Sam John commented on HBASE-18747:


Do we really need to expose the new Filter and the Wrapper to CPs now?  They 
can do this work on their own, right? They can use preFlush/preCompact, where we 
pass the actual scanner used for the flush/compaction: they return a wrapper 
impl, the cells flow through it first, and they apply their logic there.  I 
don't think we need to provide this filter; it will only cause confusion with 
the client-side filters. I agree these are just helpers.  Or we can just move 
them to the example module itself. 

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a filter 
> when constructing {{StoreScanner}}.
> But I think filtering out some cells is a very important use case for CP users, 
> so we need to provide the ability in another way. Theoretically it can be 
> done by wrapping an {{InternalScanner}}, but I think we need to give an 
> example, or even some helper classes, to help CP users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
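The wrapping approach discussed in HBASE-18747 above can be sketched with a self-contained toy. Note that {{ToyScanner}} is a simplified stand-in invented for illustration, not HBase's actual {{InternalScanner}} interface (the real one iterates batches of {{Cell}} objects and has a richer contract); only the delegation-plus-filter shape is the point.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy stand-in for HBase's InternalScanner: simplified for illustration.
interface ToyScanner {
    boolean next(List<String> out); // returns true if more batches remain
    void close();
}

// A delegating scanner that drops cells rejected by a predicate: the same
// shape a CP user would return from a preFlush/preCompact hook.
class FilteringScanner implements ToyScanner {
    private final ToyScanner delegate;
    private final Predicate<String> keep;

    FilteringScanner(ToyScanner delegate, Predicate<String> keep) {
        this.delegate = delegate;
        this.keep = keep;
    }

    @Override
    public boolean next(List<String> out) {
        boolean more = delegate.next(out);
        out.removeIf(keep.negate()); // filter the batch in place
        return more;
    }

    @Override
    public void close() {
        delegate.close();
    }
}

public class ScannerWrapDemo {
    public static void main(String[] args) {
        ToyScanner base = new ToyScanner() {
            private boolean done = false;
            public boolean next(List<String> out) {
                if (!done) {
                    out.add("a:1");
                    out.add("b:DELETED");
                    out.add("c:2");
                    done = true;
                }
                return false; // single batch
            }
            public void close() {}
        };
        // Wrap the flush/compaction scanner and drop unwanted cells.
        ToyScanner wrapped = new FilteringScanner(base, c -> !c.endsWith("DELETED"));
        List<String> result = new ArrayList<>();
        wrapped.next(result);
        System.out.println(result); // [a:1, c:2]
    }
}
```

The cells always flow through the wrapper first, so no filter class or StoreScanner access is needed.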


[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203302#comment-16203302
 ] 

Anoop Sam John commented on HBASE-18747:


This preFlush hook is in some way the same as postFlushScannerOpen, correct?

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a filter 
> when constructing {{StoreScanner}}.
> But I think filtering out some cells is a very important use case for CP users, 
> so we need to provide the ability in another way. Theoretically it can be 
> done by wrapping an {{InternalScanner}}, but I think we need to give an 
> example, or even some helper classes, to help CP users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18950:
---
Attachment: HBASE-18950.master.002.patch

> Remove Optional parameters in AsyncAdmin interface
> --
>
> Key: HBASE-18950
> URL: https://issues.apache.org/jira/browse/HBASE-18950
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18950.master.001.patch, 
> HBASE-18950.master.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203302#comment-16203302
 ] 

Anoop Sam John edited comment on HBASE-18747 at 10/13/17 9:36 AM:
--

This preFlush hook is in some way the same as postFlushScannerOpen, correct?  I 
don't know why we did not follow the same pre/post hook pattern.  These two pre 
hooks are confusing IMO.  On first reading the patch, I thought you were doing 
something wrong, since a pre hook is being used for the wrap op.  Only after 
reading where we call these hooks did I realize the difference.  Should it be 
changed now to follow the same pattern as pre/postFlushScannerOpen?  Same for 
the compaction case. Just asking; not related to this Jira anyway.


was (Author: anoop.hbase):
This preFlush hook is in some way the same as postFlushScannerOpen, correct?

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a filter 
> when constructing {{StoreScanner}}.
> But I think filtering out some cells is a very important use case for CP users, 
> so we need to provide the ability in another way. Theoretically it can be 
> done by wrapping an {{InternalScanner}}, but I think we need to give an 
> example, or even some helper classes, to help CP users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18966) Use non-sync TimeRangeTracker as a replacement for TimeRange in ImmutableSegment

2017-10-13 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18966:
---
Summary: Use non-sync TimeRangeTracker as a replacement for TimeRange in 
ImmutableSegment  (was: In-memory compaction/merge should update its time range)

> Use non-sync TimeRangeTracker as a replacement for TimeRange in 
> ImmutableSegment
> 
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update the {{TimeRange}}. This doesn't cause 
> any bugs currently because the {{TimeRange}} is used only for the store-level 
> ts filter, and the default {{TimeRange}} of an {{ImmutableSegment}} created by 
> in-memory compaction/merge has the maximum ts range.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203308#comment-16203308
 ] 

Duo Zhang commented on HBASE-18747:
---

{quote}
Or we can just move them to the example module itself.
{quote}
To be honest, I feel the same way...
But in the past we did let users use a filter in compaction, so I'm a little 
nervous about removing all of this...
Anyway, if you feel the same way, I think it may be better to just let users do 
it themselves. We can provide more examples of how to use it correctly.

Thanks. Let me prepare a new patch.

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a filter 
> when constructing {{StoreScanner}}.
> But I think filtering out some cells is a very important use case for CP users, 
> so we need to provide the ability in another way. Theoretically it can be 
> done by wrapping an {{InternalScanner}}, but I think we need to give an 
> example, or even some helper classes, to help CP users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18966) Use non-sync TimeRangeTracker as a replacement for TimeRange in ImmutableSegment

2017-10-13 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203309#comment-16203309
 ] 

Chia-Ping Tsai commented on HBASE-18966:


bq. is that already happening?
That is already happening; see 
[Segment#updateMetaInfo|https://github.com/apache/hbase/blob/d35d8376a70a8de63c5d232a46e39657ba739eef/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java#L283].

bq. If so, please change the jira subject/desc accordingly.
Done. Thanks for the reminder.


> Use non-sync TimeRangeTracker as a replacement for TimeRange in 
> ImmutableSegment
> 
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update the {{TimeRange}}. This doesn't cause 
> any bugs currently because the {{TimeRange}} is used only for the store-level 
> ts filter, and the default {{TimeRange}} of an {{ImmutableSegment}} created by 
> in-memory compaction/merge has the maximum ts range.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18966) Use non-sync TimeRangeTracker as a replacement for TimeRange in ImmutableSegment

2017-10-13 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18966:
---
Description: 
The in-memory compaction/merge does a great job of optimizing the memory layout 
for cells, but it doesn't update the {{TimeRange}}. This doesn't cause any bugs 
currently because the {{TimeRange}} is used only for the store-level ts filter, 
and the default {{TimeRange}} of an {{ImmutableSegment}} created by in-memory 
compaction/merge has the maximum ts range.

The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}}, taken to 
avoid the sync operation in {{TimeRangeTracker}}. HBASE-

  was:The in-memory compaction/merge does a great job of optimizing the memory 
layout for cells, but it doesn't update the {{TimeRange}}. This doesn't cause 
any bugs currently because the {{TimeRange}} is used only for the store-level 
ts filter, and the default {{TimeRange}} of an {{ImmutableSegment}} created by 
in-memory compaction/merge has the maximum ts range.  


> Use non-sync TimeRangeTracker as a replacement for TimeRange in 
> ImmutableSegment
> 
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge does a great job of optimizing the memory 
> layout for cells, but it doesn't update the {{TimeRange}}. This doesn't cause 
> any bugs currently because the {{TimeRange}} is used only for the store-level 
> ts filter, and the default {{TimeRange}} of an {{ImmutableSegment}} created by 
> in-memory compaction/merge has the maximum ts range.  
> The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}}, taken to 
> avoid the sync operation in {{TimeRangeTracker}}. HBASE-



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16868) Add a replicate_all flag to avoid misuse the namespaces and table-cfs config of replication peer

2017-10-13 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203315#comment-16203315
 ] 

Guanghao Zhang commented on HBASE-16868:


bq. Let's pick this up
OK. Let me rebase against the master branch and upload a new patch.

> Add a replicate_all flag to avoid misuse the namespaces and table-cfs config 
> of replication peer
> 
>
> Key: HBASE-16868
> URL: https://issues.apache.org/jira/browse/HBASE-16868
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-16868.master.001.patch, 
> HBASE-16868.master.002.patch, HBASE-16868.master.003.patch
>
>
> First add a new peer by shell cmd.
> {code}
> add_peer '1', CLUSTER_KEY => "server1.cie.com:2181:/hbase".
> {code}
> If we don't set namespaces and table-cfs in the peer config, it means all 
> tables are replicated to the peer cluster.
> Then append a table to the peer config.
> {code}
> append_peer_tableCFs '1', {"table1" => []}
> {code}
> Then this peer will only replicate table1 to the peer cluster: it switches 
> from replicating all tables to replicating only one. This is very easy to 
> misuse in a production cluster, so we should prevent appending a table to a 
> peer that replicates all tables.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
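The replicate_all guard proposed in HBASE-16868 above can be sketched as a self-contained toy. The names here ({{PeerConfig}}, {{replicateAll}}, {{appendTableCFs}}) are hypothetical illustrations, not HBase's actual replication API; the point is only the state check that refuses the easy-to-misuse transition from "replicate all tables" to "replicate one table".

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposed flag; names are illustrative only.
class PeerConfig {
    boolean replicateAll = true;             // default: replicate everything
    Map<String, List<String>> tableCFs = new HashMap<>();

    void appendTableCFs(String table, List<String> cfs) {
        if (replicateAll) {
            // Refuse the silent switch from "all tables" to "one table".
            throw new IllegalStateException(
                "Peer replicates all tables; disable replicateAll before appending table-cfs");
        }
        tableCFs.put(table, cfs);
    }
}

public class ReplicateAllDemo {
    public static void main(String[] args) {
        PeerConfig peer = new PeerConfig();
        try {
            peer.appendTableCFs("table1", List.of()); // rejected: still replicate-all
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        peer.replicateAll = false;                    // explicit opt-out first
        peer.appendTableCFs("table1", List.of());     // now allowed
        System.out.println(peer.tableCFs.keySet());   // [table1]
    }
}
```

With the guard, narrowing the replication scope requires an explicit opt-out instead of happening as a side effect of append.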


[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203316#comment-16203316
 ] 

Duo Zhang commented on HBASE-18747:
---

{quote}
This preFlush hook is in some way the same as postFlushScannerOpen, correct?
{quote}
Yes.

{quote}
I don't know why we did not follow the same pre/post hook pattern.
{quote}
I think in the past we wanted users to create a StoreScanner in the CP hooks, 
and since in preFlush/preCompact we already have the scanner, we introduced the 
preXXXScannerOpen method.

There is also a preStoreScannerOpen hook on the normal read path; see 
NoOpScanPolicyObserver for its usage.

Thanks.

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a filter 
> when constructing {{StoreScanner}}.
> But I think filtering out some cells is a very important use case for CP users, 
> so we need to provide the ability in another way. Theoretically it can be 
> done by wrapping an {{InternalScanner}}, but I think we need to give an 
> example, or even some helper classes, to help CP users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203324#comment-16203324
 ] 

Jingcheng Du commented on HBASE-18602:
--

Thanks [~chia7712]. I will take a look at it when I have time.
Hi [~suxingfate], would you mind adding a unit test for a misplaced region in 
rsgroup? I suspect moving a region will throw an NPE when the hosting server is 
null in the RegionPlan. Thanks.

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch, 
> HBASE-18602-master-v2.patch, HBASE-18602-master-v3.patch
>
>
> While walking through the rsgroup code, I found that no element is ever added 
> to the variable misplacedRegions. This makes the unassign-region code 
> non-functional, and according to my test it is actually unnecessary to do 
> that.
> RSGroupBasedLoadBalancer.java
> {code:java}
>   private Map<ServerName, List<HRegionInfo>> correctAssignments(
>       Map<ServerName, List<HRegionInfo>> existingAssignments)
>       throws HBaseIOException {
>     Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>     List<HRegionInfo> misplacedRegions = new LinkedList<>();
>     correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>     for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>         existingAssignments.entrySet()) {
>       ServerName sName = assignments.getKey();
>       correctAssignments.put(sName, new LinkedList<>());
>       List<HRegionInfo> regions = assignments.getValue();
>       for (HRegionInfo region : regions) {
>         RSGroupInfo info = null;
>         try {
>           info = rsGroupInfoManager.getRSGroup(
>               rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>         } catch (IOException exp) {
>           LOG.debug("RSGroup information null for region of table " + region.getTable(),
>               exp);
>         }
>         if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>           correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>         } else {
>           correctAssignments.get(sName).add(region);
>         }
>       }
>     }
>     // TODO bulk unassign?
>     // unassign misplaced regions, so that they are assigned to correct groups.
>     for (HRegionInfo info : misplacedRegions) {
>       try {
>         this.masterServices.getAssignmentManager().unassign(info);
>       } catch (IOException e) {
>         throw new HBaseIOException(e);
>       }
>     }
>     return correctAssignments;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18966) Use non-sync TimeRangeTracker as a replacement for TimeRange in ImmutableSegment

2017-10-13 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18966:
---
Description: 
The in-memory compaction/merge updates only the {{TimeRangeTracker}} when 
creating a new {{ImmutableSegment}}, but the time information used for the time 
filter is the {{TimeRange}} rather than the {{TimeRangeTracker}}. This doesn't 
cause any bugs currently because the {{TimeRange}} is used only for the 
store-level ts filter, and the default {{TimeRange}} of an {{ImmutableSegment}} 
created by in-memory compaction/merge has the maximum ts range.

The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}}, taken to 
avoid the sync operation in {{TimeRangeTracker}}. We can use the non-sync TRT 
introduced by HBASE-18753 to replace the {{TimeRange}}.

  was:
The in-memory compaction/merge does a great job of optimizing the memory layout 
for cells, but it doesn't update the {{TimeRange}}. This doesn't cause any bugs 
currently because the {{TimeRange}} is used only for the store-level ts filter, 
and the default {{TimeRange}} of an {{ImmutableSegment}} created by in-memory 
compaction/merge has the maximum ts range.

The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}}, taken to 
avoid the sync operation in {{TimeRangeTracker}}. HBASE-


> Use non-sync TimeRangeTracker as a replacement for TimeRange in 
> ImmutableSegment
> 
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge updates only the {{TimeRangeTracker}} when 
> creating a new {{ImmutableSegment}}, but the time information used for the 
> time filter is the {{TimeRange}} rather than the {{TimeRangeTracker}}. This 
> doesn't cause any bugs currently because the {{TimeRange}} is used only for 
> the store-level ts filter, and the default {{TimeRange}} of an 
> {{ImmutableSegment}} created by in-memory compaction/merge has the maximum 
> ts range.  
> The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}}, taken to 
> avoid the sync operation in {{TimeRangeTracker}}. We can use the non-sync TRT 
> introduced by HBASE-18753 to replace the {{TimeRange}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
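The snapshot pattern described in HBASE-18966 above can be sketched in a self-contained way. The names here ({{Tracker}}, {{Range}}) are simplified stand-ins for HBase's {{TimeRangeTracker}} and {{TimeRange}}: writers update a synchronized tracker, and an immutable [min, max] snapshot is taken once so readers never touch the lock.

```java
// Immutable snapshot of a timestamp range, analogous to TimeRange.
final class Range {
    final long min, max;
    Range(long min, long max) { this.min = min; this.max = max; }
    boolean includes(long ts) { return ts >= min && ts <= max; }
}

// Mutable, synchronized tracker, analogous to the SYNC TimeRangeTracker.
class Tracker {
    private long min = Long.MAX_VALUE, max = Long.MIN_VALUE;

    synchronized void include(long ts) {   // writers pay for the lock
        if (ts < min) min = ts;
        if (ts > max) max = ts;
    }

    synchronized Range snapshot() {        // taken once, e.g. when a segment becomes immutable
        return new Range(min, max);
    }
}

public class SnapshotDemo {
    public static void main(String[] args) {
        Tracker t = new Tracker();
        t.include(100);
        t.include(300);
        Range r = t.snapshot();            // lock-free reads from here on
        System.out.println(r.includes(200)); // true
        System.out.println(r.includes(400)); // false
    }
}
```

The bug class discussed in the issue arises when only the tracker is updated but filters keep consulting a stale snapshot; HBASE-18966 instead replaces the snapshot with a non-sync tracker so there is a single source of truth.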


[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1620#comment-1620
 ] 

Chia-Ping Tsai commented on HBASE-18602:


The tests related to rsgroup are flaky now. I think this issue should be 
blocked until HBASE-18350 is resolved. 

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch, 
> HBASE-18602-master-v2.patch, HBASE-18602-master-v3.patch
>
>
> While walking through the rsgroup code, I found that no element is ever added 
> to the variable misplacedRegions. This makes the unassign-region code 
> non-functional, and according to my test it is actually unnecessary to do 
> that.
> RSGroupBasedLoadBalancer.java
> {code:java}
>   private Map<ServerName, List<HRegionInfo>> correctAssignments(
>       Map<ServerName, List<HRegionInfo>> existingAssignments)
>       throws HBaseIOException {
>     Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>     List<HRegionInfo> misplacedRegions = new LinkedList<>();
>     correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>     for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>         existingAssignments.entrySet()) {
>       ServerName sName = assignments.getKey();
>       correctAssignments.put(sName, new LinkedList<>());
>       List<HRegionInfo> regions = assignments.getValue();
>       for (HRegionInfo region : regions) {
>         RSGroupInfo info = null;
>         try {
>           info = rsGroupInfoManager.getRSGroup(
>               rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>         } catch (IOException exp) {
>           LOG.debug("RSGroup information null for region of table " + region.getTable(),
>               exp);
>         }
>         if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>           correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>         } else {
>           correctAssignments.get(sName).add(region);
>         }
>       }
>     }
>     // TODO bulk unassign?
>     // unassign misplaced regions, so that they are assigned to correct groups.
>     for (HRegionInfo info : misplacedRegions) {
>       try {
>         this.masterServices.getAssignmentManager().unassign(info);
>       } catch (IOException e) {
>         throw new HBaseIOException(e);
>       }
>     }
>     return correctAssignments;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16868) Add a replicate_all flag to avoid misuse the namespaces and table-cfs config of replication peer

2017-10-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16868:
---
Attachment: HBASE-16868.master.004.patch

> Add a replicate_all flag to avoid misuse the namespaces and table-cfs config 
> of replication peer
> 
>
> Key: HBASE-16868
> URL: https://issues.apache.org/jira/browse/HBASE-16868
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-16868.master.001.patch, 
> HBASE-16868.master.002.patch, HBASE-16868.master.003.patch, 
> HBASE-16868.master.004.patch
>
>
> First, add a new peer via the shell:
> {code}
> add_peer '1', CLUSTER_KEY => "server1.cie.com:2181:/hbase"
> {code}
> If we don't set namespaces and table-cfs in the peer config, it means all 
> tables are replicated to the peer cluster.
> Then append a table to the peer config:
> {code}
> append_peer_tableCFs '1', {"table1" => []}
> {code}
> Now this peer replicates only table1 to the peer cluster: it changed from 
> replicating all tables in the cluster to replicating a single table. This is 
> very easy to misuse in a production cluster, so we should avoid appending a 
> table to a peer which replicates all tables.
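The proposed flag can be modeled in a few lines. This is NOT the HBase ReplicationPeerConfig API, just an illustrative sketch of the replicate_all rule the description argues for (all class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PeerConfigModel {
    private final Map<String, List<String>> tableCFs = new HashMap<>();
    private boolean replicateAll = true; // the proposed explicit flag

    void setReplicateAll(boolean replicateAll) { this.replicateAll = replicateAll; }

    void appendTable(String table, List<String> cfs) {
        if (replicateAll) {
            // With the flag, narrowing a replicate-all peer becomes an
            // explicit two-step change instead of a silent side effect.
            throw new IllegalStateException(
                "peer replicates all tables; clear replicate_all first");
        }
        tableCFs.put(table, cfs);
    }

    boolean replicates(String table) {
        return replicateAll || tableCFs.containsKey(table);
    }

    public static void main(String[] args) {
        PeerConfigModel peer = new PeerConfigModel();
        System.out.println(peer.replicates("table2"));   // true: replicate all
        peer.setReplicateAll(false);
        peer.appendTable("table1", List.of());
        System.out.println(peer.replicates("table2"));   // false: narrowed
    }
}
```

The key design point is that `appendTable` refuses to silently flip a replicate-all peer into a single-table peer, which is the misuse scenario described above.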



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always false

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203343#comment-16203343
 ] 

Hadoop QA commented on HBASE-18990:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
20s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
40m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m  
3s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18990 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891991/HBASE-18990.master.002.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e47dec5b17ce 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 883c358 |
| Default Java |

[jira] [Updated] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18747:
--
Attachment: HBASE-18747-v1.patch

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747-v1.patch, HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a 
> filter when constructing a {{StoreScanner}}.
> But filtering out cells is a very important use case for CP users, so we 
> need to provide the ability in another way. Theoretically it can be done by 
> wrapping an {{InternalScanner}}, but I think we need to give an example, or 
> even some helper classes, to help CP users.
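The "wrap an InternalScanner" idea can be sketched with a simplified stand-in interface (this is not the real org.apache.hadoop.hbase.regionserver.InternalScanner API; the shape below is only meant to show the delegating-scanner pattern the description proposes):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

public class DelegatingScannerDemo {
    interface Scanner {
        boolean next(List<String> out); // true while more batches remain
    }

    // Delegates to the wrapped scanner and drops cells the predicate rejects;
    // this is the shape a DelegatingInternalScanner-style helper would take.
    static Scanner filtering(Scanner delegate, Predicate<String> keep) {
        return out -> {
            List<String> buf = new ArrayList<>();
            boolean more = delegate.next(buf);
            for (String cell : buf) {
                if (keep.test(cell)) {
                    out.add(cell);
                }
            }
            return more;
        };
    }

    static List<String> collectFiltered(List<String> cells, Predicate<String> keep) {
        Iterator<String> it = cells.iterator();
        Scanner base = out -> {            // toy scanner: one cell per batch
            if (it.hasNext()) {
                out.add(it.next());
            }
            return it.hasNext();
        };
        Scanner filtered = filtering(base, keep);
        List<String> got = new ArrayList<>();
        while (filtered.next(got)) {
            // keep draining until the delegate reports no more batches
        }
        return got;
    }

    public static void main(String[] args) {
        System.out.println(collectFiltered(
            List.of("a1", "b2", "a3"), c -> c.startsWith("a"))); // [a1, a3]
    }
}
```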





[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203353#comment-16203353
 ] 

Duo Zhang commented on HBASE-18747:
---

Remove CellFilter and InternalScannerWrapper. Add a DelegatingInternalScanner 
in the example module.

Ping [~anoop.hbase].

Thanks.

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747-v1.patch, HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a 
> filter when constructing a {{StoreScanner}}.
> But filtering out cells is a very important use case for CP users, so we 
> need to provide the ability in another way. Theoretically it can be done by 
> wrapping an {{InternalScanner}}, but I think we need to give an example, or 
> even some helper classes, to help CP users.





[jira] [Commented] (HBASE-18873) Hide protobufs in GlobalQuotaSettings

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203361#comment-16203361
 ] 

Anoop Sam John commented on HBASE-18873:


So here we hide the functions returning proto objects by adding a new class 
extension. Can the CP user still get all the quota-related info they would 
ideally see? I mean, if this is a ThrottleSettings, can they get details like 
which user, which table, and what the throttle is? We should make sure they 
can get all such details, or else there is no point in passing the object to 
the CP hook. Approach-wise it is OK, though the naming of the new class is a 
bit odd.

> Hide protobufs in GlobalQuotaSettings
> -
>
> Key: HBASE-18873
> URL: https://issues.apache.org/jira/browse/HBASE-18873
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18873.001.branch-2.patch
>
>
> HBASE-18807 cleaned up direct protobuf use in the Coprocessor APIs for 
> quota-related functions. However, one new POJO introduced to hide these 
> protocol buffers still exposes PBs via some methods.
> We should try to hide those as well.
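The hiding pattern under discussion can be sketched as a wrapper that keeps the protobuf private and exposes only plain values. `QuotaProto` below stands in for a generated PB message; all names are illustrative, not the actual GlobalQuotaSettings API:

```java
public class QuotaView {
    static final class QuotaProto {       // stand-in for a protobuf message
        final long softLimitBytes;
        QuotaProto(long softLimitBytes) { this.softLimitBytes = softLimitBytes; }
    }

    private final QuotaProto proto;       // kept private, never returned
    QuotaView(QuotaProto proto) { this.proto = proto; }

    // POJO accessor: callers get the value, not the PB object, so the PB
    // type never leaks into the coprocessor-facing API surface.
    long getSoftLimitBytes() { return proto.softLimitBytes; }

    public static void main(String[] args) {
        QuotaView v = new QuotaView(new QuotaProto(1024));
        System.out.println(v.getSoftLimitBytes()); // prints 1024
    }
}
```

The point Anoop raises applies here: the wrapper is only useful if it grows an accessor for every detail the PB carried, otherwise the CP hook receives an opaque object.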





[jira] [Commented] (HBASE-18966) Use non-sync TimeRangeTracker as a replacement for TimeRange in ImmutableSegment

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203362#comment-16203362
 ] 

Anoop Sam John commented on HBASE-18966:


+1

> Use non-sync TimeRangeTracker as a replacement for TimeRange in 
> ImmutableSegment
> 
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge updates only the {{TimeRangeTracker}} when 
> creating a new {{ImmutableSegment}}, but the time information used for the 
> time filter is the {{TimeRange}} rather than the {{TimeRangeTracker}}. It 
> doesn't cause any bugs currently because the {{TimeRange}} is used only for 
> the store-level ts filter, and the default {{TimeRange}} of an 
> {{ImmutableSegment}} created by in-memory compaction/merge has the maximum 
> ts range.
> The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}} to avoid 
> the sync operation happening in {{TimeRangeTracker}}. We can use the 
> non-sync trt introduced by HBASE-18753 to replace the {{TimeRange}}.
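A toy version of the tracker makes the trade-off concrete: once a segment is immutable, min/max never change, so the unsynchronized variant is safe and avoids lock overhead on the read path. This sketch is not the HBase TimeRangeTracker class; names and methods are illustrative:

```java
public class TimeRangeDemo {
    static final class NonSyncTimeRangeTracker {
        private long min = Long.MAX_VALUE;
        private long max = Long.MIN_VALUE;

        // No synchronization: only safe to mutate before the owning
        // segment becomes immutable.
        void includeTimestamp(long ts) {
            if (ts < min) min = ts;
            if (ts > max) max = ts;
        }

        // Store-level ts filter: does [from, to) overlap the tracked range?
        boolean includesTimeRange(long from, long to) {
            return max >= from && min < to;
        }
    }

    public static void main(String[] args) {
        NonSyncTimeRangeTracker trt = new NonSyncTimeRangeTracker();
        trt.includeTimestamp(10);
        trt.includeTimestamp(20);
        System.out.println(trt.includesTimeRange(15, 25)); // true
        System.out.println(trt.includesTimeRange(21, 30)); // false
    }
}
```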





[jira] [Commented] (HBASE-18747) Introduce new example and helper classes to tell CP users how to do filtering on scanners

2017-10-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203370#comment-16203370
 ] 

Anoop Sam John commented on HBASE-18747:


+1

> Introduce new example and helper classes to tell CP users how to do filtering 
> on scanners
> -
>
> Key: HBASE-18747
> URL: https://issues.apache.org/jira/browse/HBASE-18747
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18747-v1.patch, HBASE-18747.patch
>
>
> Finally we decided that CP users should not have the ability to create 
> {{StoreScanner}} or {{StoreFileScanner}}, so it is impossible for them to 
> filter out some cells during flush or compaction by simply providing a 
> filter when constructing a {{StoreScanner}}.
> But filtering out cells is a very important use case for CP users, so we 
> need to provide the ability in another way. Theoretically it can be done by 
> wrapping an {{InternalScanner}}, but I think we need to give an example, or 
> even some helper classes, to help CP users.





[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Wang, Xinglong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203375#comment-16203375
 ] 

Wang, Xinglong commented on HBASE-18602:


[~dujin...@gmail.com] there is a UT, testMisplacedRegions, in TestRSGroups, 
marked ignored by 
[HBASE-18350|https://issues.apache.org/jira/browse/HBASE-18350]. 
[~chia7712] is right; the UT has issues running now. I am looking into it.

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch, 
> HBASE-18602-master-v2.patch, HBASE-18602-master-v3.patch
>
>
> While walking through the rsgroup code, I found that the variable 
> misplacedRegions never has any elements added to it, which makes the 
> unassign-region code non-functional. And according to my test, it is 
> actually unnecessary to do that.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
>     Map<ServerName, List<HRegionInfo>> existingAssignments)
>     throws HBaseIOException {
>   Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>   List<HRegionInfo> misplacedRegions = new LinkedList<>();
>   correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>   for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>       existingAssignments.entrySet()) {
>     ServerName sName = assignments.getKey();
>     correctAssignments.put(sName, new LinkedList<>());
>     List<HRegionInfo> regions = assignments.getValue();
>     for (HRegionInfo region : regions) {
>       RSGroupInfo info = null;
>       try {
>         info = rsGroupInfoManager.getRSGroup(
>             rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>       } catch (IOException exp) {
>         LOG.debug("RSGroup information null for region of table " +
>             region.getTable(), exp);
>       }
>       if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>         correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>       } else {
>         correctAssignments.get(sName).add(region);
>       }
>     }
>   }
>   // TODO bulk unassign?
>   // Unassign misplaced regions, so that they are assigned to correct groups.
>   for (HRegionInfo info : misplacedRegions) {
>     try {
>       this.masterServices.getAssignmentManager().unassign(info);
>     } catch (IOException e) {
>       throw new HBaseIOException(e);
>     }
>   }
>   return correctAssignments;
> }
> {code}





[jira] [Commented] (HBASE-18945) Make a Public interface for CellComparator

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203381#comment-16203381
 ] 

Hadoop QA commented on HBASE-18945:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 61 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  8m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 7s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
40m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hbase-common generated 5 new + 0 unchanged - 0 fixed = 
5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hbase-prefix-tree in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 50s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
48s{color} | {color:gre

[jira] [Commented] (HBASE-18966) Use non-sync TimeRangeTracker as a replacement for TimeRange in ImmutableSegment

2017-10-13 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203386#comment-16203386
 ] 

Chia-Ping Tsai commented on HBASE-18966:


Thanks Anoop. However, the failed tests are related to my patch :( 
Will be back asap

> Use non-sync TimeRangeTracker as a replacement for TimeRange in 
> ImmutableSegment
> 
>
> Key: HBASE-18966
> URL: https://issues.apache.org/jira/browse/HBASE-18966
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18966.v0.patch, HBASE-18966.v1.patch, 
> HBASE-18966.v2.patch, HBASE-18966.v2.patch
>
>
> The in-memory compaction/merge updates only the {{TimeRangeTracker}} when 
> creating a new {{ImmutableSegment}}, but the time information used for the 
> time filter is the {{TimeRange}} rather than the {{TimeRangeTracker}}. It 
> doesn't cause any bugs currently because the {{TimeRange}} is used only for 
> the store-level ts filter, and the default {{TimeRange}} of an 
> {{ImmutableSegment}} created by in-memory compaction/merge has the maximum 
> ts range.
> The {{TimeRange}} used to be a snapshot of the {{TimeRangeTracker}} to avoid 
> the sync operation happening in {{TimeRangeTracker}}. We can use the 
> non-sync trt introduced by HBASE-18753 to replace the {{TimeRange}}.





[jira] [Updated] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-10-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-14247:
---
Attachment: HBASE-14247.master.005.patch

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff, HBASE-14247.master.001.patch, 
> HBASE-14247.master.002.patch, HBASE-14247.master.003.patch, 
> HBASE-14247.master.004.patch, HBASE-14247.master.005.patch
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory oldWALs. In big clusters, because of a long WAL TTL or disabled 
> replication, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which will crash the hbase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome~ Thanks
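The proposed layout amounts to deriving a per-server subdirectory from each archived WAL's name. A hedged sketch, assuming the convention that a WAL file name starts with the encoded server name followed by a dot and a timestamp (the directory names and helper below are illustrative, not the patch's actual code):

```java
public class OldWalsLayout {
    // A WAL name like "rs1%2C16020%2C150789.1507891" starts with the encoded
    // server name; use that prefix as the archive subdirectory so no single
    // directory accumulates every server's old WALs.
    static String archivePath(String oldWalsRoot, String walFileName) {
        int dot = walFileName.lastIndexOf('.');
        String serverName = dot > 0 ? walFileName.substring(0, dot) : walFileName;
        return oldWalsRoot + "/" + serverName + "/" + walFileName;
    }

    public static void main(String[] args) {
        System.out.println(archivePath("/hbase/oldWALs",
            "rs1%2C16020%2C150789.1507891"));
        // -> /hbase/oldWALs/rs1%2C16020%2C150789/rs1%2C16020%2C150789.1507891
    }
}
```

With this split, each server's directory grows only with that server's WALs, so the HDFS max-directory-items limit is reached per server rather than cluster-wide.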





[jira] [Commented] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203409#comment-16203409
 ] 

Jingcheng Du commented on HBASE-18602:
--

bq. Applying your idea, the FavoredStochasticBalancer#testMisplacedRegions pass 
now.
Yes [~chia7712], this one can pass.
I ran the other two, and they timed out in my env: they waited for the RS to 
stop in {{stopServersAndWaitUntilProcessed}} and timed out, while the 
assignment had not started yet. Not sure why the RS cannot be stopped.

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch, 
> HBASE-18602-master-v2.patch, HBASE-18602-master-v3.patch
>
>
> While walking through the rsgroup code, I found that the variable 
> misplacedRegions never has any elements added to it, which makes the 
> unassign-region code non-functional. And according to my test, it is 
> actually unnecessary to do that.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
>     Map<ServerName, List<HRegionInfo>> existingAssignments)
>     throws HBaseIOException {
>   Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>   List<HRegionInfo> misplacedRegions = new LinkedList<>();
>   correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>   for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>       existingAssignments.entrySet()) {
>     ServerName sName = assignments.getKey();
>     correctAssignments.put(sName, new LinkedList<>());
>     List<HRegionInfo> regions = assignments.getValue();
>     for (HRegionInfo region : regions) {
>       RSGroupInfo info = null;
>       try {
>         info = rsGroupInfoManager.getRSGroup(
>             rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>       } catch (IOException exp) {
>         LOG.debug("RSGroup information null for region of table " +
>             region.getTable(), exp);
>       }
>       if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>         correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>       } else {
>         correctAssignments.get(sName).add(region);
>       }
>     }
>   }
>   // TODO bulk unassign?
>   // Unassign misplaced regions, so that they are assigned to correct groups.
>   for (HRegionInfo info : misplacedRegions) {
>     try {
>       this.masterServices.getAssignmentManager().unassign(info);
>     } catch (IOException e) {
>       throw new HBaseIOException(e);
>     }
>   }
>   return correctAssignments;
> }
> {code}





[jira] [Comment Edited] (HBASE-18602) rsgroup cleanup unassign code

2017-10-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203409#comment-16203409
 ] 

Jingcheng Du edited comment on HBASE-18602 at 10/13/17 11:33 AM:
-

bq. Applying your idea, the FavoredStochasticBalancer#testMisplacedRegions pass 
now.
Yes [~chia7712], this one can pass if we remove the unassign operations and add 
a hosting server for the misplaced RegionPlans.
I ran the other two, and they timed out in my env: they waited for the RS to 
stop in {{stopServersAndWaitUntilProcessed}} and timed out, while the 
assignment had not started yet. Not sure why the RS cannot be stopped.


was (Author: jingcheng.du):
bq. Applying your idea, the FavoredStochasticBalancer#testMisplacedRegions pass 
now.
Yes  [~chia7712], this one can pass.
I ran the other two, and they were timed out in my env, where they waited for 
the RS stop in {{stopServersAndWaitUntilProcessed}} and timeout, and the 
assignment is not started yet. Not sure why RS cannot be stopped.

> rsgroup cleanup unassign code
> -
>
> Key: HBASE-18602
> URL: https://issues.apache.org/jira/browse/HBASE-18602
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-18602-master-v1.patch, 
> HBASE-18602-master-v2.patch, HBASE-18602-master-v3.patch
>
>
> While walking through the rsgroup code, I found that the variable 
> misplacedRegions never has any elements added to it, which makes the 
> unassign-region code non-functional. And according to my test, it is 
> actually unnecessary to do that.
> RSGroupBasedLoadBalancer.java
> {code:java}
> private Map<ServerName, List<HRegionInfo>> correctAssignments(
>     Map<ServerName, List<HRegionInfo>> existingAssignments)
>     throws HBaseIOException {
>   Map<ServerName, List<HRegionInfo>> correctAssignments = new TreeMap<>();
>   List<HRegionInfo> misplacedRegions = new LinkedList<>();
>   correctAssignments.put(LoadBalancer.BOGUS_SERVER_NAME, new LinkedList<>());
>   for (Map.Entry<ServerName, List<HRegionInfo>> assignments :
>       existingAssignments.entrySet()) {
>     ServerName sName = assignments.getKey();
>     correctAssignments.put(sName, new LinkedList<>());
>     List<HRegionInfo> regions = assignments.getValue();
>     for (HRegionInfo region : regions) {
>       RSGroupInfo info = null;
>       try {
>         info = rsGroupInfoManager.getRSGroup(
>             rsGroupInfoManager.getRSGroupOfTable(region.getTable()));
>       } catch (IOException exp) {
>         LOG.debug("RSGroup information null for region of table " +
>             region.getTable(), exp);
>       }
>       if ((info == null) || (!info.containsServer(sName.getAddress()))) {
>         correctAssignments.get(LoadBalancer.BOGUS_SERVER_NAME).add(region);
>       } else {
>         correctAssignments.get(sName).add(region);
>       }
>     }
>   }
>   // TODO bulk unassign?
>   // Unassign misplaced regions, so that they are assigned to correct groups.
>   for (HRegionInfo info : misplacedRegions) {
>     try {
>       this.masterServices.getAssignmentManager().unassign(info);
>     } catch (IOException e) {
>       throw new HBaseIOException(e);
>     }
>   }
>   return correctAssignments;
> }
> {code}





[jira] [Commented] (HBASE-18990) ServerLoad doesn't override #equals which leads to #equals in ClusterStatus always false

2017-10-13 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203421#comment-16203421
 ] 

Reid Chan commented on HBASE-18990:
---

Hi [~chia7712], [~apurtell], any suggestions?
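The symptom is the default identity-based Object#equals: two ServerLoad instances built from identical data still compare unequal, so any ClusterStatus comparison containing them reports false. A self-contained sketch of the fix shape (the Load class and its single field are illustrative stand-ins, not the real ServerLoad):

```java
import java.util.Objects;

public class ServerLoadDemo {
    static final class Load {
        final int requests;
        Load(int requests) { this.requests = requests; }

        // Value equality: without this override, equals() falls back to
        // reference identity and distinct instances never compare equal.
        @Override public boolean equals(Object o) {
            return o instanceof Load && ((Load) o).requests == requests;
        }
        // equals/hashCode must be overridden together to keep the contract.
        @Override public int hashCode() { return Objects.hash(requests); }
    }

    public static void main(String[] args) {
        // With the override, value equality holds across distinct instances.
        System.out.println(new Load(7).equals(new Load(7))); // true
    }
}
```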

> ServerLoad doesn't override #equals which leads to #equals in ClusterStatus 
> always false
> 
>
> Key: HBASE-18990
> URL: https://issues.apache.org/jira/browse/HBASE-18990
> Project: HBase
>  Issue Type: Bug
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-18990.master.001.patch, 
> HBASE-18990.master.002.patch
>
>






[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18946:
---
Attachment: HBASE-18946.patch

Parking a patch here. One thing is that for CreateTableProcedure, even if we 
create more than one table with replicas it seems to work, 
but the same for EnableTableProcedure hangs.
I think the overall problem could be that the ProcedureExecutor's worker 
threads do not get enough cycles to process all the subprocs. However, parking 
this here to see if the test case with CreateTableHandler can work in QA. If 
that also hangs then this solution is probably not the right one.

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign to, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 
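The undesirable placement can be stated as an invariant: when enough servers are available, no two replicas of the same region should share a server. A hedged sketch of such a check — the data shapes below are hypothetical, not the balancer's real API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReplicaPlacementCheck {
    // replicaToServer: replica id -> hosting server; replicaToPrimary:
    // replica id -> the primary region it belongs to (hypothetical shapes).
    public static boolean replicasOnDistinctServers(Map<String, String> replicaToServer,
                                                    Map<String, String> replicaToPrimary) {
        Map<String, Set<String>> serversPerRegion = new HashMap<>();
        for (Map.Entry<String, String> e : replicaToServer.entrySet()) {
            String primary = replicaToPrimary.get(e.getKey());
            // Set#add returns false when this server already hosts a replica
            // of the same region - the bad placement reported in this issue.
            if (!serversPerRegion.computeIfAbsent(primary, k -> new HashSet<>())
                                 .add(e.getValue())) {
                return false;
            }
        }
        return true;
    }
}
```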



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18946:
---
Status: Patch Available  (was: Open)

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign to, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203454#comment-16203454
 ] 

Hadoop QA commented on HBASE-18946:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m 
18s{color} | {color:red} Docker failed to build yetus/hbase:5d60123. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18946 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892052/HBASE-18946.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign to, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-13346) Clean up Filter package for post 1.0 s/KeyValue/Cell/g

2017-10-13 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203460#comment-16203460
 ] 

Zheng Hu commented on HBASE-13346:
--

Do we need to wait until branch HBASE-18410 is merged into master before we 
commit this patch to master? Otherwise, there will be many conflicts when 
merging master with branch HBASE-18410, and resolving those conflicts may be 
as much work as re-preparing a patch for this issue.

> Clean up Filter package for post 1.0 s/KeyValue/Cell/g
> --
>
> Key: HBASE-13346
> URL: https://issues.apache.org/jira/browse/HBASE-13346
> Project: HBase
>  Issue Type: Bug
>  Components: API, Filters
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Tamas Penzes
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-13346.master.001.patch, 
> HBASE-13346.master.002.patch, HBASE-13346.master.003.patch, 
> HBASE-13346.master.003.patch, HBASE-13346.master.004.patch, 
> HBASE-13346.master.005.patch, HBASE-13346.master.006.patch
>
>
> Since we have a bit of a messy Filter API with KeyValue vs Cell reference 
> mixed up all over the place, I recommend cleaning this up once and for all. 
> There should be no {{KeyValue}} (or {{kv}}, {{kvs}} etc.) in any method or 
> parameter name.
> This includes deprecating and renaming filters too, for example 
> {{FirstKeyOnlyFilter}}, which really should be named {{FirstKeyValueFilter}} 
> as it does _not_ just return the key, but the entire cell. It should be 
> deprecated and renamed to {{FirstCellFilter}} (or {{FirstColumnFilter}} if 
> you prefer).
> In general we should clarify and settle on {{KeyValue}} vs {{Cell}} vs 
> {{Column}} in our naming. The latter two are the only ones going forward with 
> the public API, and are used synonymously. We should carefully check which is 
> better suited (is it really a specific cell, or the newest cell, aka the 
> newest column value) and settle on a naming schema.
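The deprecate-and-rename step described above typically keeps the old name as a thin @Deprecated alias of its replacement. A minimal sketch of that pattern — the class bodies here are invented for illustration, not the real filter implementations:

```java
public class FilterRenameSketch {
    /** New Cell-based name proposed above (body invented for illustration). */
    public static class FirstCellFilter {
        public String name() {
            return "FirstCellFilter";
        }
    }

    /**
     * Old name kept for source compatibility; it now only extends the
     * replacement, and the deprecation marker steers users to the new name.
     */
    @Deprecated
    public static class FirstKeyOnlyFilter extends FirstCellFilter {
    }
}
```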



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18945) Make a Public interface for CellComparator

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18945:
---
Status: Open  (was: Patch Available)

> Make a Public interface for CellComparator
> --
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18495.patch, HBASE-18945_2.patch, 
> HBASE-18945_3.patch, HBASE-18945_4.patch, HBASE-18945_5.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183, it is better to 
> expose CellComparator as a public interface so that it can be used in the 
> Region/Store interfaces exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion on whether to expose it at all layers or only at Region. 
> However, since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like those 
> where a Cell is compared with an incoming byte[], used in index comparisons 
> etc.).
> One way to expose it is as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful; it only allows 
> comparing cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and also allows comparing individual cell components. 
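The "more powerful interface" idea can be sketched as a Comparator that also exposes component-wise comparisons. In this sketch a cell is simplified to a {row, family, qualifier} string triple; the method names are illustrative, not the real HBase API:

```java
import java.util.Comparator;

public class CellComparatorSketch implements Comparator<String[]> {
    // A "cell" here is just {row, family, qualifier} for illustration.
    public int compareRows(String[] a, String[] b) {
        return a[0].compareTo(b[0]);
    }

    public int compareFamilies(String[] a, String[] b) {
        return a[1].compareTo(b[1]);
    }

    public int compareQualifiers(String[] a, String[] b) {
        return a[2].compareTo(b[2]);
    }

    // Comparator#compare alone (the HBASE-18826 approach) gives only this
    // whole-cell ordering; the component methods above are the extra power.
    @Override
    public int compare(String[] a, String[] b) {
        int d = compareRows(a, b);
        if (d != 0) {
            return d;
        }
        d = compareFamilies(a, b);
        if (d != 0) {
            return d;
        }
        return compareQualifiers(a, b);
    }
}
```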



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18945) Make a Public interface for CellComparator

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18945:
---
Status: Patch Available  (was: Open)

Fixed some test case issues.

> Make a Public interface for CellComparator
> --
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18495.patch, HBASE-18945_2.patch, 
> HBASE-18945_3.patch, HBASE-18945_4.patch, HBASE-18945_5.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183, it is better to 
> expose CellComparator as a public interface so that it can be used in the 
> Region/Store interfaces exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion on whether to expose it at all layers or only at Region. 
> However, since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like those 
> where a Cell is compared with an incoming byte[], used in index comparisons 
> etc.).
> One way to expose it is as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful; it only allows 
> comparing cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and also allows comparing individual cell components. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18946:
---
Status: Patch Available  (was: Open)

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign to, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18946:
---
Attachment: HBASE-18946.patch

Retry QA.

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign to, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18946:
---
Status: Open  (was: Patch Available)

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign to, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18945) Make a Public interface for CellComparator

2017-10-13 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18945:
---
Attachment: HBASE-18945_5.patch

Retry QA.

> Make a Public interface for CellComparator
> --
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18495.patch, HBASE-18945_2.patch, 
> HBASE-18945_3.patch, HBASE-18945_4.patch, HBASE-18945_5.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183, it is better to 
> expose CellComparator as a public interface so that it can be used in the 
> Region/Store interfaces exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion on whether to expose it at all layers or only at Region. 
> However, since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like those 
> where a Cell is compared with an incoming byte[], used in index comparisons 
> etc.).
> One way to expose it is as being done now in HBASE-18826 - by exposing the 
> return type as Comparator. But this is not powerful; it only allows 
> comparing cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and also allows comparing individual cell components. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203476#comment-16203476
 ] 

Hadoop QA commented on HBASE-18950:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 
36s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18950 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892020/HBASE-18950.master.002.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 4c3cb3917aca 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 883c358 |
| Default Java | 

[jira] [Commented] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls

2017-10-13 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203491#comment-16203491
 ] 

Abhishek Singh Chouhan commented on HBASE-18127:


Yep, passing the object here and there and creating it across flows is a bit 
ugly :) Creating the OperationContext in RpcCallContext and getting it when 
the ObserverContext is created looks to be a better idea. Let me come up with 
a patch that does what you mentioned. Thanks for taking the time and 
reviewing, [~anoop.hbase]!!
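The idea discussed here — state created once per operation and visible to every hook of that operation — can be sketched as a small attribute map. All names below are hypothetical, not the eventual HBase API:

```java
import java.util.HashMap;
import java.util.Map;

public class OperationContextSketch {
    private final Map<String, Object> attributes = new HashMap<>();

    // A postBatchMutate hook would set a marker here, e.g. "batchDone"...
    public void setAttribute(String key, Object value) {
        attributes.put(key, value);
    }

    // ...and the single postPut/postDelete hooks would read the marker
    // and skip work that was already done in the batch.
    public Object getAttribute(String key) {
        return attributes.get(key);
    }
}
```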

> Enable state to be passed between the region observer coprocessor hook calls
> 
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-18127.master.001.patch, 
> HBASE-18127.master.002.patch, HBASE-18127.master.002.patch, 
> HBASE-18127.master.003.patch, HBASE-18127.master.004.patch, 
> HBASE-18127.master.005.patch, HBASE-18127.master.005.patch, 
> HBASE-18127.master.006.patch
>
>
> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called.
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-10-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203505#comment-16203505
 ] 

Duo Zhang commented on HBASE-14247:
---

+1.

> Separate the old WALs into different regionserver directories
> -
>
> Key: HBASE-14247
> URL: https://issues.apache.org/jira/browse/HBASE-14247
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Liu Shaohui
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff, HBASE-14247.master.001.patch, 
> HBASE-14247.master.002.patch, HBASE-14247.master.003.patch, 
> HBASE-14247.master.004.patch, HBASE-14247.master.005.patch
>
>
> Currently all old WALs of regionservers are archived into the single 
> directory oldWALs. In big clusters, because of a long WAL TTL or disabled 
> replication, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which will crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL.
> Suggestions are welcome. Thanks
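The proposed layout amounts to a path computation: derive a per-server subdirectory from the archived file's name so no single directory approaches the HDFS item limit. The naming assumption below (server name is the prefix before the first dot) is an illustration for this sketch, not the exact WAL naming rule:

```java
public class OldWalLayoutSketch {
    // Archive each old WAL under oldWALs/<servername>/ instead of the flat
    // oldWALs/, so no single directory approaches HDFS's
    // dfs.namenode.fs-limits.max-directory-items.
    public static String archivePath(String oldWalsRoot, String walFileName) {
        int dot = walFileName.indexOf('.');
        // Assumption for this sketch: the server name is the prefix before
        // the first '.' of the WAL file name.
        String serverName = dot >= 0 ? walFileName.substring(0, dot) : walFileName;
        return oldWalsRoot + "/" + serverName + "/" + walFileName;
    }
}
```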



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16868) Add a replicate_all flag to avoid misuse the namespaces and table-cfs config of replication peer

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203551#comment-16203551
 ] 

Hadoop QA commented on HBASE-16868:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 34 new + 313 unchanged - 7 fixed = 
347 total (was 320) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
10s{color} | {color:red} The patch generated 33 new + 321 unchanged - 1 fixed = 
354 total (was 322) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
59s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
34m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 42s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {co

[jira] [Updated] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18950:
---
Attachment: HBASE-18950.master.003.patch

Attach a 003 patch addressed review comments.

> Remove Optional parameters in AsyncAdmin interface
> --
>
> Key: HBASE-18950
> URL: https://issues.apache.org/jira/browse/HBASE-18950
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18950.master.001.patch, 
> HBASE-18950.master.002.patch, HBASE-18950.master.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-15410) Utilize the max seek value when all Filters in MUST_PASS_ALL FilterList return SEEK_NEXT_USING_HINT

2017-10-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203578#comment-16203578
 ] 

Ted Yu commented on HBASE-15410:


Ping [~busbey]

> Utilize the max seek value when all Filters in MUST_PASS_ALL FilterList 
> return SEEK_NEXT_USING_HINT
> ---
>
> Key: HBASE-15410
> URL: https://issues.apache.org/jira/browse/HBASE-15410
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: filter, perfomance
> Fix For: 1.5.0, 2.0.0-alpha-3, HBASE-18410
>
> Attachments: 15410-wip.patch, 15410.branch-1.patch, 15410.v1.patch, 
> 15410.v2.patch, 15410.v3.patch
>
>
> As Preston mentioned in the comment in HBASE-15243:
> https://issues.apache.org/jira/browse/HBASE-15243?focusedCommentId=15143557&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15143557
> An optimization for filters returning a SEEK_NEXT_USING_HINT would be to seek 
> to the highest hint (Any previous/lower row won't be accepted by the filter 
> returning that seek).
> This JIRA is to explore this potential optimization.
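The optimization can be sketched directly: when every filter in a MUST_PASS_ALL list returns SEEK_NEXT_USING_HINT, it is safe to seek to the largest hint, since any earlier row would be rejected by the filter that produced that hint. Rows are simplified to strings in this sketch:

```java
import java.util.Comparator;
import java.util.List;

public class MaxHintSketch {
    // Each hint is a lower bound on acceptable rows for one filter, so the
    // intersection of all filters can only start at the maximum hint.
    public static String maxHint(List<String> hints) {
        return hints.stream().max(Comparator.naturalOrder()).orElse(null);
    }
}
```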



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-16868) Add a replicate_all flag to avoid misuse the namespaces and table-cfs config of replication peer

2017-10-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-16868:
---
Attachment: HBASE-16868.master.005.patch

> Add a replicate_all flag to avoid misuse the namespaces and table-cfs config 
> of replication peer
> 
>
> Key: HBASE-16868
> URL: https://issues.apache.org/jira/browse/HBASE-16868
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-16868.master.001.patch, 
> HBASE-16868.master.002.patch, HBASE-16868.master.003.patch, 
> HBASE-16868.master.004.patch, HBASE-16868.master.005.patch
>
>
> First add a new peer by shell cmd.
> {code}
> add_peer '1', CLUSTER_KEY => "server1.cie.com:2181:/hbase".
> {code}
> If we don't set namespaces and table-cfs in the peer config, it means all 
> tables are replicated to the peer cluster.
> Then append a table to the peer config.
> {code}
> append_peer_tableCFs '1', {"table1" => []}
> {code}
> Then this peer will only replicate table1 to the peer cluster: it changes from 
> replicating all tables in the cluster to replicating only one table. This is 
> very easy to misuse in a production cluster, so we should avoid appending a 
> table to a peer which replicates all tables.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-14247) Separate the old WALs into different regionserver directories

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203606#comment-16203606
 ] 

Hadoop QA commented on HBASE-14247:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 
40s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-14247 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892041/HBASE-14247.master.005.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 06b3448157a5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 883c358 |
| Default Jav

[jira] [Commented] (HBASE-18505) Our build/yetus personality will run tests on individual modules and then on all (i.e. 'root'). Should do one or other

2017-10-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203612#comment-16203612
 ] 

Mike Drob commented on HBASE-18505:
---

[~busbey] - v2 look good to you?

> Our build/yetus personality will run tests on individual modules and then on 
> all (i.e. 'root'). Should do one or other
> --
>
> Key: HBASE-18505
> URL: https://issues.apache.org/jira/browse/HBASE-18505
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: stack
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
> Attachments: HBASE-18505.patch, HBASE-18505.v2.patch
>
>
> In runs on end of HBASE-17056, a patch that touches all modules, [~busbey] 
> noticed that we were doing unit suite twice... Once for each individual 
> module and then again for all/root because patch had root changes in it. We 
> shouldn't do all if we are doing 'root' as per [~busbey]
> Here is tail of console output:
> {code}
> 
> 10:50:30 cd /testptch/hbase/hbase-spark
> 10:50:30 mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.procedure.TestProcedureAdmin.java,**/master.assignment.TestMergeTableRegionsProcedure.java,**/quotas.TestSnapshotQuotaObserverChore.java,**/quotas.TestQuotaThrottle.java,**/client.TestReplicasClient.java,**/client.locking.TestEntityLocks.java,**/security.visibility.TestVisibilityLabelsReplication.java,**/client.TestShell.java,**/master.assignment.TestAssignmentManager.java,**/replication.TestMultiSlaveReplication.java,**/coprocessor.TestRegionObserverInterface.java,**/master.balancer.TestDefaultLoadBalancer.java,**/client.TestReplicaWithCluster.java,**/io.hfile.TestLruBlockCache.java,**/master.balancer.TestFavoredStochasticLoadBalancer.java,**/regionserver.wal.TestAsyncLogRolling.java,**/master.balancer.TestStochasticLoadBalancer.java,**/client.TestMultiParallel.java,**/replication.TestReplicationWithTags.java,**/security.access.TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestReplicator.java,**/master.assignment.TestAssignmentOnRSCrash.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/quotas.TestQuotaStatusRPCs.java,**/regionserver.TestHRegionWithInMemoryFlush.java,**/master.cleaner.TestHFileCleaner.java
>  clean test -fae > /testptch/patchprocess/patch-unit-hbase-spark.txt 2>&1
> 10:55:35 Elapsed:   5m 14s
> 10:55:45 cd /testptch/hbase
> 10:55:45 mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.procedure.TestProcedureAdmin.java,**/master.assignment.TestMergeTableRegionsProcedure.java,**/quotas.TestSnapshotQuotaObserverChore.java,**/quotas.TestQuotaThrottle.java,**/client.TestReplicasClient.java,**/client.locking.TestEntityLocks.java,**/security.visibility.TestVisibilityLabelsReplication.java,**/client.TestShell.java,**/master.assignment.TestAssignmentManager.java,**/replication.TestMultiSlaveReplication.java,**/coprocessor.TestRegionObserverInterface.java,**/master.balancer.TestDefaultLoadBalancer.java,**/client.TestReplicaWithCluster.java,**/io.hfile.TestLruBlockCache.java,**/master.balancer.TestFavoredStochasticLoadBalancer.java,**/regionserver.wal.TestAsyncLogRolling.java,**/master.balancer.TestStochasticLoadBalancer.java,**/client.TestMultiParallel.java,**/replication.TestReplicationWithTags.java,**/security.access.TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestReplicator.java,**/master.assignment.TestAssignmentOnRSCrash.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/quotas.TestQuotaStatusRPCs.java,**/regionserver.TestHRegionWithInMemoryFlush.java,**/master.cleaner.TestHFileCleaner.java
>  clean test -fae > /testptch/patchprocess/patch-unit-root.txt 2>&1
> 13:00:13 Build was aborted
> ...
> {code}
> I'd aborted the run because it seemed to be taking too long but on subsequent 
> examination, it was actually making progress.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18912) Update Admin methods to return Lists instead of arrays

2017-10-13 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203634#comment-16203634
 ] 

Guanghao Zhang commented on HBASE-18912:


[~psomogyi] I prefer List as the return type, because the List interface has 
many useful methods that make it easier for users. We would need to deprecate 
the old methods and add new ones with the new return type. But if we only 
change the return type, Java treats the two as the same method, so we would 
also have to change the method names, and deprecate many old methods along 
the way. So unless we can find better method names or have a good reason to 
deprecate the old ones, I don't think we should change the return type from 
array to List just for its own sake...
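The overload problem mentioned here follows from how Java defines method signatures: the return type is not part of the signature, so an array-returning and a List-returning method with the same name and parameters cannot coexist. A minimal illustration (the method names are hypothetical, not the real Admin API):

```java
import java.util.Arrays;
import java.util.List;

public class AdminSketch {
    // Existing array-returning method (illustrative only).
    public String[] listNames() {
        return new String[] { "t1", "t2" };
    }

    // A method differing only in return type would NOT compile, because
    // Java overload resolution ignores the return type:
    //
    //   public List<String> listNames() { ... }  // error: already defined
    //
    // so switching array -> List forces a rename (plus deprecating the old
    // method), which is the cost Guanghao is weighing above.
    public List<String> listNameList() {
        return Arrays.asList("t1", "t2");
    }
}
```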

> Update Admin methods to return Lists instead of arrays
> --
>
> Key: HBASE-18912
> URL: https://issues.apache.org/jira/browse/HBASE-18912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18805) Unify Admin and AsyncAdmin

2017-10-13 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203636#comment-16203636
 ] 

Guanghao Zhang commented on HBASE-18805:


[~appy] Great idea. I will add a test in HBASE-18911.

> Unify Admin and AsyncAdmin
> --
>
> Key: HBASE-18805
> URL: https://issues.apache.org/jira/browse/HBASE-18805
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Balazs Meszaros
> Fix For: 2.0.0-beta-1
>
>
> Admin and AsyncAdmin differ some places:
> - some methods missing from AsyncAdmin (e.g. methods with String regex),
> - some methods have different names (listTables vs listTableDescriptors),
> - some method parameters are different (e.g. AsyncAdmin has Optional<> 
> parameters),
> - AsyncAdmin returns Lists instead of arrays (e.g. listTableNames),
> - unify Javadoc comments,
> - ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203655#comment-16203655
 ] 

Hadoop QA commented on HBASE-18946:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
15s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
59s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
47m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 28s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.client.TestScanWithoutFetchingData |
|   | org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay |
|   | org.apache.hadoop.hbase.master.TestMasterMetricsWrapper |
|   | org.apache.hadoop.hbase.master.procedure.TestDisableTableProcedure |
|   | org.apache.hadoop.hbase.regionserver.TestRowTooBig |
|   | org.apache.hadoop.hbase.regionserver.wal.TestAsyncWALReplay |
|   | org.apache.hadoop.hbase.regionserver.TestSplitLogWorker |
|   | org.apache.hadoop.hbase.master.procedure.TestServerCrashProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestModifyTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestDeleteTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestEnableTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestCreateTableProcedure |
|   | org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence |
|   | org.apache.hadoop.hbase.coprocessor.TestHTableWrapper |
|   | org.apache.hadoop.hbase.regionserver.compacti

[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203698#comment-16203698
 ] 

Josh Elser commented on HBASE-18998:


LGTM, thanks for turning around a quick patch for this!

Are you planning to look at the other branches for similar changes after 
applying this one?

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.5.0, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1650)
> {code}
> Here is code from branch-1.1 :
> {code}
> if (!mutations.isEmpty() && !walSyncSuccessful)

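The quoted stack trace ends in java.util.NoSuchElementException thrown from Collections$EmptyIterator.next(), i.e. processRowsWithLocks dereferenced the iterator of an empty rowsToLock collection. A minimal sketch of the failure mode and the defensive fix it calls for (hypothetical method names, not the actual HRegion code):

```java
import java.util.Collection;

public class RowsToLockSketch {
    // The failure mode from the stack trace: iterator().next() on a
    // possibly-empty collection throws NoSuchElementException.
    static byte[] firstRowUnsafe(Collection<byte[]> rowsToLock) {
        return rowsToLock.iterator().next();
    }

    // Defensive variant: check for emptiness before dereferencing, which is
    // the shape of check a fix for this issue needs to make.
    static byte[] firstRowOrNull(Collection<byte[]> rowsToLock) {
        return rowsToLock.isEmpty() ? null : rowsToLock.iterator().next();
    }
}
```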
[jira] [Commented] (HBASE-18505) Our build/yetus personality will run tests on individual modules and then on all (i.e. 'root'). Should do one or other

2017-10-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203701#comment-16203701
 ] 

Sean Busbey commented on HBASE-18505:
-

+1

> Our build/yetus personality will run tests on individual modules and then on 
> all (i.e. 'root'). Should do one or other
> --
>
> Key: HBASE-18505
> URL: https://issues.apache.org/jira/browse/HBASE-18505
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: stack
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
> Attachments: HBASE-18505.patch, HBASE-18505.v2.patch
>
>
> In runs on end of HBASE-17056, a patch that touches all modules, [~busbey] 
> noticed that we were doing unit suite twice... Once for each individual 
> module and then again for all/root because patch had root changes in it. We 
> shouldn't do all if we are doing 'root' as per [~busbey]
> Here is tail of console output:
> {code}
> 
> 10:50:30 cd /testptch/hbase/hbase-spark
> 10:50:30 mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.procedure.TestProcedureAdmin.java,**/master.assignment.TestMergeTableRegionsProcedure.java,**/quotas.TestSnapshotQuotaObserverChore.java,**/quotas.TestQuotaThrottle.java,**/client.TestReplicasClient.java,**/client.locking.TestEntityLocks.java,**/security.visibility.TestVisibilityLabelsReplication.java,**/client.TestShell.java,**/master.assignment.TestAssignmentManager.java,**/replication.TestMultiSlaveReplication.java,**/coprocessor.TestRegionObserverInterface.java,**/master.balancer.TestDefaultLoadBalancer.java,**/client.TestReplicaWithCluster.java,**/io.hfile.TestLruBlockCache.java,**/master.balancer.TestFavoredStochasticLoadBalancer.java,**/regionserver.wal.TestAsyncLogRolling.java,**/master.balancer.TestStochasticLoadBalancer.java,**/client.TestMultiParallel.java,**/replication.TestReplicationWithTags.java,**/security.access.TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestReplicator.java,**/master.assignment.TestAssignmentOnRSCrash.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/quotas.TestQuotaStatusRPCs.java,**/regionserver.TestHRegionWithInMemoryFlush.java,**/master.cleaner.TestHFileCleaner.java
>  clean test -fae > /testptch/patchprocess/patch-unit-hbase-spark.txt 2>&1
> 10:55:35 Elapsed:   5m 14s
> 10:55:45 cd /testptch/hbase
> 10:55:45 mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.procedure.TestProcedureAdmin.java,**/master.assignment.TestMergeTableRegionsProcedure.java,**/quotas.TestSnapshotQuotaObserverChore.java,**/quotas.TestQuotaThrottle.java,**/client.TestReplicasClient.java,**/client.locking.TestEntityLocks.java,**/security.visibility.TestVisibilityLabelsReplication.java,**/client.TestShell.java,**/master.assignment.TestAssignmentManager.java,**/replication.TestMultiSlaveReplication.java,**/coprocessor.TestRegionObserverInterface.java,**/master.balancer.TestDefaultLoadBalancer.java,**/client.TestReplicaWithCluster.java,**/io.hfile.TestLruBlockCache.java,**/master.balancer.TestFavoredStochasticLoadBalancer.java,**/regionserver.wal.TestAsyncLogRolling.java,**/master.balancer.TestStochasticLoadBalancer.java,**/client.TestMultiParallel.java,**/replication.TestReplicationWithTags.java,**/security.access.TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestReplicator.java,**/master.assignment.TestAssignmentOnRSCrash.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/quotas.TestQuotaStatusRPCs.java,**/regionserver.TestHRegionWithInMemoryFlush.java,**/master.cleaner.TestHFileCleaner.java
>  clean test -fae > /testptch/patchprocess/patch-unit-root.txt 2>&1
> 13:00:13 Build was aborted
> ...
> {code}
> I'd aborted the run because it seemed to be taking too long but on subsequent 
> examination, it was actually making progress.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203706#comment-16203706
 ] 

Ted Yu commented on HBASE-18946:


{code}
47public synchronized void addRegion(RegionStateNode node) {
{code}
Better to assert that the RegionStateNodes added are replicas of the same region.
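The assertion suggested here can be sketched with a simplified stand-in for RegionStateNode (region name plus replica id; the real class and its accessors differ, so treat every name below as hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class ReplicaGroupSketch {
    // Simplified stand-in for RegionStateNode.
    public record Node(String regionName, int replicaId) {}

    private final List<Node> nodes = new ArrayList<>();

    // The suggested invariant: every node added must be a replica of the
    // same region as the nodes already present.
    public synchronized void addRegion(Node node) {
        assert nodes.isEmpty() || nodes.get(0).regionName().equals(node.regionName())
            : "not a replica of the same region: " + node;
        nodes.add(node);
    }

    public synchronized int size() {
        return nodes.size();
    }
}
```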

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the stochastic load balancer, assigns replica regions to the same 
> RS. This happens when we have 3 RSs checked in and a table with 3 replicas. 
> When an RS goes down, replicas being assigned to the same RS is acceptable, 
> but when we have enough RSs to assign, this behaviour is undesirable and 
> defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18873) Hide protobufs in GlobalQuotaSettings

2017-10-13 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203710#comment-16203710
 ] 

Josh Elser commented on HBASE-18873:


bq. Approach wise it is ok though that naming of new class is bit odd.

Suggestions instead? :)

bq. I mean if this is a ThrottleSettings they can get the details like for 
which user on which table and what is the throttle etc?

Let me double check; we might already have an API to convert most of the 
internal protobufs back to some sort of POJOs. We could expose them on the 
visible GlobalQuotaSettings with a caveat that the methods are expensive and 
users shouldn't call them repeatedly.

> Hide protobufs in GlobalQuotaSettings
> -
>
> Key: HBASE-18873
> URL: https://issues.apache.org/jira/browse/HBASE-18873
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18873.001.branch-2.patch
>
>
> HBASE-18807 cleaned up direct protobuf use in the Coprocessor APIs for 
> quota-related functions. However, one new POJO introduced to hide these 
> protocol buffers still exposes PBs via some methods.
> We should try to hide those as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18350) RSGroups are broken under AMv2

2017-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203714#comment-16203714
 ] 

stack commented on HBASE-18350:
---

Looks like TestRSGroups didn't finish. Other RSGroup tests did, though. What 
do you reckon [~balazs.meszaros]?

> RSGroups are broken under AMv2
> --
>
> Key: HBASE-18350
> URL: https://issues.apache.org/jira/browse/HBASE-18350
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: Balazs Meszaros
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18350.master.001.patch, 
> HBASE-18350.master.002.patch, HBASE-18350.master.003.patch, 
> HBASE-18350.master.004.patch, HBASE-18350.master.004.patch
>
>
> The following RSGroups tests were disabled by Core Proc-V2 AM in HBASE-14614:
> - Disabled/Ignore TestRSGroupsOfflineMode#testOffline; need to dig in on what 
> offline is.
> - Disabled/Ignore TestRSGroups.
> This JIRA tracks the work to enable them (or remove/modify if not applicable).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18350) RSGroups are broken under AMv2

2017-10-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18350:
--
Attachment: HBASE-18350.master.004.patch

Retry

> RSGroups are broken under AMv2
> --
>
> Key: HBASE-18350
> URL: https://issues.apache.org/jira/browse/HBASE-18350
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: Balazs Meszaros
>Priority: Blocker
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18350.master.001.patch, 
> HBASE-18350.master.002.patch, HBASE-18350.master.003.patch, 
> HBASE-18350.master.004.patch, HBASE-18350.master.004.patch, 
> HBASE-18350.master.004.patch
>
>
> The following RSGroups tests were disabled by Core Proc-V2 AM in HBASE-14614:
> - Disabled/Ignore TestRSGroupsOfflineMode#testOffline; need to dig in on what 
> offline is.
> - Disabled/Ignore TestRSGroups.
> This JIRA tracks the work to enable them (or remove/modify if not applicable).





[jira] [Updated] (HBASE-18601) Update Htrace to 4.2

2017-10-13 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18601:
--
Attachment: HBASE-18601.master.010.patch

> Update Htrace to 4.2
> 
>
> Key: HBASE-18601
> URL: https://issues.apache.org/jira/browse/HBASE-18601
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Tamas Penzes
>Assignee: Tamas Penzes
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18601.master.001.patch, 
> HBASE-18601.master.002.patch, HBASE-18601.master.003 (3).patch, 
> HBASE-18601.master.003.patch, HBASE-18601.master.004.patch, 
> HBASE-18601.master.004.patch, HBASE-18601.master.005.patch, 
> HBASE-18601.master.006.patch, HBASE-18601.master.006.patch, 
> HBASE-18601.master.007.patch, HBASE-18601.master.007.patch, 
> HBASE-18601.master.007.patch, HBASE-18601.master.008.patch, 
> HBASE-18601.master.009.patch, HBASE-18601.master.009.patch, 
> HBASE-18601.master.010.patch, HBASE-18601.master.010.patch
>
>
> HTrace is not perfectly integrated into HBase: version 3.2.0 is buggy, and 
> the upgrade to 4.x is not trivial and would take time. It might not be worth 
> keeping it in this state, so it would be better to remove it.
> Of course this doesn't mean tracing would be useless, just that in this form 
> the use of HTrace 3.2 might not add any value to the project, and fixing it 
> would be far too much effort.
> -
> Based on the decision of the community, we keep HTrace for now and update 
> the version.





[jira] [Updated] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18998:
---
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.13
   1.2.7
   1.3.2

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1650)
> {code}
> Here is code from branch-1.1 :
> {code}
> if (!mutations.isEmpty() && !walSyncSuccessful) {
>   LOG.warn("Wal sync failed. Roll back " + mutations.size() +
>   " m
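
The NoSuchElementException in the trace above comes from taking the first 
element of processor.getRowsToLock() without checking whether the collection 
is empty. A minimal Java sketch of the defensive pattern, with hypothetical 
names (not the actual HBase patch):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;

public class RowLockGuard {

  // Return the first row to lock, or null when the processor locks no rows.
  // The buggy path effectively did rowsToLock.iterator().next(), which
  // throws NoSuchElementException on an empty collection, as seen in the
  // stack trace above.
  static String firstRowOrNull(Collection<String> rowsToLock) {
    Iterator<String> it = rowsToLock.iterator();
    return it.hasNext() ? it.next() : null;
  }

  public static void main(String[] args) {
    // An empty collection no longer throws; callers handle the null case.
    System.out.println(firstRowOrNull(Collections.<String>emptyList()));
    System.out.println(firstRowOrNull(Collections.singletonList("row1")));
  }
}
```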

[jira] [Updated] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18998:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review, Josh.

> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>

[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203726#comment-16203726
 ] 

Hudson commented on HBASE-18998:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #307 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/307/])
HBASE-18998 processor.getRowsToLock() always assumes there is some row (tedyu: 
rev 1d7ca57a49a1b775437e08645b0a978128ee38d0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>

[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203728#comment-16203728
 ] 

Hudson commented on HBASE-18998:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #322 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/322/])
HBASE-18998 processor.getRowsToLock() always assumes there is some row (tedyu: 
rev 1d7ca57a49a1b775437e08645b0a978128ee38d0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>

[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203743#comment-16203743
 ] 

Hudson commented on HBASE-18998:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #238 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/238/])
HBASE-18998 processor.getRowsToLock() always assumes there is some row (tedyu: 
rev a68465fba79bfe7fae9fdd72e0b8d07fccefca0e)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>

[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203744#comment-16203744
 ] 

Hudson commented on HBASE-18998:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #235 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/235/])
HBASE-18998 processor.getRowsToLock() always assumes there is some row (tedyu: 
rev a68465fba79bfe7fae9fdd72e0b8d07fccefca0e)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>

[jira] [Commented] (HBASE-18945) Make a Public interface for CellComparator

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203747#comment-16203747
 ] 

Hadoop QA commented on HBASE-18945:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 61 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 11m 
12s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green}  2m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hbase-common generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hbase-client generated 2 new + 2 unchanged - 0 fixed = 
4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hbase-prefix-tree in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m  
2s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {col

[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203755#comment-16203755
 ] 

Hudson commented on HBASE-18998:


FAILURE: Integrated in Jenkins build HBase-1.3-IT #236 (See 
[https://builds.apache.org/job/HBase-1.3-IT/236/])
HBASE-18998 processor.getRowsToLock() always assumes there is some row (tedyu: 
rev 1d7ca57a49a1b775437e08645b0a978128ee38d0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoen

[jira] [Commented] (HBASE-18355) Enable export snapshot tests that were disabled by Proc-V2 AM in HBASE-14614

2017-10-13 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203758#comment-16203758
 ] 

huaxiang sun commented on HBASE-18355:
--

Hi [~tedyu] and [~mdrob], is it good to go? The scope is test-only, so it will 
not be in the shaded jar. Thanks.

> Enable export snapshot tests that were disabled by Proc-V2 AM in HBASE-14614
> 
>
> Key: HBASE-18355
> URL: https://issues.apache.org/jira/browse/HBASE-18355
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: huaxiang sun
> Attachments: HBASE-18355-master_v001.patch, 
> HBASE-18355-master_v002.patch
>
>
> The Proc-V2 AM in HBASE-14614 disabled the following tests:
> - Disabled TestExportSnapshot Hangs. 
> - Disabled TestSecureExportSnapshot
> - Disabled TestMobSecureExportSnapshot and TestMobExportSnapshot
> This JIRA tracks the work to enable them.  If MOB requires more work, we 
> could split to 2 tickets.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18355) Enable export snapshot tests that were disabled by Proc-V2 AM in HBASE-14614

2017-10-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203767#comment-16203767
 ] 

Ted Yu commented on HBASE-18355:


Good by me.

> Enable export snapshot tests that were disabled by Proc-V2 AM in HBASE-14614
> 
>
> Key: HBASE-18355
> URL: https://issues.apache.org/jira/browse/HBASE-18355
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: huaxiang sun
> Attachments: HBASE-18355-master_v001.patch, 
> HBASE-18355-master_v002.patch
>
>
> The Proc-V2 AM in HBASE-14614 disabled the following tests:
> - Disabled TestExportSnapshot Hangs. 
> - Disabled TestSecureExportSnapshot
> - Disabled TestMobSecureExportSnapshot and TestMobExportSnapshot
> This JIRA tracks the work to enable them.  If MOB requires more work, we 
> could split to 2 tickets.





[jira] [Commented] (HBASE-18998) processor.getRowsToLock() always assumes there is some row being locked

2017-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203770#comment-16203770
 ] 

Hudson commented on HBASE-18998:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #972 (See 
[https://builds.apache.org/job/HBase-1.2-IT/972/])
HBASE-18998 processor.getRowsToLock() always assumes there is some row (tedyu: 
rev a68465fba79bfe7fae9fdd72e0b8d07fccefca0e)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> processor.getRowsToLock() always assumes there is some row being locked
> ---
>
> Key: HBASE-18998
> URL: https://issues.apache.org/jira/browse/HBASE-18998
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: 18998.v1.txt
>
>
> During testing, we observed the following exception:
> {code}
> 2017-10-12 02:52:26,683|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|1/1  DROP TABLE 
> testTable;
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|17/10/12 02:52:30 WARN 
> ipc.CoprocessorRpcChannel: Call failed on IOException
> 2017-10-12 02:52:30,320|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|org.apache.hadoop.hbase.DoNotRetryIOException:
>  org.apache.hadoop.hbase.DoNotRetryIOException: TESTTABLE: null
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:93)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1671)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14347)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7849)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
> 2017-10-12 02:52:30,321|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|Caused by: 
> java.util.NoSuchElementException
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> java.util.Collections$EmptyIterator.next(Collections.java:4189)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:7137)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6980)
> 2017-10-12 02:52:30,322|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateRowsWithLocks(MetaDataEndpointImpl.java:1966)
> 2017-10-12 02:52:30,323|INFO|MainThread|machine.py:164 - 
> run()||GUID=f4cd2a25-3040-41cc-b423-9ec7990048f4|at 
> org.apache.phoen

[jira] [Commented] (HBASE-18352) Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614

2017-10-13 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203784#comment-16203784
 ] 

huaxiang sun commented on HBASE-18352:
--

Hi [~vrodionov], morning! I am also looking at this replica failure. Is it ok 
for me to take this over from you? Right now, the real failure is 
testCreateTableWithMultipleReplicas. Thanks.

> Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614
> 
>
> Key: HBASE-18352
> URL: https://issues.apache.org/jira/browse/HBASE-18352
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: Vladimir Rodionov
>
> The following replica tests were disabled by Core Proc-V2 AM in HBASE-14614:
> - Disabled parts of...testCreateTableWithMultipleReplicas in 
> TestMasterOperationsForRegionReplicas There is an issue w/ assigning more 
> replicas if number of replicas is changed on us. See '/* DISABLED! FOR 
> NOW'.
> - Disabled testRegionReplicasOnMidClusterHighReplication in 
> TestStochasticLoadBalancer2
> - Disabled testFlushAndCompactionsInPrimary in TestRegionReplicas
> This JIRA tracks the work to enable them (or modify/remove if not applicable).





[jira] [Commented] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203805#comment-16203805
 ] 

Hadoop QA commented on HBASE-18950:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
36s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
45m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18950 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892072/HBASE-18950.master.003.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0d0cdb5d6494 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 883c358 |
| Default Java | 

[jira] [Commented] (HBASE-12260) MasterServices needs a short-back-and-sides; pare-back exposure of internals and IA.Private classes

2017-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203820#comment-16203820
 ] 

stack commented on HBASE-12260:
---

First, thanks for the great review [~appy] up on rb. Looking forward to going 
through it.

On Mocks vs MasterServices vs RegionServerServices vs 
CoprocessRegionServerServices, please see the email on dev list: 
http://apache-hbase.679495.n3.nabble.com/Looking-for-input-on-an-alpha-4-thorny-item-td4090953.html.
 In synopsis, MasterServices and RegionServerServices have been corrupted and, 
among other failings, have become a Coprocessor conduit to our internals that 
needs to be shut down for hbase2 (more background on this below). In this issue, 
MS is for CPs only (IA.LP). I write this in the class comment. I then refactor 
away any other use of MS, putting the HMaster impl in its stead. Meantime, 
Anoop has been on a similar project on the RS side. There he puts in place a 
new Interface CPRSS for CPs and RSS continues (though below I argue this is 
problematic).

Some background. MS and RSS started out as severe subsets of HMaster and 
HRegionServer function -- just enough to satisfy 80% need -- and could be 
plugged in in a mock form making it so we didn't have to bring up full servers 
in tests making them less resource intensive. This was their original intent (I 
think I did this).

Over time, lazily, we used the Interfaces internally in place of the 
implementations when starting up Services and Chores. We also allowed CPs access to 
the Interfaces. As we worked on internals, if we needed a facility in an 
awkward corner, no problem, just add the needed function to RSS and MS. As time 
passed, RSS and MS bulked up because of internal needs. Each time we added to 
the Interface, CPs got access such that now, CPs can access all critical 
internals via *Service.

How to proceed? Here, I take the radical route of purging the failed Interface 
project. Anoop over in the RS side, first wanted to add an 'internal' RSS and 
expose a subset to CPs. In review I thought that too ugly 
(HRegionServer+RSS+InternalRSS) and didn't think we'd be able to keep it 
straight going forward (where should I expose this new method?). Now, after 
feedback he has a CPRSS that is dedicated to CPs exclusively. I considered this 
a mess in a fashion similar to the HRS+RSS+IRSS noted above and it did not 
align w/ the radical retrofit done here.

Options for CPs:

 * Give the Interfaces over to CP totally and purge their internal use (as done 
here) echoing what we did to the Region+Store Interfaces.
 * Introduce YAI (Yet-Another-Interface) either:
 ** as a cutdown subset of current RSS; the new Interface would assume the RSS 
name. Internally we'd have a new more featureful Interface named InternalRSS or 
some such (why not just use HRS in this case?)
 ** Or we build a dedicated Interface for CP usage only giving it the (ugly?) 
name of CPRSS so no confusion around who the audience is.

For the latter, we'd still have a cleanup job to do around when to use HRS and 
when to use the Interface instead.

Let's figure this out quickly so we can cut an alpha4. The CP API depends on 
this decision. In fact, let me surface this on the dev list.
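The "dedicated Interface for CP usage" option described above can be sketched in a few lines of Java. All names here are illustrative only, not the actual HBase 2.0 API: the point is that coprocessors see a narrow, purpose-built surface while internals keep the full server class.

```java
// Cut-down, CP-facing view (hypothetical name echoing CPRSS).
interface CoprocessorRegionServerServices {
    String getServerName();
}

// Full server class implements the CP view; internal-only facilities
// stay off the interface, so coprocessors cannot reach them through
// the coprocessor environment.
class RegionServerImpl implements CoprocessorRegionServerServices {
    @Override
    public String getServerName() {
        return "rs1.example.com,16020";
    }

    // Internal-only method: NOT on the CP-facing interface.
    void abortServer(String reason) { /* internals only */ }
}

public class CpSurfaceDemo {
    public static void main(String[] args) {
        CoprocessorRegionServerServices env = new RegionServerImpl();
        System.out.println(env.getServerName());
        // env.abortServer("...") would not compile here: the CP view
        // exposes only the cut-down interface.
    }
}
```

The trade-off stack raises is exactly the maintenance question this sketch surfaces: every new internal method now needs a deliberate decision about whether it belongs on the CP-facing interface or only on the implementation.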

> MasterServices needs a short-back-and-sides; pare-back exposure of internals 
> and IA.Private classes
> ---
>
> Key: HBASE-12260
> URL: https://issues.apache.org/jira/browse/HBASE-12260
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Reporter: ryan rawson
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-12260.master.001.patch, 
> HBASE-12260.master.002.patch, HBASE-12260.master.003.patch, 
> HBASE-12260.master.004.patch, HBASE-12260.master.005.patch, 
> HBASE-12260.master.006.patch, HBASE-12260.master.007.patch, 
> HBASE-12260.master.008.patch, HBASE-12260.master.009.patch, 
> HBASE-12260.master.010.patch, HBASE-12260.master.011.patch, 
> HBASE-12260.master.011.patch, HBASE-12260.master.012.patch, 
> HBASE-12260.master.013.patch, HBASE-12260.master.014.patch
>
>
> A major issue with MasterServices is the MasterCoprocessorEnvironment exposes 
> this class even though MasterServices is tagged with 
> @InterfaceAudience.Private
> This means that the entire internals of the HMaster is essentially part of 
> the coprocessor API.  Many of the classes returned by the MasterServices API 
> are highly internal, extremely powerful, and subject to constant change.  
> Perhaps a new API to replace MasterServices that is use-case focused, and 
> justified based on real world co-processors would suit things better.





[jira] [Commented] (HBASE-18352) Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614

2017-10-13 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203838#comment-16203838
 ] 

Vladimir Rodionov commented on HBASE-18352:
---

no problem, go ahead.

> Enable Replica tests that were disabled by Proc-V2 AM in HBASE-14614
> 
>
> Key: HBASE-18352
> URL: https://issues.apache.org/jira/browse/HBASE-18352
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0-alpha-1
>Reporter: Stephen Yuan Jiang
>Assignee: Vladimir Rodionov
>
> The following replica tests were disabled by Core Proc-V2 AM in HBASE-14614:
> - Disabled parts of...testCreateTableWithMultipleReplicas in 
> TestMasterOperationsForRegionReplicas There is an issue w/ assigning more 
> replicas if number of replicas is changed on us. See '/* DISABLED! FOR 
> NOW'.
> - Disabled testRegionReplicasOnMidClusterHighReplication in 
> TestStochasticLoadBalancer2
> - Disabled testFlushAndCompactionsInPrimary in TestRegionReplicas
> This JIRA tracks the work to enable them (or modify/remove if not applicable).





[jira] [Commented] (HBASE-12260) MasterServices needs a short-back-and-sides; pare-back exposure of internals and IA.Private classes

2017-10-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203843#comment-16203843
 ] 

stack commented on HBASE-12260:
---

Hmm. On the thread on the dev list I state the pattern I adopt doing this 
MasterServices refactor and even ask if we should prefix all CP classes with a 
CP. That part of the email didn't get any commentary. My guess is that the 
implications are not commonly understood (and I didn't do a good enough job 
explaining).

> MasterServices needs a short-back-and-sides; pare-back exposure of internals 
> and IA.Private classes
> ---
>
> Key: HBASE-12260
> URL: https://issues.apache.org/jira/browse/HBASE-12260
> Project: HBase
>  Issue Type: Sub-task
>  Components: master
>Reporter: ryan rawson
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-12260.master.001.patch, 
> HBASE-12260.master.002.patch, HBASE-12260.master.003.patch, 
> HBASE-12260.master.004.patch, HBASE-12260.master.005.patch, 
> HBASE-12260.master.006.patch, HBASE-12260.master.007.patch, 
> HBASE-12260.master.008.patch, HBASE-12260.master.009.patch, 
> HBASE-12260.master.010.patch, HBASE-12260.master.011.patch, 
> HBASE-12260.master.011.patch, HBASE-12260.master.012.patch, 
> HBASE-12260.master.013.patch, HBASE-12260.master.014.patch
>
>
> A major issue with MasterServices is the MasterCoprocessorEnvironment exposes 
> this class even though MasterServices is tagged with 
> @InterfaceAudience.Private
> This means that the entire internals of the HMaster is essentially part of 
> the coprocessor API.  Many of the classes returned by the MasterServices API 
> are highly internal, extremely powerful, and subject to constant change.  
> Perhaps a new API to replace MasterServices that is use-case focused, and 
> justified based on real world co-processors would suit things better.





[jira] [Commented] (HBASE-18986) Remove unnecessary null check after CellUtil.cloneQualifier()

2017-10-13 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203859#comment-16203859
 ] 

Xiang Li commented on HBASE-18986:
--

[~jerryhe], could you please help review the patch? All UTs passed on my 
local machine, but I am not able to start Hadoop QA via "Submit patch"; I tried 
a couple of times.

> Remove unnecessary null check after CellUtil.cloneQualifier()
> -
>
> Key: HBASE-18986
> URL: https://issues.apache.org/jira/browse/HBASE-18986
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18986.master.000.patch
>
>
> In master branch,
> {code:title=hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java|borderStyle=solid}
> // From line 2858
> public void prepareDeleteTimestamps(Mutation mutation, Map<byte[], 
> List<Cell>> familyMap,
>   byte[] byteNow) throws IOException {
> for (Map.Entry<byte[], List<Cell>> e : familyMap.entrySet()) {
>   // ...
>   for (int i=0; i < listSize; i++) {
> // ...
> if (cell.getTimestamp() == HConstants.LATEST_TIMESTAMP && 
> CellUtil.isDeleteType(cell)) {
>   byte[] qual = CellUtil.cloneQualifier(cell);
>   if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY; // <-- here
>   ...
> {code}
> Might {{if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY;}} be removed?
> Could it be null after CellUtil.cloneQualifier()?
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java|borderStyle=solid}
> public static byte[] cloneQualifier(Cell cell){
>   byte[] output = new byte[cell.getQualifierLength()];
>   copyQualifierTo(cell, output, 0);
>   return output;
> }
> {code}
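For illustration, a minimal self-contained Java sketch (not the HBase code itself; the class and method names here are made up) of why the allocation pattern above can never yield null, which is the argument for dropping the check:

```java
// Mirrors the shape of CellUtil.cloneQualifier: allocate, copy, return.
// In Java, `new byte[n]` either returns a non-null array or throws
// (OutOfMemoryError, or NegativeArraySizeException for n < 0); it never
// evaluates to null, so a null check on the result is dead code.
public class CloneSketch {
    static byte[] cloneBytes(byte[] src) {
        byte[] output = new byte[src.length]; // never null
        System.arraycopy(src, 0, output, 0, src.length);
        return output;
    }

    public static void main(String[] args) {
        byte[] q = cloneBytes(new byte[] {1, 2, 3});
        System.out.println(q != null); // always true
        System.out.println(q.length);
    }
}
```

Note that a zero-length qualifier simply produces an empty (but non-null) array, so the fallback to HConstants.EMPTY_BYTE_ARRAY is redundant rather than wrong.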





[jira] [Updated] (HBASE-18986) Remove unnecessary null check after CellUtil.cloneQualifier()

2017-10-13 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18986:
-
Status: Open  (was: Patch Available)

> Remove unnecessary null check after CellUtil.cloneQualifier()
> -
>
> Key: HBASE-18986
> URL: https://issues.apache.org/jira/browse/HBASE-18986
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18986.master.000.patch
>





[jira] [Updated] (HBASE-18986) Remove unnecessary null check after CellUtil.cloneQualifier()

2017-10-13 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18986:
-
Status: Patch Available  (was: Open)

> Remove unnecessary null check after CellUtil.cloneQualifier()
> -
>
> Key: HBASE-18986
> URL: https://issues.apache.org/jira/browse/HBASE-18986
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18986.master.000.patch
>





[jira] [Commented] (HBASE-16868) Add a replicate_all flag to avoid misuse the namespaces and table-cfs config of replication peer

2017-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203866#comment-16203866
 ] 

Hadoop QA commented on HBASE-16868:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 5s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 14 new + 312 unchanged - 8 fixed = 
326 total (was 320) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 32 new + 321 unchanged - 1 fixed = 
353 total (was 322) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 
51s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Assigned] (HBASE-17449) Add explicit document on different timeout settings

2017-10-13 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reassigned HBASE-17449:
-

Assignee: Yu Li

> Add explicit document on different timeout settings
> ---
>
> Key: HBASE-17449
> URL: https://issues.apache.org/jira/browse/HBASE-17449
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Critical
>
> Currently we have more than one timeout settings, mainly includes:
> * hbase.rpc.timeout
> * hbase.client.operation.timeout
> * hbase.client.scanner.timeout.period
> And in latest branch-1 or master branch code, we will have two other 
> properties:
> * hbase.rpc.read.timeout
> * hbase.rpc.write.timeout
> However, in the current refguide we don't have explicit instruction on the 
> difference between these timeout settings (there are explanations for each 
> property, but no instruction on when to use which).
> In my understanding, for RPC layer timeout, or say each rpc call:
> * Scan (openScanner/next): controlled by hbase.client.scanner.timeout.period
> * Other operations:
>1. For released versions: controlled by hbase.rpc.timeout
>2. For 1.4+ versions: read operation controlled by hbase.rpc.read.timeout, 
> write operation controlled by hbase.rpc.write.timeout, or hbase.rpc.timeout 
> if the previous two are not set.
> And hbase.client.operation.timeout is a higher-level control that counts 
> retries in, i.e. the overall limit for one user call.
> After this JIRA, I hope when users ask questions like "What settings I should 
> use if I don't want to wait for more than 1 second for a single 
> put/get/scan.next call", we could give a neat answer.
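The fallback described for 1.4+ versions can be sketched in plain Java. This is a hypothetical model over a string-keyed map, not the actual HBase Configuration API; the 60s default matches the documented hbase.rpc.timeout default:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the 1.4+ per-operation rpc timeout lookup:
// the read/write-specific property wins when set, otherwise the call
// falls back to hbase.rpc.timeout (60000 ms default).
public class TimeoutFallbackSketch {
    static int rpcTimeout(Map<String, Integer> conf, boolean isRead) {
        String specific = isRead ? "hbase.rpc.read.timeout" : "hbase.rpc.write.timeout";
        Integer v = conf.get(specific);
        return v != null ? v : conf.getOrDefault("hbase.rpc.timeout", 60000);
    }

    public static void main(String[] args) {
        Map<String, Integer> conf = new HashMap<>();
        conf.put("hbase.rpc.timeout", 2000);
        conf.put("hbase.rpc.read.timeout", 1000);
        System.out.println(rpcTimeout(conf, true));  // read uses the specific value
        System.out.println(rpcTimeout(conf, false)); // write falls back to hbase.rpc.timeout
    }
}
```

In this model, hbase.client.operation.timeout would then cap the total of all such rpc attempts plus retries for a single user call.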





[jira] [Updated] (HBASE-18505) Our build/yetus personality will run tests on individual modules and then on all (i.e. 'root'). Should do one or other

2017-10-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18505:
--
   Resolution: Fixed
Fix Version/s: (was: 2.0.0-beta-1)
   (was: 1.5.0)
   2.0.0-alpha-4
   Status: Resolved  (was: Patch Available)

pushed to all active branches. thanks for the review, busbey!

> Our build/yetus personality will run tests on individual modules and then on 
> all (i.e. 'root'). Should do one or other
> --
>
> Key: HBASE-18505
> URL: https://issues.apache.org/jira/browse/HBASE-18505
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: stack
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: HBASE-18505.patch, HBASE-18505.v2.patch
>
>
> In runs at the end of HBASE-17056, a patch that touches all modules, [~busbey] 
> noticed that we were running the unit suite twice: once for each individual 
> module and then again for all/root because the patch had root changes in it. We 
> shouldn't do both if we are doing 'root', as per [~busbey].
> Here is tail of console output:
> {code}
> 
> 10:50:30 cd /testptch/hbase/hbase-spark
> 10:50:30 mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.procedure.TestProcedureAdmin.java,**/master.assignment.TestMergeTableRegionsProcedure.java,**/quotas.TestSnapshotQuotaObserverChore.java,**/quotas.TestQuotaThrottle.java,**/client.TestReplicasClient.java,**/client.locking.TestEntityLocks.java,**/security.visibility.TestVisibilityLabelsReplication.java,**/client.TestShell.java,**/master.assignment.TestAssignmentManager.java,**/replication.TestMultiSlaveReplication.java,**/coprocessor.TestRegionObserverInterface.java,**/master.balancer.TestDefaultLoadBalancer.java,**/client.TestReplicaWithCluster.java,**/io.hfile.TestLruBlockCache.java,**/master.balancer.TestFavoredStochasticLoadBalancer.java,**/regionserver.wal.TestAsyncLogRolling.java,**/master.balancer.TestStochasticLoadBalancer.java,**/client.TestMultiParallel.java,**/replication.TestReplicationWithTags.java,**/security.access.TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestReplicator.java,**/master.assignment.TestAssignmentOnRSCrash.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/quotas.TestQuotaStatusRPCs.java,**/regionserver.TestHRegionWithInMemoryFlush.java,**/master.cleaner.TestHFileCleaner.java
>  clean test -fae > /testptch/patchprocess/patch-unit-hbase-spark.txt 2>&1
> 10:55:35 Elapsed:   5m 14s
> 10:55:45 cd /testptch/hbase
> 10:55:45 mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hbase-master-patch-1 
> -DHBasePatchProcess -PrunAllTests 
> -Dtest.exclude.pattern=**/master.procedure.TestProcedureAdmin.java,**/master.assignment.TestMergeTableRegionsProcedure.java,**/quotas.TestSnapshotQuotaObserverChore.java,**/quotas.TestQuotaThrottle.java,**/client.TestReplicasClient.java,**/client.locking.TestEntityLocks.java,**/security.visibility.TestVisibilityLabelsReplication.java,**/client.TestShell.java,**/master.assignment.TestAssignmentManager.java,**/replication.TestMultiSlaveReplication.java,**/coprocessor.TestRegionObserverInterface.java,**/master.balancer.TestDefaultLoadBalancer.java,**/client.TestReplicaWithCluster.java,**/io.hfile.TestLruBlockCache.java,**/master.balancer.TestFavoredStochasticLoadBalancer.java,**/regionserver.wal.TestAsyncLogRolling.java,**/master.balancer.TestStochasticLoadBalancer.java,**/client.TestMultiParallel.java,**/replication.TestReplicationWithTags.java,**/security.access.TestCoprocessorWhitelistMasterObserver.java,**/replication.regionserver.TestReplicator.java,**/master.assignment.TestAssignmentOnRSCrash.java,**/master.procedure.TestMasterFailoverWithProcedures.java,**/quotas.TestQuotaStatusRPCs.java,**/regionserver.TestHRegionWithInMemoryFlush.java,**/master.cleaner.TestHFileCleaner.java
>  clean test -fae > /testptch/patchprocess/patch-unit-root.txt 2>&1
> 13:00:13 Build was aborted
> ...
> {code}
> I'd aborted the run because it seemed to be taking too long but on subsequent 
> examination, it was actually making progress.





[jira] [Updated] (HBASE-18505) Our build/yetus personality will run tests on individual modules and then on all (i.e. 'root'). Should do one or other

2017-10-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18505:
--
Attachment: HBASE-18505.actual.patch

attaching the actual patch pulled; it does not have the extra whitespace 
changes.

> Our build/yetus personality will run tests on individual modules and then on 
> all (i.e. 'root'). Should do one or other
> --
>
> Key: HBASE-18505
> URL: https://issues.apache.org/jira/browse/HBASE-18505
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: stack
>Assignee: Mike Drob
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 1.1.13, 2.0.0-alpha-4
>
> Attachments: HBASE-18505.actual.patch, HBASE-18505.patch, 
> HBASE-18505.v2.patch
>


