[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862511#comment-13862511
 ] 

Hadoop QA commented on HBASE-10274:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12621511/HBASE-10274-truck-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12621511

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8342//console

This message is automatically generated.

 MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
 ---

 Key: HBASE-10274
 URL: https://issues.apache.org/jira/browse/HBASE-10274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao
Assignee: chendihao
Priority: Minor
 Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-truck-v1.patch


 HBASE-6820 points out the problem but does not fix it completely.
 killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() shut down the ZooKeeperServer and therefore need to close the ZKDatabase as well.
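
 As a rough illustration of the cleanup being asked for (this is a sketch, not the attached patch; the class and method names are made up), the idea is simply to close the server's ZKDatabase right after shutting it down so its transaction-log and snapshot handles are released:
 {code}
 import java.io.IOException;
 import org.apache.zookeeper.server.ZooKeeperServer;

 public final class ZkShutdownSketch {
   private ZkShutdownSketch() {}

   /** Shut down a ZooKeeperServer and also close its ZKDatabase. */
   public static void shutdownAndClose(ZooKeeperServer zkServer) throws IOException {
     zkServer.shutdown();
     if (zkServer.getZKDatabase() != null) {
       // The extra step this issue asks for: release the database resources too.
       zkServer.getZKDatabase().close();
     }
   }
 }
 {code}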



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862512#comment-13862512
 ] 

Hadoop QA commented on HBASE-10274:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12621511/HBASE-10274-truck-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12621511

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8341//console

This message is automatically generated.

 MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
 ---

 Key: HBASE-10274
 URL: https://issues.apache.org/jira/browse/HBASE-10274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao
Assignee: chendihao
Priority: Minor
 Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-truck-v1.patch


 HBASE-6820 points out the problem but does not fix it completely.
 killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() shut down the ZooKeeperServer and therefore need to close the ZKDatabase as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-10130:
--

Assignee: Ted Yu

 TestSplitLogManager#testTaskResigned fails sometimes
 

 Key: HBASE-10130
 URL: https://issues.apache.org/jira/browse/HBASE-10130
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 10130-output.txt, 10130-v1.txt


 The test failed in 
 https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
 For testTaskResigned() :
 {code}
 int version = ZKUtil.checkExists(zkw, tasknode);
 // Could be small race here.
 if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
 to/2);
 {code}
 There was no log similar to the following (corresponding to waitForCounter() 
 call above):
 {code}
 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
 milli-secs(wait.for.ratio=[1])
 {code}
 Meaning, the version (2) retrieved corresponded to the resubmitted task; version1 retrieved the same value, leading to the assertion failure.
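
 For context, the helper being discussed polls a counter until it reaches an expected value or a timeout expires; a rough sketch of those assumed semantics (not the real test utility, signature simplified):
 {code}
 import java.util.concurrent.atomic.AtomicLong;

 final class WaitSketch {
   private WaitSketch() {}

   /** Poll 'counter' until it equals 'expected', failing after 'timeoutMs' ms. */
   static void waitForCounter(AtomicLong counter, long oldval, long expected, long timeoutMs)
       throws InterruptedException {
     long deadline = System.currentTimeMillis() + timeoutMs;
     while (System.currentTimeMillis() < deadline) {
       if (counter.get() == expected) {
         return;
       }
       Thread.sleep(10);
     }
     throw new AssertionError("counter stayed near " + oldval + " (now " + counter.get()
         + "), expected " + expected + " within " + timeoutMs + " ms");
   }
 }
 {code}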



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10130:
---

Status: Patch Available  (was: Open)

 TestSplitLogManager#testTaskResigned fails sometimes
 

 Key: HBASE-10130
 URL: https://issues.apache.org/jira/browse/HBASE-10130
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 10130-output.txt, 10130-v1.txt


 The test failed in 
 https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
 For testTaskResigned() :
 {code}
 int version = ZKUtil.checkExists(zkw, tasknode);
 // Could be small race here.
 if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
 to/2);
 {code}
 There was no log similar to the following (corresponding to waitForCounter() 
 call above):
 {code}
 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
 milli-secs(wait.for.ratio=[1])
 {code}
 Meaning, the version (2) retrieved corresponded to the resubmitted task; version1 retrieved the same value, leading to the assertion failure.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10130:
---

Attachment: 10130-v1.txt

 TestSplitLogManager#testTaskResigned fails sometimes
 

 Key: HBASE-10130
 URL: https://issues.apache.org/jira/browse/HBASE-10130
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor
 Attachments: 10130-output.txt, 10130-v1.txt


 The test failed in 
 https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
 For testTaskResigned() :
 {code}
 int version = ZKUtil.checkExists(zkw, tasknode);
 // Could be small race here.
 if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
 to/2);
 {code}
 There was no log similar to the following (corresponding to waitForCounter() 
 call above):
 {code}
 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
 milli-secs(wait.for.ratio=[1])
 {code}
 Meaning, the version (2) retrieved corresponded to the resubmitted task; version1 retrieved the same value, leading to the assertion failure.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862703#comment-13862703
 ] 

Hadoop QA commented on HBASE-10130:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621542/10130-v1.txt
  against trunk revision .
  ATTACHMENT ID: 12621542

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8343//console

This message is automatically generated.

 TestSplitLogManager#testTaskResigned fails sometimes
 

 Key: HBASE-10130
 URL: https://issues.apache.org/jira/browse/HBASE-10130
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 10130-output.txt, 10130-v1.txt


 The test failed in 
 https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
 For testTaskResigned() :
 {code}
 int version = ZKUtil.checkExists(zkw, tasknode);
 // Could be small race here.
 if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
 to/2);
 {code}
 There was no log similar to the following (corresponding to waitForCounter() 
 call above):
 {code}
 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
 milli-secs(wait.for.ratio=[1])
 {code}
 Meaning, the version (2) retrieved corresponded to the resubmitted task; version1 retrieved the same value, leading to the assertion failure.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)
chendihao created HBASE-10282:
-

 Summary: We can't assure that the first ZK server is active server 
in MiniZooKeeperCluster
 Key: HBASE-10282
 URL: https://issues.apache.org/jira/browse/HBASE-10282
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao
Priority: Minor


Thanks to https://issues.apache.org/jira/browse/HBASE-10274, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.

It's confusing when I read the code, so I think we should fix it.
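
A minimal sketch of the simplification the description suggests (the field and method names are illustrative, not the actual MiniZooKeeperCluster code): instead of indexing through an "active" pointer that is always zero, just hand back the first configured client port.
{code}
import java.util.List;

class ClientPortSketch {
  private final List<Integer> clientPortList;

  ClientPortSketch(List<Integer> clientPortList) {
    this.clientPortList = clientPortList;
  }

  /** What the description proposes: simply return the first item of the list. */
  int getClientPort() {
    return clientPortList.get(0);
  }
}
{code}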



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862720#comment-13862720
 ] 

chendihao commented on HBASE-10282:
---

Looking forward to your reply. [~stack] [~liyin] [~streamy]

 We can't assure that the first ZK server is active server in 
 MiniZooKeeperCluster
 -

 Key: HBASE-10282
 URL: https://issues.apache.org/jira/browse/HBASE-10282
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao
Priority: Minor

 Thanks to https://issues.apache.org/jira/browse/HBASE-10274, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.
 It's confusing when I read the code, so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HBASE-10282:
--

Description: 
Thanks to HBASE-10274, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.

It's confusing when I read the code, so I think we should fix it.

  was:
Thanks to https://issues.apache.org/jira/browse/HBASE-10274, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.

It's confusing when I read the code, so I think we should fix it.


 We can't assure that the first ZK server is active server in 
 MiniZooKeeperCluster
 -

 Key: HBASE-10282
 URL: https://issues.apache.org/jira/browse/HBASE-10282
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao
Priority: Minor

 Thanks to HBASE-10274, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.
 It's confusing when I read the code, so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HBASE-10282:
--

Description: 
Thanks to HBASE-3052, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.

It's confusing when I read the code, so I think we should fix it.

  was:
Thanks to HBASE-10274, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.

It's confusing when I read the code, so I think we should fix it.


 We can't assure that the first ZK server is active server in 
 MiniZooKeeperCluster
 -

 Key: HBASE-10282
 URL: https://issues.apache.org/jira/browse/HBASE-10282
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao
Priority: Minor

 Thanks to HBASE-3052, we're able to run multiple zk servers in the minicluster. However, it's confusing to keep the variable activeZKServerIndex at zero and assure that the first zk server is always the active one. I think returning the first server's client port is for testing, and it seems that we can directly return the first item of the list. Anyway, the concept of active here is not the same as zk's.
 It's confusing when I read the code, so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10154) Add a unit test for Canary tool

2014-01-05 Thread takeshi.miao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862744#comment-13862744
 ] 

takeshi.miao commented on HBASE-10154:
--

Hi [~stack], I can wait for HBASE-10147 and revise this patch afterwards, since these two patches have some conflicts that need to be resolved.

 Add a unit test for Canary tool
 ---

 Key: HBASE-10154
 URL: https://issues.apache.org/jira/browse/HBASE-10154
 Project: HBase
  Issue Type: Improvement
  Components: monitoring, test
Reporter: takeshi.miao
Assignee: takeshi.miao
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10154-trunk-v01.patch, 
 HBASE-10154-trunk-v02.patch, HBASE-10154-trunk-v03.patch


 Due to HBASE-10108, I am working on a unit test for o.h.hbase.tool.Canary to eliminate this kind of issue.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10147) Canary additions

2014-01-05 Thread takeshi.miao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862747#comment-13862747
 ] 

takeshi.miao commented on HBASE-10147:
--

Hi [~gustavoanatoly]
Leaving the legacy '_Sink.publishReadTiming_' is fine with me, because I think their purposes are somewhat different: 1) '_Sink.publishInfo_' is used to tell the user what portion of HBase the _monitor_ is going to test, 2) '_Sink.publishReadTiming_' is used to tell the user how long the test took.

[~stack] What do you think?

A reminder: the abbreviation seems to need to be revised.
{code}
...tool.Canary (Canary.java:publishInfo(103)) - Read from Region: ...
...tool.Canary (Canary.java:publishReadTiming(97)) - read from region ...
{code}

 Canary additions
 

 Key: HBASE-10147
 URL: https://issues.apache.org/jira/browse/HBASE-10147
 Project: HBase
  Issue Type: Improvement
Reporter: stack
Assignee: Gustavo Anatoly
 Attachments: HBASE-10147-v2.patch, HBASE-10147-v3.patch, 
 HBASE-10147-v4.patch, HBASE-10147.patch, HBASE-10147.patch, 
 HBASE-10147.patch, HBASE-10147.patch


 I've been using the canary to quickly identify the dodgy machine in my cluster.  It is useful for this.  What would make it better would be:
 + Rather than saying how long it took to get a region after you have gotten the region, it'd be sweet to log the regionname and the server it is on BEFORE you go to get the region.  I ask for this because, as is, I have to wait for the canary to time out, which can be a while.
 + Second ask is that when I pass -t and it fails, it says what it failed against -- what region and hopefully what server location (might be hard).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10283) Client can't connect with all the running zk servers in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)
chendihao created HBASE-10283:
-

 Summary: Client can't connect with all the running zk servers in 
MiniZooKeeperCluster
 Key: HBASE-10283
 URL: https://issues.apache.org/jira/browse/HBASE-10283
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao


Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself uses a single client port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests. But apparently we should.
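
To make the ZKConfig point concrete, here is a toy illustration (not HBase's actual ZKConfig code; the port number is just taken from the log above) of how three servers sharing the host name localhost collapse into a single quorum entry when only one client port is carried in the configuration:
{code}
import java.util.LinkedHashSet;
import java.util.Set;

public class QuorumStringSketch {
  public static void main(String[] args) {
    String[] quorumHosts = {"localhost", "localhost", "localhost"}; // three mini ZK servers
    int clientPort = 55227; // single hbase.zookeeper.property.clientPort value

    // Building "host:port" pairs from host names plus one shared port
    // de-duplicates the three servers down to a single entry.
    Set<String> connectEntries = new LinkedHashSet<>();
    for (String host : quorumHosts) {
      connectEntries.add(host + ":" + clientPort);
    }
    System.out.println(String.join(",", connectEntries)); // prints: localhost:55227
  }
}
{code}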



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10283) Client can't connect with all the running zk servers in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HBASE-10283:
--

Description: 
Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
(then it throws exceptions..)
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself uses a single client port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests. But apparently we should.

  was:
Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself uses a single client port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests. But apparently we should.


 Client can't connect with all the running zk servers in MiniZooKeeperCluster
 

 Key: HBASE-10283
 URL: https://issues.apache.org/jira/browse/HBASE-10283
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao

 Refer to HBASE-3052, multiple zk servers can run together in minicluster. The 
 problem is that client can only connect with the first zk server and if you 
 kill the first one, it fails to access the 

[jira] [Updated] (HBASE-10283) Client can't connect with all the running zk servers in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HBASE-10283:
--

Description: 
Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
(then it throws exceptions..)
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself forces each zk server to use the same port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests. But apparently we should.

  was:
Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
(then it throws exceptions..)
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself uses a single client port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests. But apparently we should.


 Client can't connect with all the running zk servers in MiniZooKeeperCluster
 

 Key: HBASE-10283
 URL: https://issues.apache.org/jira/browse/HBASE-10283
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao

 Refer to HBASE-3052, multiple zk servers can run together in minicluster. The 
 problem is that client can only connect with the 

[jira] [Updated] (HBASE-10283) Client can't connect with all the running zk servers in MiniZooKeeperCluster

2014-01-05 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HBASE-10283:
--

Description: 
Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
(then it throws exceptions..)
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself forces each zk server to use the same port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests.

  was:
Per HBASE-3052, multiple zk servers can run together in the minicluster. The problem is that the client can only connect to the first zk server, and if you kill the first one, it fails to access the cluster even though the other zk servers are serving.

It's easy to repro. First, call `TEST_UTIL.startMiniZKCluster(3)`. Second, call `killCurrentActiveZooKeeperServer` in MiniZooKeeperCluster. Then, when you construct the zk client, it can't connect to the zk cluster at all. Here is a simple log you can refer to.
{noformat}
2014-01-03 12:06:58,625 INFO  [main] zookeeper.MiniZooKeeperCluster(194): 
Started MiniZK Cluster and connect 1 ZK server on client port: 55227
..
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(264): Kill 
the current active ZK servers in the cluster on client port: 55227
2014-01-03 12:06:59,134 INFO  [main] zookeeper.MiniZooKeeperCluster(272): 
Activate a backup zk server in the cluster on client port: 55228
2014-01-03 12:06:59,366 INFO  [main-EventThread] zookeeper.ZooKeeper(434): 
Initiating client connection, connectString=localhost:55227 sessionTimeout=3000 
watcher=com.xiaomi.infra.timestamp.TimestampWatcher@a383118
(then it throws exceptions..)
{noformat}

The log is kind of problematic because it always shows Started MiniZK Cluster and connect 1 ZK server but there are actually three zk servers.

Looking deeper, we find that the client is still trying to connect to the dead zk server's port. When I print out the zkQuorum it used, only the first zk server's hostport is there, and it does not change whether you kill the server or not. The reason for this is in ZKConfig, which converts the HBase settings into ZooKeeper's. MiniZooKeeperCluster creates three servers with the same host name, localhost, and different ports. But HBase itself forces each zk server to use the same port, and ZKConfig will ignore the other two servers, which have the same host name.

MiniZooKeeperCluster works improperly until we fix this. The bug was not found because we never test whether HBase still works if we kill the active or backup zk servers in the unit tests. But apparently we should.


 Client can't connect with all the running zk servers in MiniZooKeeperCluster
 

 Key: HBASE-10283
 URL: https://issues.apache.org/jira/browse/HBASE-10283
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3
Reporter: chendihao

 Refer to HBASE-3052, multiple zk servers can run together in minicluster. The 
 problem is that client can only connect with 

[jira] [Updated] (HBASE-9426) Make custom distributed barrier procedure pluggable

2014-01-05 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated HBASE-9426:


Attachment: HBASE-9426-6.patch

The new patch added a new unit test which implements a simple/trivial user 
procedure manager (for both master and region server).

It seems that more code can be pushed down to the framework.

 Make custom distributed barrier procedure pluggable 
 

 Key: HBASE-9426
 URL: https://issues.apache.org/jira/browse/HBASE-9426
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.95.2, 0.94.11
Reporter: Richard Ding
Assignee: Richard Ding
 Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, 
 HBASE-9426-6.patch, HBASE-9426.patch.1, HBASE-9426.patch.2, HBASE-9426.patch.3


 Currently if one wants to implement a custom distributed barrier procedure 
 (e.g., distributed log roll or distributed table flush), the HBase core code 
 needs to be modified in order for the procedure to work.
 Looking into the snapshot code (especially on the region server side), most of the code to enable the procedure is generic life-cycle management (i.e., init, start, stop). We can make this part pluggable.
 Here is the proposal. Following the coprocessor example, we define two 
 properties:
 {code}
 hbase.procedure.regionserver.classes
 hbase.procedure.master.classes
 {code}
 The values for both are comma-delimited lists of classes. On the region server side, the classes implement the following interface:
 {code}
 public interface RegionServerProcedureManager {
   public void initialize(RegionServerServices rss) throws KeeperException;
   public void start();
   public void stop(boolean force) throws IOException;
   public String getProcedureName();
 }
 {code}
 On the Master side, the classes implement the interface:
 {code}
 public interface MasterProcedureManager {
   public void initialize(MasterServices master) throws KeeperException, 
 IOException, UnsupportedOperationException;
   public void stop(String why);
   public String getProcedureName();
   public void execProcedure(ProcedureDescription desc) throws IOException;
 }
 {code}
 Where the ProcedureDescription is defined as
 {code}
 message ProcedureDescription {
   required string name = 1;
   required string instance = 2;
   optional int64 creationTime = 3 [default = 0];
   message Property {
     required string tag = 1;
     optional string value = 2;
   }
   repeated Property props = 4;
 }
 {code}
 A generic API can be defined on HMaster to trigger a procedure:
 {code}
 public boolean execProcedure(ProcedureDescription desc) throws IOException;
 {code}
 _SnapshotManager_ and _RegionServerSnapshotManager_ are special examples of 
 _MasterProcedureManager_ and _RegionServerProcedureManager_. They will be 
 automatically included (users don't need to specify them in the conf file).
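
 A minimal sketch of what a pluggable manager could look like under this proposal (the class name is hypothetical and the body only shows the life-cycle shape; it assumes the proposed RegionServerProcedureManager interface above). It would be enabled by listing the class in hbase.procedure.regionserver.classes.
 {code}
 import java.io.IOException;
 import org.apache.hadoop.hbase.regionserver.RegionServerServices;
 import org.apache.zookeeper.KeeperException;

 public class LogRollProcedureManager implements RegionServerProcedureManager {
   @Override
   public void initialize(RegionServerServices rss) throws KeeperException {
     // set up ZK watchers / member coordination here
   }

   @Override
   public void start() {
     // start listening for procedure invocations
   }

   @Override
   public void stop(boolean force) throws IOException {
     // release resources; 'force' skips any graceful wait
   }

   @Override
   public String getProcedureName() {
     return "log-roll";
   }
 }
 {code}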



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8889) TestIOFencing#testFencingAroundCompaction occasionally fails

2014-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862771#comment-13862771
 ] 

Ted Yu commented on HBASE-8889:
---

{code}
java.lang.AssertionError: After compaction, does not exist: hdfs://localhost:40776/user/jenkins/hbase/data/default/tabletest/d4de4838a42b114f6d5562a3a4a890c1/family/69dd2c0e690649749ac77636b0d49698
{code}
The above assertion failure seems to be caused by:
{code}
...
2013-12-28 03:13:53,752 DEBUG [RS:0;asf002:54266-shortCompactions-1388200422935] backup.HFileArchiver(438): Finished archiving from class org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, file:hdfs://localhost:40776/user/jenkins/hbase/data/default/tabletest/d4de4838a42b114f6d5562a3a4a890c1/family/69dd2c0e690649749ac77636b0d49698, to hdfs://localhost:40776/user/jenkins/hbase/archive/data/default/tabletest/d4de4838a42b114f6d5562a3a4a890c1/family/69dd2c0e690649749ac77636b0d49698
{code}

 TestIOFencing#testFencingAroundCompaction occasionally fails
 

 Key: HBASE-8889
 URL: https://issues.apache.org/jira/browse/HBASE-8889
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor
 Attachments: TestIOFencing.tar.gz


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6232//testReport/org.apache.hadoop.hbase/TestIOFencing/testFencingAroundCompaction/
  :
 {code}
 java.lang.AssertionError: Timed out waiting for new server to open region
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.apache.hadoop.hbase.TestIOFencing.doTest(TestIOFencing.java:269)
   at 
 org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:205)
 {code}
 {code}
 2013-07-06 23:13:53,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:54,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] 
 hbase.TestIOFencing$CompactionBlockerRegion(102): allowing compactions
 2013-07-06 23:13:55,121 INFO  [pool-1-thread-1] 
 hbase.HBaseTestingUtility(911): Shutting down minicluster
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): 
 Shutting down HBase Cluster
 2013-07-06 23:13:55,121 INFO  
 [RS:0;asf002:39065-smallCompactions-1373152134716] regionserver.HStore(951): 
 Starting compaction of 2 file(s) in family of 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. into 
 tmpdir=hdfs://localhost:50140/user/jenkins/hbase/tabletest/6e62d3b24ea23160931362b60359ff03/.tmp,
  totalSize=108.4k
 ...
 2013-07-06 23:13:55,155 INFO  [RS:0;asf002:39065] 
 regionserver.HRegionServer(2476): Received CLOSE for the region: 
 6e62d3b24ea23160931362b60359ff03 ,which we are already trying to CLOSE
 2013-07-06 23:13:55,157 WARN  [RS:0;asf002:39065] 
 regionserver.HRegionServer(2414): Failed to close 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. - ignoring and 
 continuing
 org.apache.hadoop.hbase.exceptions.NotServingRegionException: The region 
 6e62d3b24ea23160931362b60359ff03 was already closing. New CLOSE request is 
 ignored.
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2479)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2409)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2011)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:903)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:337)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1131)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41)
   at 

[jira] [Commented] (HBASE-9426) Make custom distributed barrier procedure pluggable

2014-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862785#comment-13862785
 ] 

Hadoop QA commented on HBASE-9426:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621555/HBASE-9426-6.patch
  against trunk revision .
  ATTACHMENT ID: 12621555

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private ProcedureDescription(boolean noInit) { this.unknownFields = 
com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private ExecProcedureRequest(boolean noInit) { this.unknownFields = 
com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private ExecProcedureResponse(boolean noInit) { this.unknownFields = 
com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private IsProcedureDoneRequest(boolean noInit) { this.unknownFields = 
com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+private IsProcedureDoneResponse(boolean noInit) { this.unknownFields = 
com.google.protobuf.UnknownFieldSet.getDefaultInstance(); }
+   * <code>rpc IsProcedureDone(.IsProcedureDoneRequest) returns (.IsProcedureDoneResponse);</code>

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8344//console

This message is automatically generated.

 Make custom distributed barrier procedure pluggable 
 

 Key: HBASE-9426
 URL: https://issues.apache.org/jira/browse/HBASE-9426
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.95.2, 0.94.11
Reporter: Richard Ding
Assignee: Richard Ding
 Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, 
 HBASE-9426-6.patch, HBASE-9426.patch.1, HBASE-9426.patch.2, HBASE-9426.patch.3


 Currently if one wants to implement a custom distributed barrier procedure 
 (e.g., distributed log roll or distributed table flush), the HBase core code 
 needs to be modified in order for the procedure to work.
 Looking into the snapshot code (especially on the region server side), most of the code to enable the procedure is generic life-cycle management (i.e., init, start, stop). We can make this part pluggable.
 Here 

[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2014-01-05 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9117:


Attachment: HBASE-9117.03.patch

Rebased to trunk. Fixed a handful of hanging tests. Not sure what was up with 
the hadoop-1.0 profile build, works fine locally.

I'd like to start a conversation about deprecating the HTable(Configuration) 
constructors in favor of the externally managed HConnection constructors. IMHO, 
in the absence of thorough documentation, the former provide too much rope.

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, 
 HBASE-9117.02.patch, HBASE-9117.03.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.
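 As a minimal usage sketch of the pattern above (assuming the 0.96-era client API; the table name is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;

public class ConnectionUsageExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // 1. Create one heavyweight, shareable connection for the application.
    HConnection connection = HConnectionManager.createConnection(conf);
    try {
      // 2. Get a lightweight table handle from the connection ("mytable" is illustrative).
      HTableInterface table = connection.getTable("mytable");
      try {
        // ... issue Gets / Puts / Scans against the table ...
      } finally {
        // 3. Close the table handle when done with it.
        table.close();
      }
    } finally {
      // 4. Close the connection when the application no longer needs HBase.
      connection.close();
    }
  }
}
{code}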



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2014-01-05 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9117:


Status: Open  (was: Patch Available)

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, 
 HBASE-9117.02.patch, HBASE-9117.03.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2014-01-05 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9117:


Status: Patch Available  (was: Open)

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, 
 HBASE-9117.02.patch, HBASE-9117.03.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2014-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13862791#comment-13862791
 ] 

Hadoop QA commented on HBASE-9117:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621562/HBASE-9117.03.patch
  against trunk revision .
  ATTACHMENT ID: 12621562

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 120 
new or modified tests.

{color:red}-1 hadoop1.0{color}.  The patch failed to compile against the 
hadoop 1.0 profile.
Here is a snippet of the errors:
{code}[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-common: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[51,15]
 sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ReflectionUtils.java:[23,20]
 package java.nio.file does not exist
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[1110,19]
 sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[1116,21]
 sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[1121,28]
 sun.misc.Unsafe is Sun proprietary API and may be removed in a future release
--
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hbase-common: Compilation failure
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
--
Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation 
failure
at 
org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729)
at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:128)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more{code}

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8345//console

This message is automatically generated.

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, 
 HBASE-9117.02.patch, HBASE-9117.03.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10284) Build broken with svn 1.8

2014-01-05 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-10284:
-

 Summary: Build broken with svn 1.8
 Key: HBASE-10284
 URL: https://issues.apache.org/jira/browse/HBASE-10284
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl


Just upgraded my machine and found that {{svn info}} displays a Relative URL: 
line in svn 1.8.
saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10284) Build broken with svn 1.8

2014-01-05 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10284:
--

Fix Version/s: 0.99.0
   0.96.2
   0.94.16
   0.98.0

 Build broken with svn 1.8
 -

 Key: HBASE-10284
 URL: https://issues.apache.org/jira/browse/HBASE-10284
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0

 Attachments: 10284.txt


 Just upgraded my machine and found that {{svn info}} displays a Relative 
 URL: line in svn 1.8.
 saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10284) Build broken with svn 1.8

2014-01-05 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13862799#comment-13862799
 ] 

Lars Hofhansl commented on HBASE-10284:
---

[~stack], [~apurtell], FYI. Should be in all branches.

 Build broken with svn 1.8
 -

 Key: HBASE-10284
 URL: https://issues.apache.org/jira/browse/HBASE-10284
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0

 Attachments: 10284.txt


 Just upgraded my machine and found that {{svn info}} displays a Relative 
 URL: line in svn 1.8.
 saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10284) Build broken with svn 1.8

2014-01-05 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10284:
--

Attachment: 10284.txt

Simple fix. It requires the "URL:" to be at the beginning of the line, so it works with 
both old and new versions of svn.
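The actual change is in saveVersion.sh (a shell script, not shown here). Purely to illustrate the anchoring idea, a hedged Java sketch; the sample svn output below is abbreviated and illustrative:

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UrlLineMatchExample {
  // Count how many places in the text the pattern matches.
  static int count(Pattern p, String s) {
    Matcher m = p.matcher(s);
    int n = 0;
    while (m.find()) n++;
    return n;
  }

  public static void main(String[] args) {
    // Abbreviated, illustrative sample of `svn info` output under svn 1.8.
    String svnInfo = "URL: https://svn.apache.org/repos/asf/hbase/trunk\n"
        + "Relative URL: ^/hbase/trunk\n";

    // Unanchored: also matches inside the "Relative URL:" line.
    System.out.println(count(Pattern.compile("URL:", Pattern.MULTILINE), svnInfo));   // 2
    // Anchored to the start of the line: matches only the plain "URL:" line.
    System.out.println(count(Pattern.compile("^URL:", Pattern.MULTILINE), svnInfo));  // 1
  }
}
{code}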

 Build broken with svn 1.8
 -

 Key: HBASE-10284
 URL: https://issues.apache.org/jira/browse/HBASE-10284
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0

 Attachments: 10284.txt


 Just upgraded my machine and found that {{svn info}} displays a Relative 
 URL: line in svn 1.8.
 saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2014-01-05 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13862800#comment-13862800
 ] 

Nick Dimiduk commented on HBASE-9117:
-

I haven't touched Bytes.java in this patch... I think it's not me, Jenkins; 
it's you. Would one of you fine karmic Jenkins fellows ([~stack], [~apurtell], 
[~yuzhih...@gmail.com]) mind providing some insight into the state of this 
precommit build? This "java: command not found" business doesn't look healthy.

{noformat}
[PreCommit-HBASE-Build] $ /bin/bash /tmp/hudson5672971179045787159.sh
asf002.sp2.ygridcore.net
Linux asf002.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul 20 
17:42:25 UTC 2011 x86_64 GNU/Linux
/tmp/hudson5672971179045787159.sh: line 6: java: command not found
/home/hudson/tools/java/latest1.7
/tmp/hudson5672971179045787159.sh: line 8: 
/home/hudson/tools/java/latest1.7/bin/java: No such file or directory
{noformat}

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.99.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch, 
 HBASE-9117.02.patch, HBASE-9117.03.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-10284) Build broken with svn 1.8

2014-01-05 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-10284:
-

Assignee: Lars Hofhansl

 Build broken with svn 1.8
 -

 Key: HBASE-10284
 URL: https://issues.apache.org/jira/browse/HBASE-10284
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0

 Attachments: 10284.txt


 Just upgraded my machine and found that {{svn info}} displays a Relative 
 URL: line in svn 1.8.
 saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10284) Build broken with svn 1.8

2014-01-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13862807#comment-13862807
 ] 

stack commented on HBASE-10284:
---

patch lgtm.

 Build broken with svn 1.8
 -

 Key: HBASE-10284
 URL: https://issues.apache.org/jira/browse/HBASE-10284
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0

 Attachments: 10284.txt


 Just upgraded my machine and found that {{svn info}} displays a Relative 
 URL: line in svn 1.8.
 saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10000) Initiate lease recovery for outstanding WAL files at the very beginning of recovery

2014-01-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10000:
---

Component/s: wal

 Initiate lease recovery for outstanding WAL files at the very beginning of 
 recovery
 ---

 Key: HBASE-10000
 URL: https://issues.apache.org/jira/browse/HBASE-10000
 Project: HBase
  Issue Type: Improvement
  Components: wal
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.1

 Attachments: 10000-0.96-v5.txt, 10000-0.96-v6.txt, 
 10000-recover-ts-with-pb-2.txt, 10000-recover-ts-with-pb-3.txt, 
 10000-recover-ts-with-pb-4.txt, 10000-recover-ts-with-pb-5.txt, 
 10000-recover-ts-with-pb-6.txt, 10000-recover-ts-with-pb-7.txt, 
 10000-recover-ts-with-pb-8.txt, 10000-recover-ts-with-pb-8.txt, 10000-v4.txt, 
 10000-v5.txt, 10000-v6.txt


 At the beginning of recovery, master can send lease recovery requests 
 concurrently for outstanding WAL files using a thread pool.
 Each split worker would first check whether the WAL file it processes is 
 closed.
 Thanks to Nicolas Liochon and Jeffery; discussions with them gave rise to this 
 idea. 
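 As a hedged illustration of the thread-pool idea only (DistributedFileSystem.recoverLease is the real HDFS call; the surrounding structure and names are hypothetical and not taken from the attached patches):

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WalLeaseRecoveryExample {
  // Submit one lease-recovery request per outstanding WAL file; illustrative only.
  static void recoverLeases(DistributedFileSystem dfs, List<Path> walFiles)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(8); // pool size is arbitrary here
    for (final Path wal : walFiles) {
      pool.submit(() -> {
        try {
          // Ask the NameNode to begin lease recovery; returns true if the file is
          // already closed (a split worker could use this as its "is closed?" check).
          boolean closed = dfs.recoverLease(wal);
          System.out.println(wal + " closed=" + closed);
        } catch (Exception e) {
          e.printStackTrace();
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.MINUTES);
  }
}
{code}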



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)