[jira] [Updated] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-5916:
--

Attachment: HBASE-5916_92.patch

 RS restart just before master initialization makes the cluster non-operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-5916_92.patch, HBASE-5916_94.patch, 
 HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch, HBASE-5916_trunk_1.patch, 
 HBASE-5916_trunk_2.patch, HBASE-5916_trunk_3.patch, HBASE-5916_trunk_4.patch, 
 HBASE-5916_trunk_v5.patch, HBASE-5916_trunk_v6.patch, 
 HBASE-5916_trunk_v7.patch, HBASE-5916_trunk_v8.patch, 
 HBASE-5916_trunk_v9.patch, HBASE-5916v8.patch


 Consider a case where the master is getting restarted. An RS that was alive
 when the master restart started gets restarted again before the master
 initialization enables the ServerShutdownHandler:
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will
 try to expire the existing server, but the server cannot be expired because
 the ServerShutdownHandler is not yet enabled.
 This can happen when only one RS gets restarted, or when all the RSs get
 restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
 if (existingServer.getStartcode() < serverName.getStartcode()) {
   LOG.info("Triggering server recovery; existingServer " +
     existingServer + " looks stale, new server:" + serverName);
   expireServer(existingServer);
 }
 {code}
 If another RS is brought up, the cluster comes back to normalcy.
 This may be a very rare corner case.
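The race described above can be sketched in isolation: the master only expires a stale server entry once its shutdown handler is enabled, so a re-registration that arrives earlier is refused and the cluster stalls. This is an illustrative sketch with hypothetical names, not the actual ServerManager code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the registration race (hypothetical names, not HBase code).
public class StaleServerSketch {
  public final Map<String, Long> onlineServers = new HashMap<>(); // host:port -> startcode
  public boolean serverShutdownHandlerEnabled = false;            // set true late in master init

  // Returns true if the new server was registered, false if registration was refused.
  public boolean register(String hostPort, long startcode) {
    Long existing = onlineServers.get(hostPort);
    if (existing != null && existing < startcode) {
      // The old entry looks stale, but it can only be expired once the
      // ServerShutdownHandler is enabled; before that the RS is stuck retrying.
      if (!serverShutdownHandlerEnabled) {
        return false; // cluster stays non-operative until master init completes
      }
      onlineServers.remove(hostPort); // stand-in for expireServer(existingServer)
    }
    onlineServers.put(hostPort, startcode);
    return true;
  }
}
```

With a single RS, the refused registration loops until another RS checks in or master initialization finishes, which matches the "cluster non-operative" symptom.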

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-29 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284661#comment-13284661
 ] 

rajeshbabu commented on HBASE-5916:
---

Patch for 92.





[jira] [Updated] (HBASE-6088) Region splitting does not happen for a long time due to a ZK exception while creating the RS_ZK_SPLITTING node

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6088:
--

Attachment: HBASE-6088_92.patch

  Region splitting does not happen for a long time due to a ZK exception while 
 creating the RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting has not happened for a long time due to a ZK exception
 while creating the RS_ZK_SPLITTING node:
 {noformat}
 2012-05-24 01:45:41,363 INFO org.apache.zookeeper.ClientCnxn: Client session 
 timed out, have not heard from server in 26668ms for sessionid 
 0x1377a75f41d0012, closing socket connection and attempting reconnect
 2012-05-24 01:45:41,464 WARN 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient 
 ZooKeeper exception: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 {noformat}
 {noformat}
 2012-05-24 01:45:43,300 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: 
 cleanupCurrentWriter  waiting for transactions to get synced  total 189377 
 synced till here 189365
 2012-05-24 01:45:48,474 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 java.io.IOException: Failed setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:242)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
   at 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.zookeeper.KeeperException$BadVersionException: 
 KeeperErrorCode = BadVersion for 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1246)
   at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:321)
   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:659)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:811)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:747)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.transitionNodeSplitting(SplitTransaction.java:919)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:869)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   ... 5 more
 2012-05-24 01:45:48,476 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of 
 failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 {noformat}
 {noformat}
 2012-05-24 01:47:28,141 ERROR 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144 already exists and this is 
 not a retry
 2012-05-24 01:47:28,142 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 create of ephemeral /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 java.io.IOException: Failed create of ephemeral 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:865)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
   at 
 
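The log excerpts above show the failure pattern: a ZK create can take effect on the server even though the client sees a ConnectionLoss, and if the split rollback does not clean up the SPLITTING znode, every later split attempt fails with "Node ... already exists and this is not a retry". A minimal simulation of that pattern, with hypothetical names (the real flow lives in SplitTransaction and RecoverableZooKeeper):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of how a transient ZK failure can strand the SPLITTING znode.
public class SplitZnodeSketch {
  public final Set<String> znodes = new HashSet<>();

  // Simulates a create where the ZK server applied the write but the client
  // saw a ConnectionLoss before the ack arrived: the node exists, yet the
  // client treats the call as failed and rolls the split back without
  // deleting it.
  public void createLostAck(String path) {
    znodes.add(path);
  }

  // A later split attempt: a plain create fails if the node already exists,
  // mirroring "node already exists and this is not a retry".
  public boolean tryCreate(String path) {
    return znodes.add(path);
  }

  // One possible fix direction: treat a node left over from our own earlier
  // attempt as success (idempotent create), so splitting can proceed.
  public boolean tryCreateIdempotent(String path) {
    znodes.add(path);
    return true;
  }
}
```

Whether the idempotent-create approach matches the attached patches is an assumption here; the sketch only demonstrates why the plain create keeps failing.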

[jira] [Commented] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284662#comment-13284662
 ] 

Hadoop QA commented on HBASE-5916:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530015/HBASE-5916_92.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2028//console

This message is automatically generated.





[jira] [Updated] (HBASE-6088) Region splitting does not happen for a long time due to a ZK exception while creating the RS_ZK_SPLITTING node

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6088:
--

Attachment: HBASE-6088_94_3.patch


[jira] [Updated] (HBASE-6088) Region splitting does not happen for a long time due to a ZK exception while creating the RS_ZK_SPLITTING node

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6088:
--

Attachment: HBASE-6088_trunk_4.patch


[jira] [Commented] (HBASE-6088) Region splitting does not happen for a long time due to a ZK exception while creating the RS_ZK_SPLITTING node

2012-05-29 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284664#comment-13284664
 ] 

rajeshbabu commented on HBASE-6088:
---

Updated the patches as per Ted's comments.
Attached a patch for 92.


[jira] [Commented] (HBASE-6083) Modify old filter tests to use Junit4/no longer use HBaseTestCase

2012-05-29 Thread Juhani Connolly (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284679#comment-13284679
 ] 

Juhani Connolly commented on HBASE-6083:


The review board entry is here (I had assumed it would get linked automatically):

https://reviews.apache.org/r/5220/

 Modify old filter tests to use Junit4/no longer use HBaseTestCase
 -

 Key: HBASE-6083
 URL: https://issues.apache.org/jira/browse/HBASE-6083
 Project: HBase
  Issue Type: Improvement
Reporter: Juhani Connolly
Priority: Minor







[jira] [Commented] (HBASE-6065) Log for flush would append a non-sequential edit in the hlog, leading to possible data loss

2012-05-29 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284692#comment-13284692
 ] 

Lars Hofhansl commented on HBASE-6065:
--

Lastly, since this can lose data, I think this warrants a 0.94.1 release soon.

 Log for flush would append a non-sequential edit in the hlog, leading to 
 possible data loss
 ---

 Key: HBASE-6065
 URL: https://issues.apache.org/jira/browse/HBASE-6065
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-6065.patch, HBASE-6065v2.patch


 After completing a region flush, we append a log edit to the hlog file
 through HLog#completeCacheFlush.
 {code}
 public void completeCacheFlush(final byte [] encodedRegionName,
     final byte [] tableName, final long logSeqId, final boolean isMetaRegion)
 {
   ...
   HLogKey key = makeKey(encodedRegionName, tableName, logSeqId,
       System.currentTimeMillis(), HConstants.DEFAULT_CLUSTER_ID);
   ...
 }
 {code}
 When we make the hlog key, we use the seqId from the parameter, which is
 generated by HLog#startCacheFlush. Here, we may append an edit with a lower
 seq id than the last edit in the hlog file. If it is the last edit in the
 file, it may cause data loss, because:
 {code}
 HRegion#replayRecoveredEditsIfAny {
   ...
   maxSeqId = Math.abs(Long.parseLong(fileName));
   if (maxSeqId <= minSeqId) {
     String msg = "Maximum sequenceid for this log is " + maxSeqId
         + " and minimum sequenceid for the region is " + minSeqId
         + ", skipped the whole file, path=" + edits;
     LOG.debug(msg);
     continue;
   }
   ...
 }
 {code}
 {code}
 We may skip the split log file, because we use the last edit's seq id as its
 file name and consider that seqId to be the max seq id in the log file.
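The data-loss mechanism can be condensed into two rules: the recovered-edits file is named after the last edit's seq id, and replay skips the whole file when that id is at or below the region's minimum. A stale, lower flush seq id written as the last edit makes the file name understate its contents. A sketch under those stated assumptions (hypothetical helper names):

```java
// Sketch of the seq-id skip hazard described in HBASE-6065.
public class SeqIdSkipSketch {
  // Rule 1: the recovered-edits file is named by the seq id of its LAST edit.
  public static long fileName(long[] editSeqIds) {
    return editSeqIds[editSeqIds.length - 1];
  }

  // Rule 2: mirrors the replayRecoveredEditsIfAny skip check shown above.
  public static boolean skipped(long maxSeqIdFromName, long minSeqIdOfRegion) {
    return maxSeqIdFromName <= minSeqIdOfRegion;
  }
}
```

With edits 100, 101, 102 followed by a stale flush marker at seq id 50, the file is named 50; a region with minSeqId 90 then skips the whole file and edits 101 and 102 are lost.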





[jira] [Commented] (HBASE-6065) Log for flush would append a non-sequential edit in the hlog, leading to possible data loss

2012-05-29 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284690#comment-13284690
 ] 

Lars Hofhansl commented on HBASE-6065:
--

Sorry for coming in late here. I was busy with HBaseCon, the hack-a-thon the 
day after, and then traveling to Germany.

Looking back through this issue, I agree with Ram. We should never write edits 
non-sequentially into the HLogs (or we should ignore the cache flush meta edit).
I imagine there is other code that scans through an HLog that might get 
confused by non-sequential edits.






[jira] [Updated] (HBASE-5974) Scanner retry behavior with RPC timeout on next() seems incorrect

2012-05-29 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-5974:
--

Attachment: HBASE-5974_94-V2.patch

Addressed the comments.
Regarding client-side compatibility, I have fixed it with a string check on the 
RemoteException stack trace. I am not happy with this approach, but I cannot 
find another way as of now. Please give your comments.

 Scanner retry behavior with RPC timeout on next() seems incorrect
 -

 Key: HBASE-5974
 URL: https://issues.apache.org/jira/browse/HBASE-5974
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.90.7, 0.92.1, 0.94.0, 0.96.0
Reporter: Todd Lipcon
Priority: Critical
 Attachments: HBASE-5974_0.94.patch, HBASE-5974_94-V2.patch


 I'm seeing the following behavior:
 - set RPC timeout to a short value
 - call next() for some batch of rows, big enough so the client times out 
 before the result is returned
 - the HConnectionManager stuff will retry the next() call to the same server. 
 At this point, one of two things can happen: 1) the previous next() call will 
 still be processing, in which case you get a LeaseException, because it was 
 removed from the map during the processing, or 2) the next() call will 
 succeed but skip the prior batch of rows.
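A hypothetical sketch of the guard discussed in the attached patches (names such as acceptNext and the sequence-number scheme are illustrative, not the actual HBase fix): the client attaches a call sequence number to each next(), so the server can distinguish a fresh call from a timed-out retry and refuse to silently hand back the following batch.

```java
public class ScannerCallSeqSketch {
    private long expectedCallSeq = 0;

    // Server-side view: accept only the call the server expects next.
    // A retried next() after an RPC timeout carries the old sequence number
    // and is rejected (real code would throw an out-of-order exception,
    // letting the client reset the scanner instead of skipping rows).
    public boolean acceptNext(long clientCallSeq) {
        if (clientCallSeq != expectedCallSeq) {
            return false;
        }
        expectedCallSeq++; // this batch is now considered delivered
        return true;
    }
}
```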

--




[jira] [Updated] (HBASE-5974) Scanner retry behavior with RPC timeout on next() seems incorrect

2012-05-29 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5974:
-

Fix Version/s: 0.94.1

 Scanner retry behavior with RPC timeout on next() seems incorrect
 -

 Key: HBASE-5974
 URL: https://issues.apache.org/jira/browse/HBASE-5974
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.90.7, 0.92.1, 0.94.0, 0.96.0
Reporter: Todd Lipcon
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5974_0.94.patch, HBASE-5974_94-V2.patch



--




[jira] [Updated] (HBASE-5997) Fix concerns raised in HBASE-5922 related to HalfStoreFileReader

2012-05-29 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5997:
-

Fix Version/s: 0.94.1

 Fix concerns raised in HBASE-5922 related to HalfStoreFileReader
 

 Key: HBASE-5997
 URL: https://issues.apache.org/jira/browse/HBASE-5997
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6, 0.92.1, 0.94.0, 0.96.0
Reporter: ramkrishna.s.vasudevan
Assignee: Anoop Sam John
 Fix For: 0.94.1

 Attachments: HBASE-5997_0.94.patch, HBASE-5997_94 V2.patch, 
 Testcase.patch.txt


 Pls refer to the comment
 https://issues.apache.org/jira/browse/HBASE-5922?focusedCommentId=13269346&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13269346.
 Raised this issue to solve that comment. Just incase we don't forget it.

--




[jira] [Commented] (HBASE-6096) AccessController v2

2012-05-29 Thread Laxman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284807#comment-13284807
 ] 

Laxman commented on HBASE-6096:
---

Please validate my understanding and correct me.

1) Derived from HBASE-6061:
Table Owner + Table CREATE = Table ADMIN

2) Table ADMIN should be able to perform any operation on a table (in line with 
GLOBAL ADMIN):
Table ADMIN = CREATE + READ + WRITE + ADMIN PERMISSIONS

ADMIN PERMISSIONS means the ADMIN should be able to perform operations like 
GET, SCAN, PUT and DELETE without explicit READ and WRITE permissions.

3) Table WRITE permission does NOT include Table READ.

I.e., having WRITE permission alone means the user is NOT authorized to do READ 
operations like GET/SCAN.
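A minimal sketch of the permission model as I understand it above: ADMIN implies all actions, while WRITE does NOT imply READ. The enum and method names are illustrative, not the actual AccessController API.

```java
import java.util.EnumSet;
import java.util.Set;

public class PermissionSketch {
    public enum Action { READ, WRITE, CREATE, ADMIN }

    public static boolean authorized(Set<Action> granted, Action requested) {
        if (granted.contains(Action.ADMIN)) {
            return true; // table ADMIN may perform any operation on the table
        }
        return granted.contains(requested); // otherwise the exact action is required
    }

    public static void main(String[] args) {
        Set<Action> writerOnly = EnumSet.of(Action.WRITE);
        Set<Action> tableAdmin = EnumSet.of(Action.ADMIN);
        System.out.println(authorized(writerOnly, Action.READ)); // false: WRITE does not imply READ
        System.out.println(authorized(tableAdmin, Action.READ)); // true: ADMIN implies all actions
    }
}
```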



 AccessController v2
 ---

 Key: HBASE-6096
 URL: https://issues.apache.org/jira/browse/HBASE-6096
 Project: HBase
  Issue Type: Umbrella
  Components: security
Affects Versions: 0.96.0, 0.94.1
Reporter: Andrew Purtell

 Umbrella issue for iteration on the initial AccessController drop.

--




[jira] [Commented] (HBASE-6088) Region splitting not happened for long time due to ZK exception while creating RS_ZK_SPLITTING node

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284817#comment-13284817
 ] 

Zhihong Yu commented on HBASE-6088:
---

Minor comment:
{code}
+   * This test case to test the znode is deleted(if created) or not in roll 
back.
{code}
'case to test' -> 'case is to test'

  Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 {noformat}
 2012-05-24 01:45:41,363 INFO org.apache.zookeeper.ClientCnxn: Client session 
 timed out, have not heard from server in 26668ms for sessionid 
 0x1377a75f41d0012, closing socket connection and attempting reconnect
 2012-05-24 01:45:41,464 WARN 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient 
 ZooKeeper exception: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 {noformat}
 {noformat}
 2012-05-24 01:45:43,300 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: 
 cleanupCurrentWriter  waiting for transactions to get synced  total 189377 
 synced till here 189365
 2012-05-24 01:45:48,474 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 java.io.IOException: Failed setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:242)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
   at 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.zookeeper.KeeperException$BadVersionException: 
 KeeperErrorCode = BadVersion for 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1246)
   at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:321)
   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:659)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:811)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:747)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.transitionNodeSplitting(SplitTransaction.java:919)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:869)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   ... 5 more
 2012-05-24 01:45:48,476 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of 
 failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 {noformat}
 {noformat}
 2012-05-24 01:47:28,141 ERROR 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144 already exists and this is 
 not a retry
 2012-05-24 01:47:28,142 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 create of ephemeral /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 java.io.IOException: Failed create of ephemeral 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:865)
   at 
 

[jira] [Commented] (HBASE-6109) Improve RIT performances during assignment on large clusters

2012-05-29 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284843#comment-13284843
 ] 

nkeywal commented on HBASE-6109:


@stack

bq. Is this a generic locker? Should it be named for what its locking?
Renamed to LockerByString. If you have a better name...
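A hypothetical sketch of such a lock-by-string utility: a fixed array of locks, with the lock chosen by the key's hash, so two threads working on the same region name serialize while different regions proceed in parallel. The class name and the NB_CONCURRENT_LOCKS constant are illustrative, not the patch's actual code.

```java
import java.util.concurrent.locks.ReentrantLock;

public class StringLockerSketch {
    private static final int NB_CONCURRENT_LOCKS = 128;
    private final ReentrantLock[] locks = new ReentrantLock[NB_CONCURRENT_LOCKS];

    public StringLockerSketch() {
        for (int i = 0; i < locks.length; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // The same string always maps to the same lock instance; the caller
    // must unlock() the returned lock when done.
    public ReentrantLock acquireLock(String key) {
        // Mask the sign bit so the index is never negative.
        ReentrantLock lock = locks[(key.hashCode() & 0x7fffffff) % NB_CONCURRENT_LOCKS];
        lock.lock();
        return lock;
    }
}
```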

bq. NotifiableConcurrentSkipListMap needs class comment. It seems like its for 
use in a very particular circumstance. It needs explaining.
done.

bq. Does it need to be public? Only used in master package? Perhaps make it 
package private then?

The issue was:
{noformat}
  public NotifiableConcurrentSkipListMap<String, RegionState> 
getRegionsInTransition() {
    return regionsInTransition;
  }
{noformat}

But it's used in tests only, so I can actually make both package protected. 
Done.

bq. internalList is a bad name for the internal delegate instance. Is 
'delegatee' a better name than internalList?
done.


bq. We checked rit contains a name but then in a separate statement we do the 
waitForListUpdate? What if the region we are looking for is removed between the 
check and the waitForListUpdate invocation?
Actually yes, it could happen. I added a timeout, so we will now check every 
100ms.


bq. Will this log be annoying?
Removed. I added them while debugging.

This one was already there however. I kept it.
{noformat}
  public void removeClosedRegion(HRegionInfo hri) {
    if (regionsToReopen.remove(hri.getEncodedName()) != null) {
      LOG.debug("Removed region from reopening regions because it was closed");
    }
  }
{noformat}


bq. Is this true / How is it enforced?
Oops, it is not enforced (I don't know how I could enforce it), and it's also 
not true: the update will set it as well. But it's not an issue, as it's an 
AtomicLong. Comment updated.
Btw, it's tempting to:
 - change the implementation of updateTimestampToNow to use a lazySet
 - get the timestamp only once before looping on the region set.

I didn't do it in my patch, but I think it should be done. 

bq. needs space after curly parens. Sometimes you do it and sometimes you don't.
Done



 @ted

bq. It would be nice to have a test for NotifiableConcurrentSkipListMap.
Will do for final release.

bq. Since internalList is actually a Map, name the above method waitForUpdate() 
?
Done.

bq. the above should read 'A utility class to manage a set of locks. Each lock 
is identified by a String which serves'
Done

bq. It should be Locker.class
Done

bq. The constant should be named NB_CONCURRENT_LOCKS.
Done

bq.The last word should be locked.
Done

bq. It would be nice to add more about reason.
Done.

bq. Looking at batchRemove() of 
http://www.docjar.com/html/api/java/util/ArrayList.java.html around line 669, I 
don't see synchronization. Meaning, existence check of elements from nodes in 
regionsInTransition.keySet() may not be deterministic.

After looking at the Java API code, I don't think there is an issue here. The 
set we're using is documented as: "The view's iterator is a weakly consistent 
iterator that will never throw ConcurrentModificationException, and guarantees 
to traverse elements as they existed upon construction of the iterator, and may 
(but is not guaranteed to) reflect any modifications subsequent to 
construction." So we won't get any Java error. Then, if an element is 
added/removed to/from the RIT while we're doing the removeAll, it may be 
added/removed or not, but we're no less deterministic than we would be by 
adding a lock around the removeAll: the add/remove could just as well be done 
right before/after we take the lock, and we would not know it.



I'm currently checking how it works with split, then I will update it to the 
current trunk.

 Improve RIT performances during assignment on large clusters
 

 Key: HBASE-6109
 URL: https://issues.apache.org/jira/browse/HBASE-6109
 Project: HBase
  Issue Type: Improvement
  Components: master
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Attachments: 6109.v7.patch


 The main points in this patch are:
  - lowering the number of copies of the RIT list
  - lowering the number of synchronizations
  - synchronizing on a region rather than on everything
 It also contains:
  - some fixes around the RIT notification: the list was sometimes modified 
 without a corresponding 'notify'.
  - some test-flakiness corrections, actually unrelated to this patch.

--




[jira] [Updated] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6089:
--

Attachment: HBASE-6089_92.patch

 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch


 AM.regions map is accessed in parallel by SSH and master initialization, 
 leading to ConcurrentModificationException.
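A minimal, illustrative sketch of the failure mode (not the actual fix, which is in the attached patches): mutating a plain HashMap while iterating it throws ConcurrentModificationException, whereas a ConcurrentHashMap's weakly consistent iterator tolerates concurrent mutation.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeSketch {
    // Returns true if a structural modification made mid-iteration
    // causes the iteration to fail with ConcurrentModificationException.
    public static boolean mutationDuringIterationFails(Map<String, String> map) {
        map.put("a", "1");
        map.put("b", "2");
        try {
            for (String key : map.keySet()) {
                map.remove("b"); // modify the map while iterating its key set
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(mutationDuringIterationFails(new HashMap<>()));           // true
        System.out.println(mutationDuringIterationFails(new ConcurrentHashMap<>())); // false
    }
}
```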

--




[jira] [Updated] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6089:
--

Attachment: HBASE-6089_94.patch

 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch



--




[jira] [Updated] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6089:
--

Attachment: HBASE-6089_trunk.patch

 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch



--




[jira] [Updated] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6089:
--

Status: Patch Available  (was: Open)

 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0, 0.92.1
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch



--




[jira] [Commented] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284877#comment-13284877
 ] 

rajeshbabu commented on HBASE-6089:
---

In the 94 and trunk patches, along with the fix, I removed params in the 
javadoc of methods modified as part of HBASE-5916, and dead code in 
AssignmentManager:
{code}
void unassignCatalogRegions() {
  this.servers.entrySet();
}
{code}




 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch



--




[jira] [Created] (HBASE-6121) [89-fb] Fix TaskMonitor/MultiPut multithreading bug

2012-05-29 Thread Amir Shimoni (JIRA)
Amir Shimoni created HBASE-6121:
---

 Summary: [89-fb] Fix TaskMonitor/MultiPut multithreading bug
 Key: HBASE-6121
 URL: https://issues.apache.org/jira/browse/HBASE-6121
 Project: HBase
  Issue Type: Bug
Reporter: Amir Shimoni
Assignee: Amir Shimoni
Priority: Minor


We shouldn't clear an ArrayList that might be iterated on by another thread.

Specifically, multiput() calls clear() on an ArrayList (to free up some memory) 
while MultiPut.toMap is iterating over that ArrayList in a different thread 
(called from the MonitorTasks UI).
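A hypothetical sketch of one safe pattern for this hazard (illustrative only, not the actual 89-fb fix): publish an immutable snapshot for readers before the writer clears its working list, so the monitoring thread never iterates the list being cleared.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SharedListSketch {
    // Readers only ever see this immutable, fully-constructed snapshot.
    private volatile List<String> published = Collections.emptyList();

    // Writer: snapshot first, then clear the private working list.
    public void finishBatch(List<String> working) {
        published = Collections.unmodifiableList(new ArrayList<>(working));
        working.clear(); // now safe: readers hold the snapshot, not this list
    }

    // Reader (e.g. a monitoring UI) iterates the snapshot.
    public List<String> view() {
        return published;
    }
}
```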

--




[jira] [Commented] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284895#comment-13284895
 ] 

Zhihong Yu commented on HBASE-6089:
---

Patch for trunk looks good.

 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch



--




[jira] [Commented] (HBASE-5974) Scanner retry behavior with RPC timeout on next() seems incorrect

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284904#comment-13284904
 ] 

Zhihong Yu commented on HBASE-5974:
---

Should CallSequenceOutOfOrderException extend DoNotRetryIOException?
That way you wouldn't need to create a new exception below:
{code}
+    } else if (ioe instanceof CallSequenceOutOfOrderException) {
...
+      throw new DoNotRetryIOException("Reset scanner", ioe);
{code}
I think users haven't experienced this bug, and in solving it some kludge is 
introduced. We should think twice before integration.

 Scanner retry behavior with RPC timeout on next() seems incorrect
 -

 Key: HBASE-5974
 URL: https://issues.apache.org/jira/browse/HBASE-5974
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.90.7, 0.92.1, 0.94.0, 0.96.0
Reporter: Todd Lipcon
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5974_0.94.patch, HBASE-5974_94-V2.patch



--




[jira] [Updated] (HBASE-6088) Region splitting not happened for long time due to ZK exception while creating RS_ZK_SPLITTING node

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6088:
--

   Resolution: Fixed
Fix Version/s: 0.92.2
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.92, 0.94 and trunk.
Thanks for the patch Rajesh.
Thanks for the review Ted.

  Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node

[jira] [Commented] (HBASE-6088) Region splitting not happened for long time due to ZK exception while creating RS_ZK_SPLITTING node

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284907#comment-13284907
 ] 

ramkrishna.s.vasudevan commented on HBASE-6088:
---

@Ted
The committed patch addresses your last comment.

  Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:869)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   ... 5 more
 2012-05-24 01:45:48,476 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of 
 failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 {noformat}
 {noformat}
 2012-05-24 01:47:28,141 ERROR 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144 already exists and this is 
 not a retry
 2012-05-24 01:47:28,142 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 create of ephemeral /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 java.io.IOException: Failed create of ephemeral 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:865)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   at 
 

[jira] [Resolved] (HBASE-6115) NullPointerException is thrown when root and meta table regions are assigning to another RS.

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan resolved HBASE-6115.
---

  Resolution: Fixed
Assignee: ramkrishna.s.vasudevan  (was: rajeshbabu)
Hadoop Flags: Reviewed

Committed to 0.94.
Thanks for the review Stack and Ted.

 NullPointerException is thrown when root and meta table regions are assigning 
 to another RS.
 

 Key: HBASE-6115
 URL: https://issues.apache.org/jira/browse/HBASE-6115
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: rajeshbabu
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.94.1

 Attachments: HBASE-6115_0.94.patch


 Let's suppose we have two region servers, RS1 and RS2.
 If the region server (RS1) hosting the root and meta regions went down, the 
 master will assign them to the other region server, RS2. At that time a 
 NullPointerException was received.
 {code}
 2012-05-04 20:19:52,912 DEBUG 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
 Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@25de152f;
  serverName=
 2012-05-04 20:19:52,914 DEBUG 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
 Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@25de152f;
  serverName=
 2012-05-04 20:19:52,916 WARN 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Exception 
 running postOpenDeployTasks; region=1028785192
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1483)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1367)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:945)
 at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:801)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:776)
 at org.apache.hadoop.hbase.catalog.MetaEditor.put(MetaEditor.java:98)
 at 
 org.apache.hadoop.hbase.catalog.MetaEditor.putToCatalogTable(MetaEditor.java:88)
 at 
 org.apache.hadoop.hbase.catalog.MetaEditor.updateLocation(MetaEditor.java:259)
 at 
 org.apache.hadoop.hbase.catalog.MetaEditor.updateMetaLocation(MetaEditor.java:221)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1625)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:241)
 2012-05-04 20:19:52,920 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
 Closing .META.,,1.1028785192: disabling compactions &amp; flushes
 2012-05-04 20:19:52,920 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
 Updates disabled for region .META.,,1.1028785192
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5916) RS restart just before master initialization makes the cluster non-operative

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-5916:
--

   Resolution: Fixed
Fix Version/s: 0.92.2
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.92 also.
Hence resolving it.

 RS restart just before master initialization makes the cluster non-operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5916_92.patch, HBASE-5916_94.patch, 
 HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch, HBASE-5916_trunk_1.patch, 
 HBASE-5916_trunk_2.patch, HBASE-5916_trunk_3.patch, HBASE-5916_trunk_4.patch, 
 HBASE-5916_trunk_v5.patch, HBASE-5916_trunk_v6.patch, 
 HBASE-5916_trunk_v7.patch, HBASE-5916_trunk_v8.patch, 
 HBASE-5916_trunk_v9.patch, HBASE-5916v8.patch


 Consider a case where the master is getting restarted. An RS that was alive 
 when the master restart started gets restarted before the master initializes 
 the ServerShutdownHandler:
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will 
 try to expire the server, but the server cannot be expired because the 
 serverShutdownHandler is not yet enabled.
 This case may happen when only one RS is restarted, or when all the RSs are 
 restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
 if (existingServer.getStartcode() < serverName.getStartcode()) {
   LOG.info("Triggering server recovery; existingServer " +
     existingServer + " looks stale, new server:" + serverName);
   expireServer(existingServer);
 }
 {code}
 If another RS is brought up, the cluster comes back to normalcy.
 This may be a very rare corner case.
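One possible way out of the wedge, sketched below with invented names (this is an illustration of the idea, not the committed patch): instead of dropping the expiry request while the ServerShutdownHandler is disabled, queue the stale server and expire it once the handler comes up.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of deferring expiry until the shutdown handler
// is enabled; class, field, and method names are invented for illustration.
public class DeferredExpirySketch {
    private boolean serverShutdownHandlerEnabled = false;
    private final Queue<String> deadServersPendingExpiry = new ArrayDeque<>();
    private final Queue<String> expired = new ArrayDeque<>();

    void expireServer(String server) {
        if (!serverShutdownHandlerEnabled) {
            // The reported behavior drops this case, wedging the cluster;
            // the sketch remembers the server instead.
            deadServersPendingExpiry.add(server);
            return;
        }
        expired.add(server);
    }

    void enableServerShutdownHandler() {
        serverShutdownHandlerEnabled = true;
        String s;
        // Drain everything that died before the handler was ready.
        while ((s = deadServersPendingExpiry.poll()) != null) {
            expireServer(s);
        }
    }

    boolean isExpired(String server) {
        return expired.contains(server);
    }

    public static void main(String[] args) {
        DeferredExpirySketch master = new DeferredExpirySketch();
        master.expireServer("rs1,60020,100"); // RS re-registers too early
        master.enableServerShutdownHandler();
        System.out.println(master.isExpired("rs1,60020,100"));
    }
}
```

The point is only that the expiry request survives the window before master initialization completes, so a lone restarted RS no longer wedges the cluster.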





[jira] [Assigned] (HBASE-5974) Scanner retry behavior with RPC timeout on next() seems incorrect

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reassigned HBASE-5974:
-

Assignee: Anoop Sam John

 Scanner retry behavior with RPC timeout on next() seems incorrect
 -

 Key: HBASE-5974
 URL: https://issues.apache.org/jira/browse/HBASE-5974
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.90.7, 0.92.1, 0.94.0, 0.96.0
Reporter: Todd Lipcon
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5974_0.94.patch, HBASE-5974_94-V2.patch


 I'm seeing the following behavior:
 - set RPC timeout to a short value
 - call next() for some batch of rows, big enough so the client times out 
 before the result is returned
 - the HConnectionManager stuff will retry the next() call to the same server. 
 At this point, one of two things can happen: 1) the previous next() call will 
 still be processing, in which case you get a LeaseException, because it was 
 removed from the map during the processing, or 2) the next() call will 
 succeed but skip the prior batch of rows.
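One way to reason about the fix is a client-side call sequence number, so that a retried next() replays the batch the client never received instead of skipping it. This is only an illustrative sketch with invented names, not HBase's actual RPC code:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative scanner with a call sequence guard: a retry of the same
// call sequence replays the last batch instead of advancing the cursor.
public class ScannerRetrySketch {
    private final List<String> rows;
    private int cursor = 0;
    private long expectedSeq = 0;
    private List<String> lastBatch = Arrays.asList();

    ScannerRetrySketch(List<String> rows) {
        this.rows = rows;
    }

    List<String> next(long callSeq, int batchSize) {
        if (callSeq == expectedSeq - 1) {
            // Retry of the previous call: the client never saw lastBatch,
            // so return it again rather than skipping it.
            return lastBatch;
        }
        if (callSeq != expectedSeq) {
            throw new IllegalStateException("out-of-order scanner call");
        }
        int end = Math.min(cursor + batchSize, rows.size());
        lastBatch = rows.subList(cursor, end);
        cursor = end;
        expectedSeq++;
        return lastBatch;
    }

    public static void main(String[] args) {
        ScannerRetrySketch s =
            new ScannerRetrySketch(Arrays.asList("r1", "r2", "r3", "r4"));
        System.out.println(s.next(0, 2)); // first batch
        System.out.println(s.next(0, 2)); // timed-out client retries: same batch
        System.out.println(s.next(1, 2)); // progress resumes
    }
}
```

With this scheme, case 2) above cannot happen: a retry of call N can only ever observe batch N again, never silently skip it.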





[jira] [Updated] (HBASE-6095) ActiveMasterManager NullPointerException

2012-05-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6095:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Applied to the 0.92 and 0.94 branches.  Thanks for the patch, Jimmy.

 ActiveMasterManager NullPointerException
 

 Key: HBASE-6095
 URL: https://issues.apache.org/jira/browse/HBASE-6095
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.94.1

 Attachments: hbase-6095.patch


 This is for 0.94 and 0.92; trunk doesn't have the issue.
 {code}
   byte [] bytes =
 ZKUtil.getDataAndWatch(watcher, watcher.masterAddressZNode);
   // TODO: redo this to make it atomic (only added for tests)
   ServerName master = ServerName.parseVersionedServerName(bytes);
 {code}
 bytes could be null.
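A minimal illustration of the missing null check, with the ZK read and the parser stubbed out (the stand-in methods below are hypothetical; only the guard itself is the point):

```java
// Sketch of guarding the NPE described above: treat a missing master
// znode (null data) as "no master known" instead of parsing null.
public class MasterAddressSketch {
    // Stand-in for ZKUtil.getDataAndWatch(); may return null when the
    // znode does not exist or holds no data.
    static byte[] getDataAndWatch(boolean znodeExists) {
        return znodeExists
            ? "host.example.com,60000,1338300000000".getBytes()
            : null;
    }

    // Stand-in for ServerName.parseVersionedServerName() with the guard added.
    static String parseVersionedServerName(byte[] bytes) {
        if (bytes == null) {
            // Without this check the parser would throw a NullPointerException.
            return null;
        }
        return new String(bytes);
    }

    public static void main(String[] args) {
        // znode absent: yields null rather than throwing
        System.out.println(parseVersionedServerName(getDataAndWatch(false)));
        System.out.println(parseVersionedServerName(getDataAndWatch(true)));
    }
}
```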





[jira] [Work started] (HBASE-6107) Distributed log splitting hangs even there is no task under /hbase/splitlog

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-6107 started by Jimmy Xiang.

 Distributed log splitting hangs even there is no task under /hbase/splitlog
 ---

 Key: HBASE-6107
 URL: https://issues.apache.org/jira/browse/HBASE-6107
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-6107.patch, hbase-6107_v3-new.patch, 
 hbase_6107_v2.patch, hbase_6107_v3.patch


 Sometimes, the master web UI shows that distributed log splitting is still in 
 progress, waiting for one last task to be done. However, in ZK there is no 
 task under /hbase/splitlog at all.





[jira] [Updated] (HBASE-6107) Distributed log splitting hangs even there is no task under /hbase/splitlog

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6107:
---

Status: Open  (was: Patch Available)

 Distributed log splitting hangs even there is no task under /hbase/splitlog
 ---

 Key: HBASE-6107
 URL: https://issues.apache.org/jira/browse/HBASE-6107
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-6107.patch, hbase-6107_v3-new.patch, 
 hbase_6107_v2.patch, hbase_6107_v3.patch


 Sometimes, the master web UI shows that distributed log splitting is still in 
 progress, waiting for one last task to be done. However, in ZK there is no 
 task under /hbase/splitlog at all.





[jira] [Updated] (HBASE-6107) Distributed log splitting hangs even there is no task under /hbase/splitlog

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6107:
---

Status: Patch Available  (was: In Progress)

try hadoopqa again.

 Distributed log splitting hangs even there is no task under /hbase/splitlog
 ---

 Key: HBASE-6107
 URL: https://issues.apache.org/jira/browse/HBASE-6107
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-6107.patch, hbase-6107_v3-new.patch, 
 hbase_6107_v2.patch, hbase_6107_v3.patch


 Sometimes, the master web UI shows that distributed log splitting is still in 
 progress, waiting for one last task to be done. However, in ZK there is no 
 task under /hbase/splitlog at all.





[jira] [Commented] (HBASE-6107) Distributed log splitting hangs even there is no task under /hbase/splitlog

2012-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284927#comment-13284927
 ] 

stack commented on HBASE-6107:
--

+1 on patch.  Will wait on hadoopqa before committing.

 Distributed log splitting hangs even there is no task under /hbase/splitlog
 ---

 Key: HBASE-6107
 URL: https://issues.apache.org/jira/browse/HBASE-6107
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-6107.patch, hbase-6107_v3-new.patch, 
 hbase_6107_v2.patch, hbase_6107_v3.patch


 Sometimes, the master web UI shows that distributed log splitting is still in 
 progress, waiting for one last task to be done. However, in ZK there is no 
 task under /hbase/splitlog at all.





[jira] [Created] (HBASE-6122) Backup master does not become Active master after ZK exception

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-6122:
-

 Summary: Backup master does not become Active master after ZK 
exception
 Key: HBASE-6122
 URL: https://issues.apache.org/jira/browse/HBASE-6122
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.96.0, 0.94.1


- The active master gets a ZK session expiry exception.
- The backup master becomes active.
- The previous active master retries and becomes the backup master.
Now, when the new active master goes down and the current backup master comes 
up, it goes down again with the ZK expiry exception it got in the first step.

{code}
if (abortNow(msg, t)) {
  if (t != null) LOG.fatal(msg, t);
  else LOG.fatal(msg);
  this.abort = true;
  stop("Aborting");
}
{code}
In ActiveMasterManager.blockUntilBecomingActiveMaster we wait until this 
backup master becomes active.
{code}
synchronized (this.clusterHasActiveMaster) {
  while (this.clusterHasActiveMaster.get() && !this.master.isStopped()) {
    try {
      this.clusterHasActiveMaster.wait();
    } catch (InterruptedException e) {
      // We expect to be interrupted when a master dies, will fall out if so
      LOG.debug("Interrupted waiting for master to die", e);
    }
  }
  if (!clusterStatusTracker.isClusterUp()) {
    this.master.stop("Cluster went down before this master became active");
  }
  if (this.master.isStopped()) {
    return cleanSetOfActiveMaster;
  }
  // Try to become active master again now that there is no active master
  blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
}
return cleanSetOfActiveMaster;
{code}
When the backup master (it is in backup mode because it got the ZK exception) 
once again tries to become active, we discard the return value of
{code}
// Try to become active master again now that there is no active master
blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
{code}
and instead return 'cleanSetOfActiveMaster', which was previously false.
Because of this, instead of becoming active again, the backup master goes down 
in the abort() code.  Thanks to Gopi, my colleague, for reporting this issue.
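The dropped return value can be seen in a toy model. The sketch below is a hypothetical simplification (class and method names are invented; only the control flow mirrors ActiveMasterManager): propagating the recursive attempt's result is what lets the backup master eventually report success.

```java
// Toy model of the retry logic: the recursive attempt's result must be
// returned, not discarded, or the caller sees the stale 'false' from the
// first failed attempt and aborts.
public class ActiveMasterRetrySketch {
    private int failuresBeforeSuccess;

    ActiveMasterRetrySketch(int failuresBeforeSuccess) {
        this.failuresBeforeSuccess = failuresBeforeSuccess;
    }

    // Stand-in for blockUntilBecomingActiveMaster(): fails a few times
    // (e.g. ZK session expiry), then succeeds once no active master remains.
    boolean blockUntilBecomingActiveMaster() {
        boolean cleanSetOfActiveMaster = tryBecomeActive();
        if (!cleanSetOfActiveMaster) {
            // Buggy variant drops the retry's result:
            //   blockUntilBecomingActiveMaster();
            //   return cleanSetOfActiveMaster;   // still false!
            // Fixed variant propagates it:
            return blockUntilBecomingActiveMaster();
        }
        return cleanSetOfActiveMaster;
    }

    private boolean tryBecomeActive() {
        if (failuresBeforeSuccess > 0) {
            failuresBeforeSuccess--;
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        ActiveMasterRetrySketch m = new ActiveMasterRetrySketch(2);
        System.out.println(m.blockUntilBecomingActiveMaster());
    }
}
```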





[jira] [Updated] (HBASE-6108) Use HRegion.closeHRegion instead of HRegion.close() and HRegion.getLog().close()

2012-05-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6108:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for the patch Gregory.

 Use HRegion.closeHRegion instead of HRegion.close() and 
 HRegion.getLog().close()
 

 Key: HBASE-6108
 URL: https://issues.apache.org/jira/browse/HBASE-6108
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-6108.patch


 There are a bunch of places in the code like this:
 region.close();
 region.getLog().closeAndDelete();
 Instead of the better:
 HRegion.closeHRegion(region);
 We should change these for a few reasons:
 1) If we ever need to change the implementation, it's easier to change in one 
 place.
 2) closeHRegion properly checks for nulls.  There are a few places where this 
 could make a difference; for example, in TestOpenedRegionHandler.java an 
 exception can be thrown before the region is assigned, and thus 
 region.close() could throw an NPE.  closeHRegion avoids this issue.
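A stub-level sketch of reason 2): the helper tolerates a null region and a null log, so cleanup paths that fail early cannot NPE. HRegion and HLog are stand-ins here, not the real classes:

```java
// Sketch of the null-safe close pattern the issue advocates; HRegion and
// HLog are minimal stubs standing in for org.apache.hadoop.hbase classes.
public class CloseHRegionSketch {
    static class HLog {
        boolean closed;
        void closeAndDelete() { closed = true; }
    }

    static class HRegion {
        final HLog log = new HLog();
        boolean closed;
        HLog getLog() { return log; }
        void close() { closed = true; }
    }

    // Mirrors the intent of HRegion.closeHRegion(region): checks for nulls
    // so a test that failed before assigning its region cannot NPE in cleanup.
    static void closeHRegion(HRegion r) {
        if (r == null) return;
        r.close();
        if (r.getLog() != null) {
            r.getLog().closeAndDelete();
        }
    }

    public static void main(String[] args) {
        closeHRegion(null); // the unassigned-region path: no NPE
        HRegion region = new HRegion();
        closeHRegion(region);
        System.out.println(region.closed && region.log.closed);
    }
}
```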





[jira] [Updated] (HBASE-6122) Backup master does not become Active master after ZK exception

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6122:
--

Fix Version/s: 0.92.2

 Backup master does not become Active master after ZK exception
 --

 Key: HBASE-6122
 URL: https://issues.apache.org/jira/browse/HBASE-6122
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.92.2, 0.96.0, 0.94.1


 - The active master gets a ZK session expiry exception.
 - The backup master becomes active.
 - The previous active master retries and becomes the backup master.
 Now, when the new active master goes down and the current backup master comes 
 up, it goes down again with the ZK expiry exception it got in the first step.
 {code}
 if (abortNow(msg, t)) {
   if (t != null) LOG.fatal(msg, t);
   else LOG.fatal(msg);
   this.abort = true;
   stop("Aborting");
 }
 {code}
 In ActiveMasterManager.blockUntilBecomingActiveMaster we wait until this 
 backup master becomes active.
 {code}
 synchronized (this.clusterHasActiveMaster) {
   while (this.clusterHasActiveMaster.get() && !this.master.isStopped()) {
     try {
       this.clusterHasActiveMaster.wait();
     } catch (InterruptedException e) {
       // We expect to be interrupted when a master dies, will fall out if so
       LOG.debug("Interrupted waiting for master to die", e);
     }
   }
   if (!clusterStatusTracker.isClusterUp()) {
     this.master.stop("Cluster went down before this master became active");
   }
   if (this.master.isStopped()) {
     return cleanSetOfActiveMaster;
   }
   // Try to become active master again now that there is no active master
   blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 }
 return cleanSetOfActiveMaster;
 {code}
 When the backup master (it is in backup mode because it got the ZK exception) 
 once again tries to become active, we discard the return value of
 {code}
 // Try to become active master again now that there is no active master
 blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 {code}
 and instead return 'cleanSetOfActiveMaster', which was previously false.
 Because of this, instead of becoming active again, the backup master goes 
 down in the abort() code.  Thanks to Gopi, my colleague, for reporting this 
 issue.





[jira] [Commented] (HBASE-6083) Modify old filter tests to use Junit4/no longer use HBaseTestCase

2012-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284942#comment-13284942
 ] 

stack commented on HBASE-6083:
--

Here is what I put up in rb in case it doesn't make it over here:

bq. Looks great.   Please attach the patch to the issue so I can commit.  I 
think rb did not update the issue because jira was down for a good while (I 
think that's the cause -- it looks like you set the right fields in rb).


 Modify old filter tests to use Junit4/no longer use HBaseTestCase
 -

 Key: HBASE-6083
 URL: https://issues.apache.org/jira/browse/HBASE-6083
 Project: HBase
  Issue Type: Improvement
Reporter: Juhani Connolly
Priority: Minor







[jira] [Updated] (HBASE-6122) Backup master does not become Active master after ZK exception

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6122:
--

Attachment: HBASE-6122_0.94.patch

 Backup master does not become Active master after ZK exception
 --

 Key: HBASE-6122
 URL: https://issues.apache.org/jira/browse/HBASE-6122
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6122_0.94.patch


 - The active master gets a ZK session expiry exception.
 - The backup master becomes active.
 - The previous active master retries and becomes the backup master.
 Now, when the new active master goes down and the current backup master comes 
 up, it goes down again with the ZK expiry exception it got in the first step.
 {code}
 if (abortNow(msg, t)) {
   if (t != null) LOG.fatal(msg, t);
   else LOG.fatal(msg);
   this.abort = true;
   stop("Aborting");
 }
 {code}
 In ActiveMasterManager.blockUntilBecomingActiveMaster we wait until this 
 backup master becomes active.
 {code}
 synchronized (this.clusterHasActiveMaster) {
   while (this.clusterHasActiveMaster.get() && !this.master.isStopped()) {
     try {
       this.clusterHasActiveMaster.wait();
     } catch (InterruptedException e) {
       // We expect to be interrupted when a master dies, will fall out if so
       LOG.debug("Interrupted waiting for master to die", e);
     }
   }
   if (!clusterStatusTracker.isClusterUp()) {
     this.master.stop("Cluster went down before this master became active");
   }
   if (this.master.isStopped()) {
     return cleanSetOfActiveMaster;
   }
   // Try to become active master again now that there is no active master
   blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 }
 return cleanSetOfActiveMaster;
 {code}
 When the backup master (it is in backup mode because it got the ZK exception) 
 once again tries to become active, we discard the return value of
 {code}
 // Try to become active master again now that there is no active master
 blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 {code}
 and instead return 'cleanSetOfActiveMaster', which was previously false.
 Because of this, instead of becoming active again, the backup master goes 
 down in the abort() code.  Thanks to Gopi, my colleague, for reporting this 
 issue.





[jira] [Updated] (HBASE-6122) Backup master does not become Active master after ZK exception

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-6122:
--

Attachment: HBASE-6122_0.92.patch

I found some changes in the trunk code, so I am not sure whether this is 
applicable to trunk.  Attached patches for 0.94 and 0.92.

 Backup master does not become Active master after ZK exception
 --

 Key: HBASE-6122
 URL: https://issues.apache.org/jira/browse/HBASE-6122
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6122_0.92.patch, HBASE-6122_0.94.patch


 - The active master gets a ZK session expiry exception.
 - The backup master becomes active.
 - The previous active master retries and becomes the backup master.
 Now, when the new active master goes down and the current backup master comes 
 up, it goes down again with the ZK expiry exception it got in the first step.
 {code}
 if (abortNow(msg, t)) {
   if (t != null) LOG.fatal(msg, t);
   else LOG.fatal(msg);
   this.abort = true;
   stop("Aborting");
 }
 {code}
 In ActiveMasterManager.blockUntilBecomingActiveMaster we wait until this 
 backup master becomes active.
 {code}
 synchronized (this.clusterHasActiveMaster) {
   while (this.clusterHasActiveMaster.get() && !this.master.isStopped()) {
     try {
       this.clusterHasActiveMaster.wait();
     } catch (InterruptedException e) {
       // We expect to be interrupted when a master dies, will fall out if so
       LOG.debug("Interrupted waiting for master to die", e);
     }
   }
   if (!clusterStatusTracker.isClusterUp()) {
     this.master.stop("Cluster went down before this master became active");
   }
   if (this.master.isStopped()) {
     return cleanSetOfActiveMaster;
   }
   // Try to become active master again now that there is no active master
   blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 }
 return cleanSetOfActiveMaster;
 {code}
 When the backup master (it is in backup mode because it got the ZK exception) 
 once again tries to become active, we discard the return value of
 {code}
 // Try to become active master again now that there is no active master
 blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 {code}
 and instead return 'cleanSetOfActiveMaster', which was previously false.
 Because of this, instead of becoming active again, the backup master goes 
 down in the abort() code.  Thanks to Gopi, my colleague, for reporting this 
 issue.





[jira] [Commented] (HBASE-2396) Coprocessors: Server side dynamic scripting language execution

2012-05-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284955#comment-13284955
 ] 

Andrew Purtell commented on HBASE-2396:
---

Coprocessors provide two extension surfaces, Observers (triggers) and Endpoints 
(stored procedures). We can provide access to both in a first cut via a system 
coprocessor that manages scriptlet execution. Consider:

* Ruby embedding by default, since the JRuby jar is already available.

* JavaScript embedding, since this will be a very popular request if it is not 
available as an option, and since packaging Rhino into the scripting 
coprocessor artifact with Maven should be easy enough.

* Support storing scriptlets for trigger-style execution at table, 
column[:qualifier], or row scope. 
** User should be able to specify if the scriptlet should run at read time or 
write time or both.
** Store scriptlet state in a metacolumn, similar to HBASE-2893, but privately 
managed to punt on issues of cross coprocessor dependencies and API invocation.
** The scriptlet execution host can wrap every Get or Scan with a custom filter 
that transforms or generates values according to entries in the metacolumn 
scanned internally at setup time. Implies that wherever the user specifies the 
location of a generator instead of a real value we must still store a 
placeholder.
** We also need to consider how this wrapper will interact with the 
AccessController's RegionScanner wrapper: Because the AccessController is first 
in any CP chain by priority it will already be filtering out placeholders the 
current subject doesn't have read or write access to, but how to handle EXEC 
permission may need some thought.

* Restrict scriptlets as observers to DML operations.
** We can expose a callback interface in the scripting environment on region 
operations with a small and familiar Document Object Model. Set up the DOM in 
the scripting environment(s) when the scriptlet host initializes. Call up into 
the DOM from Observer hooks at the Java level. See [JRuby 
embedding|https://github.com/jruby/jruby/wiki/RedBridge] and [Rhino 
embedding|http://www.mozilla.org/rhino/tutorial.html].


* Provide the Endpoint interface Stack mentioned in the above comment.
** The first cut Exec API could be {{String execute(String language, String 
script)}}
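The proposed first-cut Exec API can be sketched as a small dispatcher. This is a hypothetical illustration, not actual HBase coprocessor code: `ScriptletHost` and `ScriptletEngine` are invented names, and a toy "echo" engine stands in for a real JRuby or Rhino embedding.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the first-cut Exec API proposed above:
// String execute(String language, String script).
public class ScriptletHost {

  /** Minimal pluggable engine abstraction; JRuby/Rhino would implement this. */
  interface ScriptletEngine {
    String eval(String script);
  }

  private final Map<String, ScriptletEngine> engines = new HashMap<>();

  public void register(String language, ScriptletEngine engine) {
    engines.put(language, engine);
  }

  /** Dispatch a scriptlet to the engine registered for its language. */
  public String execute(String language, String script) {
    ScriptletEngine engine = engines.get(language);
    if (engine == null) {
      throw new IllegalArgumentException("No engine for language: " + language);
    }
    return engine.eval(script);
  }

  public static void main(String[] args) {
    ScriptletHost host = new ScriptletHost();
    // A toy "echo" engine stands in for a real scripting runtime.
    host.register("echo", script -> "echo:" + script);
    System.out.println(host.execute("echo", "put 'row1'")); // prints echo:put 'row1'
  }
}
```

A real host would register engines at coprocessor startup and route Endpoint invocations through execute().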

 Coprocessors: Server side dynamic scripting language execution
 --

 Key: HBASE-2396
 URL: https://issues.apache.org/jira/browse/HBASE-2396
 Project: HBase
  Issue Type: New Feature
  Components: coprocessors
Reporter: Todd Lipcon
Assignee: Andrew Purtell

 There are a lot of use cases where users want to perform some simple 
 operations on the region server. For example, a row may represent a Set and 
 users want append/search/remove style operations within the row without 
 having to perform the work on the client side. One possible solution is to 
 embed a small language something like PL/SQL (not necessarily in syntax) 
 which restricts users to a safe set of operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5974) Scanner retry behavior with RPC timeout on next() seems incorrect

2012-05-29 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284957#comment-13284957
 ] 

Lars Hofhansl commented on HBASE-5974:
--

Last patch looks good to me. The string check on the RemoteException is not 
ideal, but I cannot think of anything better; somebody with more knowledge 
about our RPC should chime in.
Is the RegionScannerHolder needed? Why can't RegionScannerImpl hold the 
sequence number itself, with get/setSeq methods added to RegionScanner?


 Scanner retry behavior with RPC timeout on next() seems incorrect
 -

 Key: HBASE-5974
 URL: https://issues.apache.org/jira/browse/HBASE-5974
 Project: HBase
  Issue Type: Bug
  Components: client, regionserver
Affects Versions: 0.90.7, 0.92.1, 0.94.0, 0.96.0
Reporter: Todd Lipcon
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.94.1

 Attachments: HBASE-5974_0.94.patch, HBASE-5974_94-V2.patch


 I'm seeing the following behavior:
 - set RPC timeout to a short value
 - call next() for some batch of rows, big enough so the client times out 
 before the result is returned
 - the HConnectionManager stuff will retry the next() call to the same server. 
 At this point, one of two things can happen: 1) the previous next() call will 
 still be processing, in which case you get a LeaseException, because it was 
 removed from the map during the processing, or 2) the next() call will 
 succeed but skip the prior batch of rows.
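The sequence-number idea discussed in the comment above can be modeled in a few lines. This is a hedged sketch, not the actual HBase implementation: the client sends a call sequence id with each next(), a retried call repeats the id, and the server rejects the mismatch instead of silently returning the following batch. Class and exception names are illustrative.

```java
import java.util.List;

// Toy model of a server-side scanner that guards next() with a call sequence
// number so a client retry after an RPC timeout cannot skip a batch.
public class SeqScanner {
  static class OutOfOrderScannerException extends RuntimeException {
    OutOfOrderScannerException(String msg) { super(msg); }
  }

  private final List<List<String>> batches;
  private long nextCallSeq = 0;
  private int cursor = 0;

  SeqScanner(List<List<String>> batches) { this.batches = batches; }

  /** Only advance the scanner when the expected sequence number arrives. */
  public List<String> next(long callSeq) {
    if (callSeq != nextCallSeq) {
      // A retried (duplicate) call lands here; the client can then reset
      // the scanner instead of losing the prior batch of rows.
      throw new OutOfOrderScannerException(
          "Expected seq " + nextCallSeq + " but got " + callSeq);
    }
    nextCallSeq++;
    return cursor < batches.size() ? batches.get(cursor++) : List.of();
  }
}
```

With this guard, the retry scenario from the description raises an explicit error rather than returning the wrong batch.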

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5161) Compaction algorithm should prioritize reference files

2012-05-29 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5161:
-


This is an old existing issue with no patch available. Unassigning from 0.94 
for now.

 Compaction algorithm should prioritize reference files
 --

 Key: HBASE-5161
 URL: https://issues.apache.org/jira/browse/HBASE-5161
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Priority: Critical
 Fix For: 0.92.1


 I got myself into a state where my table was un-splittable as long as the 
 insert load was coming in. Emergency flushes because of the low memory 
 barrier don't check the number of store files so it never blocks, to a point 
 where I had in one case 45 store files and the compactions were almost never 
 done on the reference files (had 15 of them, went down by one in 20 minutes). 
 Since you can't split regions with reference files, that region couldn't 
 split and was doomed to just get more store files until the load stopped.
 Marking this as a minor issue, what we really need is a better pushback 
 mechanism but not prioritizing reference files seems wrong.
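The prioritization the report asks for amounts to sorting reference files ahead of ordinary store files when picking compaction candidates, so the region becomes splittable again sooner. A minimal sketch, with a stand-in StoreFile model rather than HBase's actual class:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CompactionSelection {
  static class StoreFile {
    final String name;
    final boolean isReference; // half-file left behind by a region split
    StoreFile(String name, boolean isReference) {
      this.name = name;
      this.isReference = isReference;
    }
  }

  /** Order candidates so reference files are compacted first. */
  public static List<StoreFile> prioritize(List<StoreFile> candidates) {
    List<StoreFile> sorted = new ArrayList<>(candidates);
    // false sorts before true, so files where !isReference is false
    // (i.e. references) come first.
    sorted.sort(Comparator.comparing((StoreFile f) -> !f.isReference));
    return sorted;
  }
}
```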

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-5161) Compaction algorithm should prioritize reference files

2012-05-29 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-5161:
-

Fix Version/s: (was: 0.94.1)

 Compaction algorithm should prioritize reference files
 --

 Key: HBASE-5161
 URL: https://issues.apache.org/jira/browse/HBASE-5161
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
Priority: Critical
 Fix For: 0.92.1


 I got myself into a state where my table was un-splittable as long as the 
 insert load was coming in. Emergency flushes because of the low memory 
 barrier don't check the number of store files so it never blocks, to a point 
 where I had in one case 45 store files and the compactions were almost never 
 done on the reference files (had 15 of them, went down by one in 20 minutes). 
 Since you can't split regions with reference files, that region couldn't 
 split and was doomed to just get more store files until the load stopped.
 Marking this as a minor issue, what we really need is a better pushback 
 mechanism but not prioritizing reference files seems wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6088) Region splitting not happened for long time due to ZK exception while creating RS_ZK_SPLITTING node

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284960#comment-13284960
 ] 

Hudson commented on HBASE-6088:
---

Integrated in HBase-TRUNK #2943 (See 
[https://builds.apache.org/job/HBase-TRUNK/2943/])
HBASE-6088 Region splitting not happened for long time due to ZK exception 
while creating RS_ZK_SPLITTING node (Rajesh) (Revision 1343817)

 Result = SUCCESS
ramkrishna : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


  Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 {noformat}
 2012-05-24 01:45:41,363 INFO org.apache.zookeeper.ClientCnxn: Client session 
 timed out, have not heard from server in 26668ms for sessionid 
 0x1377a75f41d0012, closing socket connection and attempting reconnect
 2012-05-24 01:45:41,464 WARN 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient 
 ZooKeeper exception: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 {noformat}
 {noformat}
 2012-05-24 01:45:43,300 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: 
 cleanupCurrentWriter  waiting for transactions to get synced  total 189377 
 synced till here 189365
 2012-05-24 01:45:48,474 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 java.io.IOException: Failed setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:242)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
   at 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.zookeeper.KeeperException$BadVersionException: 
 KeeperErrorCode = BadVersion for 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1246)
   at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:321)
   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:659)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:811)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:747)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.transitionNodeSplitting(SplitTransaction.java:919)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:869)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   ... 5 more
 2012-05-24 01:45:48,476 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of 
 failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 {noformat}
 {noformat}
 2012-05-24 01:47:28,141 ERROR 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144 already exists and this is 
 not a retry
 2012-05-24 01:47:28,142 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 create of 

[jira] [Commented] (HBASE-6122) Backup master does not become Active master after ZK exception

2012-05-29 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284961#comment-13284961
 ] 

Lars Hofhansl commented on HBASE-6122:
--

+1 patch looks good to me.

 Backup master does not become Active master after ZK exception
 --

 Key: HBASE-6122
 URL: https://issues.apache.org/jira/browse/HBASE-6122
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6122_0.92.patch, HBASE-6122_0.94.patch


 - Active master gets ZK expiry exception.
 - Backup master becomes active.
 - The previous active master retries and becomes the backup master.
 Now when the new active master goes down and this backup master tries to come 
 up, it goes down again with the ZK expiry exception it got in the first step.
 {code}
 if (abortNow(msg, t)) {
   if (t != null) LOG.fatal(msg, t);
   else LOG.fatal(msg);
   this.abort = true;
   stop("Aborting");
 }
 {code}
 In ActiveMasterManager.blockUntilBecomingActiveMaster we try to wait till the 
 back up master becomes active. 
 {code}
 synchronized (this.clusterHasActiveMaster) {
   while (this.clusterHasActiveMaster.get() && !this.master.isStopped()) {
     try {
       this.clusterHasActiveMaster.wait();
     } catch (InterruptedException e) {
       // We expect to be interrupted when a master dies, will fall out if so
       LOG.debug("Interrupted waiting for master to die", e);
     }
   }
   if (!clusterStatusTracker.isClusterUp()) {
     this.master.stop("Cluster went down before this master became active");
   }
   if (this.master.isStopped()) {
     return cleanSetOfActiveMaster;
   }
   // Try to become active master again now that there is no active master
   blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 }
 return cleanSetOfActiveMaster;
 {code}
 When the backup master (it is in backup mode because of the ZK exception) 
 once again tries to become active, we never propagate the return value of 
 {code}
 // Try to become active master again now that there is no active master
 blockUntilBecomingActiveMaster(startupStatus, clusterStatusTracker);
 {code}
 We tend to return 'cleanSetOfActiveMaster', which was previously false.
 Because of this, instead of becoming active again, the backup master goes 
 down in the abort() code.  Thanks to my colleague Gopi for reporting this 
 issue.
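The bug and its fix can be shown with a toy model. This is a simplified, hypothetical stand-in for ActiveMasterManager, reduced to the retry path: the point is that the retry's result must be returned, not a stale local flag.

```java
// Toy model of the ActiveMasterManager retry path described above.
public class ActiveMasterRetry {
  private boolean clusterHasActiveMaster = true;

  /** Succeeds on the retry once no other master is active. */
  boolean blockUntilBecomingActiveMaster() {
    if (!clusterHasActiveMaster) {
      return true; // became active
    }
    // The other master died; in the real code wait() returned here.
    clusterHasActiveMaster = false;
    // The fix: return the retry's result. The buggy version ignored this
    // value and fell through to return a previously-false flag, so the
    // backup master aborted instead of becoming active.
    return blockUntilBecomingActiveMaster();
  }
}
```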

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5955) Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it

2012-05-29 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284966#comment-13284966
 ] 

Lars Hofhansl commented on HBASE-5955:
--

Will this have any negative side effects when using older versions of Hadoop?
I.e. is it 100% safe for 0.94?

 Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it
 -

 Key: HBASE-5955
 URL: https://issues.apache.org/jira/browse/HBASE-5955
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.94.1


 Hadoop 2.0.0-alpha depends on Guava 11.0.2. Updating HBase dependencies to 
 match produces the following compilation errors:
 {code}
 [ERROR] SingleSizeCache.java:[41,32] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: package com.google.common.collect
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,4] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,69] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6088) Region splitting not happened for long time due to ZK exception while creating RS_ZK_SPLITTING node

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284971#comment-13284971
 ] 

Hudson commented on HBASE-6088:
---

Integrated in HBase-0.94 #223 (See 
[https://builds.apache.org/job/HBase-0.94/223/])
HBASE-6088 Region splitting not happened for long time due to ZK exception 
while creating RS_ZK_SPLITTING node (Rajesh) (Revision 1343818)

 Result = FAILURE
ramkrishna : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


  Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 {noformat}
 2012-05-24 01:45:41,363 INFO org.apache.zookeeper.ClientCnxn: Client session 
 timed out, have not heard from server in 26668ms for sessionid 
 0x1377a75f41d0012, closing socket connection and attempting reconnect
 2012-05-24 01:45:41,464 WARN 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient 
 ZooKeeper exception: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 {noformat}
 {noformat}
 2012-05-24 01:45:43,300 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: 
 cleanupCurrentWriter  waiting for transactions to get synced  total 189377 
 synced till here 189365
 2012-05-24 01:45:48,474 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 java.io.IOException: Failed setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:242)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
   at 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.zookeeper.KeeperException$BadVersionException: 
 KeeperErrorCode = BadVersion for 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1246)
   at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:321)
   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:659)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:811)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:747)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.transitionNodeSplitting(SplitTransaction.java:919)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:869)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   ... 5 more
 2012-05-24 01:45:48,476 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of 
 failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 {noformat}
 {noformat}
 2012-05-24 01:47:28,141 ERROR 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144 already exists and this is 
 not a retry
 2012-05-24 01:47:28,142 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 create of ephemeral 

[jira] [Commented] (HBASE-6115) NullPointerException is thrown when root and meta table regions are assigning to another RS.

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284972#comment-13284972
 ] 

Hudson commented on HBASE-6115:
---

Integrated in HBase-0.94 #223 (See 
[https://builds.apache.org/job/HBase-0.94/223/])
HBASE-6115 NullPointerException is thrown when root and meta table regions 
are assigning to another RS. (Ram) (Revision 1343820)

 Result = FAILURE
ramkrishna : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java


 NullPointerException is thrown when root and meta table regions are assigning 
 to another RS.
 

 Key: HBASE-6115
 URL: https://issues.apache.org/jira/browse/HBASE-6115
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: rajeshbabu
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 0.94.1

 Attachments: HBASE-6115_0.94.patch


 Let's suppose we have two region servers, RS1 and RS2.
 If the region server hosting the root and meta regions (RS1) goes down, the 
 master will assign them to the other region server, RS2. At that point a 
 NullPointerException is received.
 {code}
 2012-05-04 20:19:52,912 DEBUG 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
 Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@25de152f;
  serverName=
 2012-05-04 20:19:52,914 DEBUG 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: 
 Looked up root region location, 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@25de152f;
  serverName=
 2012-05-04 20:19:52,916 WARN 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Exception 
 running postOpenDeployTasks; region=1028785192
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1483)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1367)
 at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:945)
 at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:801)
 at org.apache.hadoop.hbase.client.HTable.put(HTable.java:776)
 at org.apache.hadoop.hbase.catalog.MetaEditor.put(MetaEditor.java:98)
 at 
 org.apache.hadoop.hbase.catalog.MetaEditor.putToCatalogTable(MetaEditor.java:88)
 at 
 org.apache.hadoop.hbase.catalog.MetaEditor.updateLocation(MetaEditor.java:259)
 at 
 org.apache.hadoop.hbase.catalog.MetaEditor.updateMetaLocation(MetaEditor.java:221)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1625)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:241)
 2012-05-04 20:19:52,920 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
 Closing .META.,,1.1028785192: disabling compactions  flushes
 2012-05-04 20:19:52,920 DEBUG org.apache.hadoop.hbase.regionserver.HRegion: 
 Updates disabled for region .META.,,1.1028785192
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6095) ActiveMasterManager NullPointerException

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284970#comment-13284970
 ] 

Hudson commented on HBASE-6095:
---

Integrated in HBase-0.94 #223 (See 
[https://builds.apache.org/job/HBase-0.94/223/])
HBASE-6095 ActiveMasterManager NullPointerException (Revision 1343838)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java


 ActiveMasterManager NullPointerException
 

 Key: HBASE-6095
 URL: https://issues.apache.org/jira/browse/HBASE-6095
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.94.1

 Attachments: hbase-6095.patch


 It is for 0.94 and 0.92.  Trunk doesn't have the issue.
 {code}
 byte [] bytes =
   ZKUtil.getDataAndWatch(watcher, watcher.masterAddressZNode);
 // TODO: redo this to make it atomic (only added for tests)
 ServerName master = ServerName.parseVersionedServerName(bytes);
 {code}
 bytes could be null.
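The guard implied by "bytes could be null" can be sketched as follows. This is a hedged illustration: getDataAndWatch() returns null when the znode is absent, so parsing must be skipped in that case. The class name and the simplified parse method are stand-ins, not HBase's actual code.

```java
public class MasterAddressReader {
  // Simplified stand-in for ServerName.parseVersionedServerName(byte[]).
  static String parseVersionedServerName(byte[] bytes) {
    return new String(bytes);
  }

  /** Returns null (no active master) instead of throwing an NPE. */
  static String readMaster(byte[] bytes) {
    if (bytes == null) {
      return null; // znode missing, or deleted between watch events
    }
    return parseVersionedServerName(bytes);
  }
}
```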

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284973#comment-13284973
 ] 

stack commented on HBASE-6089:
--

Patch looks good to me.

Rather than synchronize, why not use a ConcurrentSkipListMap?  Also, are we 
sure we have synchronized all accesses to this.regions?  What about its tie to 
this.servers; is that still respected by this patch?
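The ConcurrentSkipListMap suggestion can be demonstrated briefly. A sketch with String keys and values as stand-ins for the real HRegionInfo/ServerName types: the map stays sorted like a TreeMap, but concurrent access from SSH and master initialization cannot throw ConcurrentModificationException.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class RegionsMapDemo {
  // Sorted like a TreeMap, but safe for concurrent readers and writers.
  static final ConcurrentSkipListMap<String, String> regions =
      new ConcurrentSkipListMap<>();

  public static void main(String[] args) {
    regions.put("region-b", "rs1");
    regions.put("region-a", "rs2");
    // Iterators are weakly consistent: a concurrent put() never throws
    // ConcurrentModificationException, unlike a plain TreeMap/HashMap.
    for (Map.Entry<String, String> e : regions.entrySet()) {
      regions.put("region-c", "rs3"); // safe to mutate during iteration
    }
    System.out.println(regions.firstKey()); // prints region-a
  }
}
```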

 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch


 AM.regions map is parallely accessed in SSH and Master initialization leading 
 to ConcurrentModificationException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)
Zhihong Yu created HBASE-6123:
-

 Summary: dev-support/test-patch.sh should compile against hadoop 
2.0.0-alpha instead of hadoop 0.23
 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu


test-patch.sh currently does this:
{code}
  $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess \
    > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
{code}
We should compile against Hadoop 2.0.0-alpha instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5955) Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it

2012-05-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284976#comment-13284976
 ] 

Andrew Purtell commented on HBASE-5955:
---

Those of us who want to run 0.94 on Hadoop 2.0.x can carry around a private 
patch if you want to be 100% safe, Lars. 

 Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it
 -

 Key: HBASE-5955
 URL: https://issues.apache.org/jira/browse/HBASE-5955
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.94.1


 Hadoop 2.0.0-alpha depends on Guava 11.0.2. Updating HBase dependencies to 
 match produces the following compilation errors:
 {code}
 [ERROR] SingleSizeCache.java:[41,32] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: package com.google.common.collect
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,4] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,69] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-6036) Add Cluster-level PB-based calls to HMasterInterface (minus file-format related calls)

2012-05-29 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-6036:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Marking resolved as it has been committed to trunk.  Change Status if you 
disagree.

 Add Cluster-level PB-based calls to HMasterInterface (minus file-format 
 related calls)
 --

 Key: HBASE-6036
 URL: https://issues.apache.org/jira/browse/HBASE-6036
 Project: HBase
  Issue Type: Task
  Components: ipc, master, migration
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 0.96.0

 Attachments: HBASE-6036-v2.patch, HBASE-6036.patch


 This should be a subtask of HBASE-5445, but since that is a subtask, I can't 
 also make this a subtask (apparently).
 Convert the cluster-level calls that do not touch the file-format related 
 calls (see HBASE-5453).  These are:
 IsMasterRunning
 Shutdown
 StopMaster
 Balance
 LoadBalancerIs (was synchronousBalanceSwitch/balanceSwitch)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5955) Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it

2012-05-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284979#comment-13284979
 ] 

Andrew Purtell commented on HBASE-5955:
---

bq. Does version of hadoop even matter for this issue?

You can't compile HBase 0.94 against Hadoop 2.0.x without this patch.

 Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it
 -

 Key: HBASE-5955
 URL: https://issues.apache.org/jira/browse/HBASE-5955
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.94.1


 Hadoop 2.0.0-alpha depends on Guava 11.0.2. Updating HBase dependencies to 
 match produces the following compilation errors:
 {code}
 [ERROR] SingleSizeCache.java:[41,32] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: package com.google.common.collect
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,4] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,69] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 {code}

--




[jira] [Commented] (HBASE-5955) Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it

2012-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284974#comment-13284974
 ] 

stack commented on HBASE-5955:
--

We want to do same for 0.94?  Does version of hadoop even matter for this issue?

Up on dev list, discussion had it that only 0.96 can require 1.0.0 hadoop as 
its minimum version.

 Guava 11 drops MapEvictionListener and Hadoop 2.0.0-alpha requires it
 -

 Key: HBASE-5955
 URL: https://issues.apache.org/jira/browse/HBASE-5955
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Andrew Purtell
Assignee: Lars Hofhansl
 Fix For: 0.94.1


 Hadoop 2.0.0-alpha depends on Guava 11.0.2. Updating HBase dependencies to 
 match produces the following compilation errors:
 {code}
 [ERROR] SingleSizeCache.java:[41,32] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: package com.google.common.collect
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,4] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 [ERROR] 
 [ERROR] SingleSizeCache.java:[94,69] cannot find symbol
 [ERROR] symbol  : class MapEvictionListener
 [ERROR] location: class org.apache.hadoop.hbase.io.hfile.slab.SingleSizeCache
 {code}

--




[jira] [Commented] (HBASE-6059) Replaying recovered edits would make deleted data exist again

2012-05-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284975#comment-13284975
 ] 

ramkrishna.s.vasudevan commented on HBASE-6059:
---

@Lars
Can you have a look at this?

 Replaying recovered edits would make deleted data exist again
 -

 Key: HBASE-6059
 URL: https://issues.apache.org/jira/browse/HBASE-6059
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: 6059v6.txt, HBASE-6059-testcase.patch, HBASE-6059.patch, 
 HBASE-6059v2.patch, HBASE-6059v3.patch, HBASE-6059v4.patch, HBASE-6059v5.patch


 When we replay recovered edits, we use the minSeqId of the Store; this may cause 
 deleted data to appear again.
 Let's see how it happens. Suppose a region with two families (cf1, cf2):
 1. Put one row into the region (put r1,cf1:q1,v1).
 2. Move the region from server A to server B.
 3. Delete the data put in step 1 (delete r1).
 4. Flush this region.
 5. Run a major compaction on this region.
 6. Move the region from server B back to server A.
 7. Abort server A.
 8. After the region is back online, we can still get the deleted data (r1,cf1:q1,v1).
 (When we replay recovered edits we use the minSeqId across the region's Stores; 
 because cf2 has no store files, its seqId is 0, so the edit log of the put is 
 replayed into the region.)
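The race above can be modeled with a small self-contained sketch (plain Java, not HBase code; method and store names are illustrative): comparing an edit's sequence id against the region-wide minimum store seqId replays cf1's already-obsolete put, while a per-family comparison would skip it.

```java
// Illustrative sketch only -- not HBase code. Models why replaying against
// the region-wide minimum seqId can resurrect deleted data.
import java.util.Map;

public class ReplayDecision {
    // Bug-prone check: replay if the edit is newer than the *minimum* store seqId.
    static boolean replayAgainstMin(long editSeqId, Map<String, Long> storeSeqIds) {
        long min = storeSeqIds.values().stream().mapToLong(Long::longValue).min().orElse(0);
        return editSeqId > min;
    }

    // Safer alternative: compare against the seqId of the edit's own family.
    static boolean replayPerFamily(long editSeqId, String family, Map<String, Long> storeSeqIds) {
        return editSeqId > storeSeqIds.getOrDefault(family, 0L);
    }

    public static void main(String[] args) {
        // cf1 was flushed up to seqId 7 (the put at seqId 5 is already durable,
        // then deleted and compacted away); cf2 has no store files, so seqId 0.
        Map<String, Long> seqIds = Map.of("cf1", 7L, "cf2", 0L);
        // Region-wide min is 0, so the stale put (seqId 5) is replayed -- the bug.
        System.out.println(replayAgainstMin(5, seqIds));       // true
        // A per-family check skips it, since cf1 already covers seqId 5.
        System.out.println(replayPerFamily(5, "cf1", seqIds)); // false
    }
}
```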

--




[jira] [Created] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-6124:
--

 Summary: Backport HBASE-6033 to 0.90, 0.92 and 0.94
 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt

HBASE-6033 is pushed into 0.96. It's better to have it for previous versions too.

--




[jira] [Comment Edited] (HBASE-6089) SSH and AM.joinCluster causes Concurrent Modification exception.

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284895#comment-13284895
 ] 

Zhihong Yu edited comment on HBASE-6089 at 5/29/12 6:03 PM:


Patch for trunk looks good.

  was (Author: zhi...@ebaysf.com):
Pstch for trunk looks good.
  
 SSH and AM.joinCluster causes Concurrent Modification exception.
 

 Key: HBASE-6089
 URL: https://issues.apache.org/jira/browse/HBASE-6089
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
 Fix For: 0.90.7, 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6089_92.patch, HBASE-6089_94.patch, 
 HBASE-6089_trunk.patch


 AM.regions map is accessed in parallel by SSH and Master initialization, leading 
 to ConcurrentModificationException.

--




[jira] [Commented] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284980#comment-13284980
 ] 

Jesse Yates commented on HBASE-6123:


looks like the script (in the same section) currently also does:
{code}
# build core and tests
  $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
  if [[ $? != 0 ]] ; then
    JIRA_COMMENT="$JIRA_COMMENT

    -1 hadoop23.  The patch failed to compile against the hadoop 0.23.x profile."
    cleanupAndExit 1
{code}

in the current execution, preventing hadoopQA from actually running the tests.

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu

 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha
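A minimal sketch of the proposed change (the profile id "2.0" is an assumption; the real profile name must be checked against the pom before applying):

```shell
# Sketch of the proposed one-line change in dev-support/test-patch.sh.
# "2.0" as the hadoop profile id is an assumption; verify the profile name
# in the pom before applying.
MVN=mvn
PROJECT_NAME=hbase
PATCH_DIR=/tmp/patchdir
# Old flag: -Dhadoop.profile=23   New flag: -Dhadoop.profile=2.0
CMD="$MVN clean test -DskipTests -Dhadoop.profile=2.0 -D${PROJECT_NAME}PatchProcess"
echo "$CMD > $PATCH_DIR/trunk2.0JavacWarnings.txt 2>&1"
```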

--




[jira] [Commented] (HBASE-6096) AccessController v2

2012-05-29 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284981#comment-13284981
 ] 

Matteo Bertozzi commented on HBASE-6096:


1) no, admin is different... an admin is able to do operations on the cluster 
(region move, unassign, ... and create/delete/modify all the tables)

2) if you grant 'A' you don't get RWC, 
so admins are not able to read but are able to perform actions 
(create/delete/modify) on all tables

3) if you grant 'W' you don't get 'R'

The permission checks are done in this way:
AccessController.permissionGranted()
 * Allow All to READ on .META. and -ROOT-
 * Allow Users with global ADMIN/CREATE to write on .META. (Add/Remove Table...)
 * Allow if user is Table Owner
 * Allow if user has Table Level rights
 * Allow if user has (Table) Family Level rights
 * Allow if user has (Table, Family) Qualifier Level rights
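The cascade above can be sketched as follows (illustrative Java with simplified stand-in types; this is not the actual AccessController API, and the parameter names are assumptions):

```java
// Illustrative cascade of the permission checks listed above; the types and
// names are simplified stand-ins, not the real AccessController code.
public class PermissionCascade {
    enum Action { READ, WRITE, ADMIN, CREATE }

    static boolean permissionGranted(Action action,
                                     boolean isCatalogTable, boolean hasGlobalAdminOrCreate,
                                     boolean isTableOwner, boolean hasTableRight,
                                     boolean hasFamilyRight, boolean hasQualifierRight) {
        // 1. Everyone may READ the catalog tables (.META. and -ROOT-).
        if (isCatalogTable && action == Action.READ) return true;
        // 2. Global ADMIN/CREATE may write to .META. (add/remove table, ...).
        if (isCatalogTable && action == Action.WRITE && hasGlobalAdminOrCreate) return true;
        // 3. The table owner is always allowed.
        if (isTableOwner) return true;
        // 4..6. Table-, then family-, then qualifier-level rights.
        return hasTableRight || hasFamilyRight || hasQualifierRight;
    }

    public static void main(String[] args) {
        // Anyone can read .META.
        System.out.println(permissionGranted(Action.READ,
                true, false, false, false, false, false));  // true
        // A user with no rights anywhere is denied WRITE on a user table.
        System.out.println(permissionGranted(Action.WRITE,
                false, false, false, false, false, false)); // false
    }
}
```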

 AccessController v2
 ---

 Key: HBASE-6096
 URL: https://issues.apache.org/jira/browse/HBASE-6096
 Project: HBase
  Issue Type: Umbrella
  Components: security
Affects Versions: 0.96.0, 0.94.1
Reporter: Andrew Purtell

 Umbrella issue for iteration on the initial AccessController drop.

--




[jira] [Updated] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6124:
---

Attachment: patch-0.94.txt
patch-0.92.txt
patch-0.90.txt

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Updated] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6124:
---

  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Updated] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6124:
---

Status: Patch Available  (was: Open)

Patches for 0.92 and 0.94 are similar to the original one for 0.96, except that 
a new region interface call is added.

Patch for 0.90 is different since the compaction logic is different.

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Updated] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6124:
---

Affects Version/s: 0.90.6
   0.92.1
   0.94.0
Fix Version/s: 0.92.1
   0.94.1
   0.90.7

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.90.6, 0.92.1, 0.94.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.90.7, 0.92.1, 0.94.1

 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Commented] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284988#comment-13284988
 ] 

Hadoop QA commented on HBASE-6124:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530074/patch-0.94.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/2031//console

This message is automatically generated.

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.90.6, 0.92.1, 0.94.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.90.7, 0.92.1, 0.94.1

 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Commented] (HBASE-6068) Secure HBase cluster : Client not able to call some admin APIs

2012-05-29 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284992#comment-13284992
 ] 

Matteo Bertozzi commented on HBASE-6068:


any comments/thoughts on this patch?

 Secure HBase cluster : Client not able to call some admin APIs
 --

 Key: HBASE-6068
 URL: https://issues.apache.org/jira/browse/HBASE-6068
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Anoop Sam John
Assignee: Matteo Bertozzi
 Attachments: HBASE-6068-v0.patch, HBASE-6068-v1.patch, 
 HBASE-6068-v2.patch, HBASE-6068-v3.patch


 In the secure-cluster case, we allow HBase clients to read certain zk nodes by 
 granting global read permissions to all for those nodes. These nodes 
 are the master address znode, the root server znode and the clusterId znode. In 
 ZKUtil.createACL() we can see these node names are specially handled.
 But there are some other client-side admin APIs which make a read call into 
 ZooKeeper from the client. This includes the isTableEnabled() call (there may be 
 others; this is the one I have seen). Here the client directly reads a node in 
 ZooKeeper (the node created for this table) and matches the data to know 
 whether the table is enabled or not.
 Now, any client can read the ZooKeeper nodes it needs for its normal operation, 
 like the master address and root server address. 
 But what if the client calls this API [isTableEnabled()]?

--




[jira] [Commented] (HBASE-6096) AccessController v2

2012-05-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284994#comment-13284994
 ] 

Andrew Purtell commented on HBASE-6096:
---

Thanks Matteo, concur.

IMO, it's preferable to conceptualize ADMIN permission as only an extra bit 
that allows you to interact with the Master on table management concerns.

 AccessController v2
 ---

 Key: HBASE-6096
 URL: https://issues.apache.org/jira/browse/HBASE-6096
 Project: HBase
  Issue Type: Umbrella
  Components: security
Affects Versions: 0.96.0, 0.94.1
Reporter: Andrew Purtell

 Umbrella issue for iteration on the initial AccessController drop.

--




[jira] [Commented] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285001#comment-13285001
 ] 

Andrew Purtell commented on HBASE-6124:
---

+1 nice feature.

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.90.6, 0.92.1, 0.94.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.90.7, 0.92.1, 0.94.1

 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Commented] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285008#comment-13285008
 ] 

Zhihong Yu commented on HBASE-6124:
---

In patch for 0.92:
{code}
+  return CompactionState.valueOf(
+rs.getCompactionState(pair.getFirst().getRegionName()));
{code}
what if the user only updates the jar on the client side and the cluster doesn't 
support getCompactionState()?

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.90.6, 0.92.1, 0.94.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.90.7, 0.92.1, 0.94.1

 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Updated] (HBASE-6097) TestHRegion.testBatchPut is flaky on 0.92

2012-05-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6097:
-

   Resolution: Fixed
Fix Version/s: 0.92.2
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 on a minimal fix for an old branch rather than pulling in a bunch of new code.

Thanks for the patch Gregory.

 TestHRegion.testBatchPut is flaky on 0.92
 -

 Key: HBASE-6097
 URL: https://issues.apache.org/jira/browse/HBASE-6097
 Project: HBase
  Issue Type: Bug
  Components: test, wal
Affects Versions: 0.92.1
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 0.92.2

 Attachments: HBASE-6097.patch


 If I run this test in a loop, I get failures like the following:
 Error Message:
 expected:<1> but was:<2>
 Stack Trace:
 junit.framework.AssertionFailedError: expected:<1> but was:<2>
 at junit.framework.Assert.fail(Assert.java:50)
 at junit.framework.Assert.failNotEquals(Assert.java:287)
 at junit.framework.Assert.assertEquals(Assert.java:67)
 at junit.framework.Assert.assertEquals(Assert.java:134)
 at junit.framework.Assert.assertEquals(Assert.java:140)
 at 
 org.apache.hadoop.hbase.regionserver.TestHRegion.testBatchPut(TestHRegion.java:536)

--




[jira] [Commented] (HBASE-6124) Backport HBASE-6033 to 0.90, 0.92 and 0.94

2012-05-29 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285016#comment-13285016
 ] 

Jimmy Xiang commented on HBASE-6124:


As with current HBase releases, new features are backward compatible, not 
forward compatible.
It should not be used in a mixed deployment.

 Backport HBASE-6033 to 0.90, 0.92 and 0.94
 --

 Key: HBASE-6124
 URL: https://issues.apache.org/jira/browse/HBASE-6124
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.90.6, 0.92.1, 0.94.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.90.7, 0.92.1, 0.94.1

 Attachments: patch-0.90.txt, patch-0.92.txt, patch-0.94.txt


 HBASE-6033 is pushed into 0.96. It's better to have it for previous versions 
 too.

--




[jira] [Commented] (HBASE-6055) Snapshots in HBase 0.96

2012-05-29 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285017#comment-13285017
 ] 

Jesse Yates commented on HBASE-6055:


@gaojinchao: I'm definitely still working on it - it's just been a busy week, 
what with hbasecon, the hackathon and the rebase, this has been on the back 
burner. This week I'm planning to have a working first cut. Keep in mind that 
the code on github is a rough preview - definitely not the finished version, so 
no guarantees on polish or even correctness. That said, any feedback is 
appreciated.

@Jon - I'm working on a thorough response, thanks for the questions.

 Snapshots in HBase 0.96
 ---

 Key: HBASE-6055
 URL: https://issues.apache.org/jira/browse/HBASE-6055
 Project: HBase
  Issue Type: New Feature
  Components: client, master, regionserver, zookeeper
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: 0.96.0

 Attachments: Snapshots in HBase.docx


 Continuation of HBASE-50 for the current trunk. Since the implementation has 
 drastically changed, opening as a new ticket.

--




[jira] [Commented] (HBASE-6108) Use HRegion.closeHRegion instead of HRegion.close() and HRegion.getLog().close()

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285027#comment-13285027
 ] 

Hudson commented on HBASE-6108:
---

Integrated in HBase-TRUNK #2944 (See 
[https://builds.apache.org/job/HBase-TRUNK/2944/])
HBASE-6108 Use HRegion.closeHRegion instead of HRegion.close() and 
HRegion.getLog().close() (Revision 1343857)

 Result = FAILURE
stack : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultipleColumnPrefixFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestOpenedRegionHandler.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeepDeletes.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestResettingCounters.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSeekOptimizations.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java


 Use HRegion.closeHRegion instead of HRegion.close() and 
 HRegion.getLog().close()
 

 Key: HBASE-6108
 URL: https://issues.apache.org/jira/browse/HBASE-6108
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-6108.patch


 There are a bunch of places in the code like this:
 region.close();
 region.getLog().closeAndDelete();
 Instead of the better:
 HRegion.closeHRegion(region);
 We should change these for a few reasons:
 1) If we ever need to change the implementation, it's easier to change in one 
 place
 2) closeHRegion properly checks for nulls.  There are a few places where this 
 could make a difference, for example in TestOpenedRegionHandler.java it's 
 possible that an exception can be thrown before region is assigned and thus 
 region.close() could throw an NPE.  closeHRegion avoids this issue.
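The null-safety argument can be sketched with simplified stand-ins for HRegion and its log (this is not the real HBase API, just a model of why the single helper is safer than per-call-site cleanup):

```java
// Simplified model of why a closeHRegion-style helper beats calling
// region.close() and region.getLog().closeAndDelete() at every call site.
public class CloseHelper {
    static class Log { boolean closed; void closeAndDelete() { closed = true; } }
    static class Region {
        final Log log = new Log();
        boolean closed;
        void close() { closed = true; }
        Log getLog() { return log; }
    }

    // One place that checks for null before touching the region or its log.
    static void closeRegion(Region region) {
        if (region == null) return;          // avoids the NPE described above
        region.close();
        if (region.getLog() != null) {
            region.getLog().closeAndDelete();
        }
    }

    public static void main(String[] args) {
        closeRegion(null);                   // no NPE on an unassigned region
        Region r = new Region();
        closeRegion(r);
        System.out.println(r.closed && r.log.closed); // true
    }
}
```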

--




[jira] [Commented] (HBASE-6114) CacheControl flags should be tunable per table schema per CF

2012-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285030#comment-13285030
 ] 

stack commented on HBASE-6114:
--

+1 on patch

Change shouldCacheDataOnWrite to isCacheDataOnWrite on commit and same for all 
other should methods.





 CacheControl flags should be tunable per table schema per CF
 

 Key: HBASE-6114
 URL: https://issues.apache.org/jira/browse/HBASE-6114
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.2, 0.96.0, 0.94.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 6114-0.92.patch, 6114-0.94.patch, 6114-trunk.patch


 CacheControl flags should be tunable per table schema per CF, especially
 cacheDataOnWrite, also cacheIndexesOnWrite and cacheBloomsOnWrite.
 It looks like Store uses CacheConfig(Configuration conf, HColumnDescriptor 
 family) to construct the CacheConfig, so it's a simple change there to 
 override configuration properties with values of table schema attributes if 
 present.
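The override idea can be sketched as below (illustrative Java; the attribute and configuration key names are assumptions, not the actual CacheConfig constants):

```java
// Sketch of per-CF override: a table-schema attribute, if present, wins over
// the site configuration. The key names here are assumptions.
import java.util.Map;

public class CacheFlagResolution {
    static boolean cacheDataOnWrite(Map<String, String> siteConf,
                                    Map<String, String> cfSchema) {
        // A schema attribute set on the column family takes precedence...
        if (cfSchema.containsKey("CACHE_DATA_ON_WRITE")) {
            return Boolean.parseBoolean(cfSchema.get("CACHE_DATA_ON_WRITE"));
        }
        // ...falling back to the cluster-wide configuration, default false.
        return Boolean.parseBoolean(
                siteConf.getOrDefault("hbase.rs.cacheblocksonwrite", "false"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of("hbase.rs.cacheblocksonwrite", "false");
        // The CF schema overrides the cluster default for just this family.
        System.out.println(cacheDataOnWrite(conf, Map.of("CACHE_DATA_ON_WRITE", "true"))); // true
        System.out.println(cacheDataOnWrite(conf, Map.of()));                              // false
    }
}
```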

--




[jira] [Commented] (HBASE-6095) ActiveMasterManager NullPointerException

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285033#comment-13285033
 ] 

Hudson commented on HBASE-6095:
---

Integrated in HBase-0.92 #425 (See 
[https://builds.apache.org/job/HBase-0.92/425/])
HBASE-6095 ActiveMasterManager NullPointerException (Revision 1343837)

 Result = FAILURE
stack : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java


 ActiveMasterManager NullPointerException
 

 Key: HBASE-6095
 URL: https://issues.apache.org/jira/browse/HBASE-6095
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.94.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Minor
 Fix For: 0.94.1

 Attachments: hbase-6095.patch


 It is for 0.94 and 0.92.  Trunk doesn't have the issue.
 {code}
   byte [] bytes =
     ZKUtil.getDataAndWatch(watcher, watcher.masterAddressZNode);
   // TODO: redo this to make it atomic (only added for tests)
   ServerName master = ServerName.parseVersionedServerName(bytes);
 {code}
 bytes could be null.
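A hedged sketch of the guard such a fix needs: treat a missing znode (null bytes) as "no master" rather than passing null into the parser. parseServerName below is a stand-in for ServerName.parseVersionedServerName, not the real method.

```java
import java.nio.charset.StandardCharsets;

// Sketch: null-safe handling of the master address znode contents.
public class MasterAddressSketch {
    static String parseServerName(byte[] bytes) {
        if (bytes == null) {
            // Znode absent, or deleted between setting the watch and the read.
            return null;
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(parseServerName(null));  // no NPE, just null
        System.out.println(parseServerName("host,16000,1".getBytes(StandardCharsets.UTF_8)));
    }
}
```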





[jira] [Commented] (HBASE-6088) Region splitting not happened for long time due to ZK exception while creating RS_ZK_SPLITTING node

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285034#comment-13285034
 ] 

Hudson commented on HBASE-6088:
---

Integrated in HBase-0.92 #425 (See 
[https://builds.apache.org/job/HBase-0.92/425/])
HBASE-6088 Region splitting not happened for long time due to ZK exception 
while creating RS_ZK_SPLITTING node (Rajesh) (Revision 1343819)

 Result = FAILURE
ramkrishna : 
Files : 
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
* 
/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


  Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 

 Key: HBASE-6088
 URL: https://issues.apache.org/jira/browse/HBASE-6088
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: rajeshbabu
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-6088_92.patch, HBASE-6088_94.patch, 
 HBASE-6088_94_2.patch, HBASE-6088_94_3.patch, HBASE-6088_trunk.patch, 
 HBASE-6088_trunk_2.patch, HBASE-6088_trunk_3.patch, HBASE-6088_trunk_4.patch


 Region splitting not happened for long time due to ZK exception while 
 creating RS_ZK_SPLITTING node
 {noformat}
 2012-05-24 01:45:41,363 INFO org.apache.zookeeper.ClientCnxn: Client session 
 timed out, have not heard from server in 26668ms for sessionid 
 0x1377a75f41d0012, closing socket connection and attempting reconnect
 2012-05-24 01:45:41,464 WARN 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient 
 ZooKeeper exception: 
 org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
 = ConnectionLoss for /hbase/unassigned/bd1079bf948c672e493432020dc0e144
 {noformat}
 {noformat}
 2012-05-24 01:45:43,300 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLog: 
 cleanupCurrentWriter  waiting for transactions to get synced  total 189377 
 synced till here 189365
 2012-05-24 01:45:48,474 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 java.io.IOException: Failed setting SPLITTING znode on 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:242)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:450)
   at 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.zookeeper.KeeperException$BadVersionException: 
 KeeperErrorCode = BadVersion for 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1246)
   at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:321)
   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:659)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:811)
   at 
 org.apache.hadoop.hbase.zookeeper.ZKAssign.transitionNode(ZKAssign.java:747)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.transitionNodeSplitting(SplitTransaction.java:919)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createNodeSplitting(SplitTransaction.java:869)
   at 
 org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:239)
   ... 5 more
 2012-05-24 01:45:48,476 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Successful rollback of 
 failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.
 {noformat}
 {noformat}
 2012-05-24 01:47:28,141 ERROR 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node 
 /hbase/unassigned/bd1079bf948c672e493432020dc0e144 already exists and this is 
 not a retry
 2012-05-24 01:47:28,142 INFO 
 org.apache.hadoop.hbase.regionserver.SplitRequest: Running rollback/cleanup 
 of failed split of 
 ufdr,011365398471659,1337823505339.bd1079bf948c672e493432020dc0e144.; Failed 
 create of ephemeral 

[jira] [Commented] (HBASE-5916) RS restart just before master intialization we make the cluster non operative

2012-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285035#comment-13285035
 ] 

Hudson commented on HBASE-5916:
---

Integrated in HBase-0.92 #425 (See 
[https://builds.apache.org/job/HBase-0.92/425/])
HBASE-5916 RS restart just before master intialization we make the cluster 
non operative (Rajesh) (Revision 1343824)

 Result = FAILURE
ramkrishna : 
Files : 
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
* 
/hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenMasterInitializing.java


 RS restart just before master intialization we make the cluster non operative
 -

 Key: HBASE-5916
 URL: https://issues.apache.org/jira/browse/HBASE-5916
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1, 0.94.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5916_92.patch, HBASE-5916_94.patch, 
 HBASE-5916_trunk.patch, HBASE-5916_trunk_1.patch, HBASE-5916_trunk_1.patch, 
 HBASE-5916_trunk_2.patch, HBASE-5916_trunk_3.patch, HBASE-5916_trunk_4.patch, 
 HBASE-5916_trunk_v5.patch, HBASE-5916_trunk_v6.patch, 
 HBASE-5916_trunk_v7.patch, HBASE-5916_trunk_v8.patch, 
 HBASE-5916_trunk_v9.patch, HBASE-5916v8.patch


 Consider a case where my master is getting restarted.  RS that was alive when 
 the master restart started, gets restarted before the master initializes the 
 ServerShutDownHandler.
 {code}
 serverShutdownHandlerEnabled = true;
 {code}
 In this case, when the RS tries to register with the master, the master will
 try to expire the server, but the server cannot be expired because the
 serverShutdownHandler is not yet enabled.
 This can happen when only a single RS gets restarted, or when all the RSs
 get restarted at the same time (before assignRootAndMeta).
 {code}
 LOG.info(message);
 if (existingServer.getStartcode() < serverName.getStartcode()) {
   LOG.info("Triggering server recovery; existingServer " +
     existingServer + " looks stale, new server:" + serverName);
   expireServer(existingServer);
 }
 {code}
 If another RS is brought up then the cluster comes back to normalcy.
 May be a very corner case.
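One hedged way to avoid losing the expiration in this window (a sketch of the general direction, not the committed patch): remember servers that needed expiring before the handler was enabled, and process them once master initialization gets far enough.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: defer expirations that arrive before the ServerShutdownHandler
// is enabled, instead of silently dropping them.
public class DeferredExpirationSketch {
    private boolean serverShutdownHandlerEnabled = false;
    private final Deque<String> deferredExpirations = new ArrayDeque<>();
    final Deque<String> expired = new ArrayDeque<>(); // stands in for SSH submission

    void expireServer(String server) {
        if (!serverShutdownHandlerEnabled) {
            deferredExpirations.add(server); // too early: remember, don't lose
            return;
        }
        expired.add(server); // normally this would submit a ServerShutdownHandler
    }

    // Called once master initialization reaches the point where SSH is safe.
    void enableServerShutdownHandler() {
        serverShutdownHandlerEnabled = true;
        while (!deferredExpirations.isEmpty()) {
            expireServer(deferredExpirations.poll());
        }
    }
}
```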





[jira] [Created] (HBASE-6125) Expose HBase config properties via JMX

2012-05-29 Thread Otis Gospodnetic (JIRA)
Otis Gospodnetic created HBASE-6125:
---

 Summary: Expose HBase config properties via JMX
 Key: HBASE-6125
 URL: https://issues.apache.org/jira/browse/HBASE-6125
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.94.0
Reporter: Otis Gospodnetic
Priority: Minor
 Fix For: 0.96.0


It would make sense to expose HBase config properties via JMX so one can 
understand how HBase was configured by looking at JMX.

See:
http://search-hadoop.com/m/siI2o1rGyAj2subj=Exposing+config+properties+via+JMX
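One hedged way to do this with stock JMX (an illustrative sketch, not HBase code; ConfigMBeanSketch and the plain Map source are assumptions): register a read-only DynamicMBean whose attributes mirror the configuration entries.

```java
import java.lang.management.ManagementFactory;
import java.util.Map;
import javax.management.*;

// Sketch: expose a snapshot of config key/values as read-only JMX attributes.
public class ConfigMBeanSketch implements DynamicMBean {
    private final Map<String, String> conf;

    public ConfigMBeanSketch(Map<String, String> conf) { this.conf = conf; }

    @Override public Object getAttribute(String name) { return conf.get(name); }
    @Override public void setAttribute(Attribute a) { /* read-only */ }
    @Override public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String n : names) list.add(new Attribute(n, conf.get(n)));
        return list;
    }
    @Override public AttributeList setAttributes(AttributeList a) { return new AttributeList(); }
    @Override public Object invoke(String op, Object[] p, String[] sig) { return null; }
    @Override public MBeanInfo getMBeanInfo() {
        MBeanAttributeInfo[] attrs = conf.keySet().stream()
            .map(k -> new MBeanAttributeInfo(k, "java.lang.String", k, true, false, false))
            .toArray(MBeanAttributeInfo[]::new);
        return new MBeanInfo(ConfigMBeanSketch.class.getName(),
            "Configuration snapshot", attrs, null, null, null);
    }

    // The ObjectName here is illustrative, not a real HBase bean name.
    public static void register(Map<String, String> conf) throws JMException {
        ManagementFactory.getPlatformMBeanServer().registerMBean(
            new ConfigMBeanSketch(conf), new ObjectName("sketch:type=Config"));
    }
}
```

A jconsole/JMX client could then browse the config keys as attributes of that bean.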





[jira] [Updated] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-6123:
--

Attachment: 6123.txt

Patch performs compilation against hadoop 2.0

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Commented] (HBASE-6114) CacheControl flags should be tunable per table schema per CF

2012-05-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285058#comment-13285058
 ] 

Andrew Purtell commented on HBASE-6114:
---

bq. Change shouldCacheDataOnWrite to isCacheDataOnWrite on commit and same for 
all other should methods.

Ick, but you're the boss. :-)

 CacheControl flags should be tunable per table schema per CF
 

 Key: HBASE-6114
 URL: https://issues.apache.org/jira/browse/HBASE-6114
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.2, 0.96.0, 0.94.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 6114-0.92.patch, 6114-0.94.patch, 6114-trunk.patch


 CacheControl flags should be tunable per table schema per CF, especially
 cacheDataOnWrite, also cacheIndexesOnWrite and cacheBloomsOnWrite.
 It looks like Store uses CacheConfig(Configuration conf, HColumnDescriptor 
 family) to construct the CacheConfig, so it's a simple change there to 
 override configuration properties with values of table schema attributes if 
 present.





[jira] [Commented] (HBASE-6114) CacheControl flags should be tunable per table schema per CF

2012-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285076#comment-13285076
 ] 

stack commented on HBASE-6114:
--

Just going by the JavaBean convention for methods that return boolean. E.g. 
http://docstore.mik.ua/orelly/java-ent/jnut/ch06_02.htm
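The convention in miniature (CacheFlagsBean is an illustrative name, not the patch's class): boolean properties are read through an `is` getter rather than `should`/`get`.

```java
// Sketch: JavaBean naming for a boolean property.
public class CacheFlagsBean {
    private boolean cacheDataOnWrite;

    public boolean isCacheDataOnWrite() {      // boolean getter: is<Property>()
        return cacheDataOnWrite;
    }

    public void setCacheDataOnWrite(boolean v) { // setter: set<Property>()
        cacheDataOnWrite = v;
    }
}
```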

 CacheControl flags should be tunable per table schema per CF
 

 Key: HBASE-6114
 URL: https://issues.apache.org/jira/browse/HBASE-6114
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.2, 0.96.0, 0.94.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 6114-0.92.patch, 6114-0.94.patch, 6114-trunk.patch


 CacheControl flags should be tunable per table schema per CF, especially
 cacheDataOnWrite, also cacheIndexesOnWrite and cacheBloomsOnWrite.
 It looks like Store uses CacheConfig(Configuration conf, HColumnDescriptor 
 family) to construct the CacheConfig, so it's a simple change there to 
 override configuration properties with values of table schema attributes if 
 present.





[jira] [Commented] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285078#comment-13285078
 ] 

Zhihong Yu commented on HBASE-6123:
---

Will integrate the patch if there is no objection.

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Commented] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285079#comment-13285079
 ] 

Jesse Yates commented on HBASE-6123:


+1 on patch, looks good. We should consider adding help:active-profiles -X so 
we can check the output, at least for the moment.

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Commented] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285084#comment-13285084
 ] 

Jesse Yates commented on HBASE-6123:


As an aside, I don't think -D${PROJECT_NAME}PatchProcess is actually doing 
anything for us. Is there a reason we keep it around, or is it just cruft from 
the original port of the hadoop code?

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Commented] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285085#comment-13285085
 ] 

Zhihong Yu commented on HBASE-6123:
---

Thanks for the suggestion, Jesse.

Patch integrated to trunk.

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Assigned] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu reassigned HBASE-6123:
-

Assignee: Zhihong Yu

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Updated] (HBASE-6113) [eclipse] Fix eclipse import of hbase-assembly null pointer

2012-05-29 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6113:
---

Status: Open  (was: Patch Available)

canceling and resubmitting for hadoopQA

 [eclipse] Fix eclipse import of hbase-assembly null pointer
 ---

 Key: HBASE-6113
 URL: https://issues.apache.org/jira/browse/HBASE-6113
 Project: HBase
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: hbase-6113.patch


 Occasionally, eclipse throws a null pointer exception when attempting to import 
 all the modules via m2eclipse.





[jira] [Updated] (HBASE-6113) [eclipse] Fix eclipse import of hbase-assembly null pointer

2012-05-29 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6113:
---

Release Note: One-liner fix. Should make people using eclipse happier since 
the project should now import.  (was: One-liner fix. Should make people using 
eclipse happier.)
  Status: Patch Available  (was: Open)

 [eclipse] Fix eclipse import of hbase-assembly null pointer
 ---

 Key: HBASE-6113
 URL: https://issues.apache.org/jira/browse/HBASE-6113
 Project: HBase
  Issue Type: Bug
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: hbase-6113.patch


 Occasionally, eclipse throws a null pointer exception when attempting to import 
 all the modules via m2eclipse.





[jira] [Updated] (HBASE-6107) Distributed log splitting hangs even there is no task under /hbase/splitlog

2012-05-29 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-6107:
--

Attachment: 6107_v3-new.patch

Re-attaching patch v3.

 Distributed log splitting hangs even there is no task under /hbase/splitlog
 ---

 Key: HBASE-6107
 URL: https://issues.apache.org/jira/browse/HBASE-6107
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 6107_v3-new.patch, hbase-6107.patch, 
 hbase-6107_v3-new.patch, hbase_6107_v2.patch, hbase_6107_v3.patch


 Sometimes, master web UI shows the distributed log splitting is going on, 
 waiting for one last task to be done.  However, in ZK, there is no task under 
 /hbase/splitlog at all.





[jira] [Updated] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-4720:
---

Status: Open  (was: Patch Available)

 Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
 client/server 
 

 Key: HBASE-4720
 URL: https://issues.apache.org/jira/browse/HBASE-4720
 Project: HBase
  Issue Type: Improvement
Reporter: Daniel Lord
Assignee: Mubarak Seyed
 Fix For: 0.94.1

 Attachments: 4720_trunk.patch, HBASE-4720.trunk.v1.patch, 
 HBASE-4720.trunk.v2.patch, HBASE-4720.trunk.v3.patch, 
 HBASE-4720.trunk.v4.patch, HBASE-4720.trunk.v5.patch, 
 HBASE-4720.trunk.v6.patch, HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, 
 HBASE-4720.v3.patch


 I have several large application/HBase clusters where an application node 
 will occasionally need to talk to HBase from a different cluster.  In order 
 to help ensure some of my consistency guarantees I have a sentinel table that 
 is updated atomically as users interact with the system.  This works quite 
 well for the regular hbase client but the REST client does not implement 
 the checkAndPut and checkAndDelete operations.  This exposes the application 
 to some race conditions that have to be worked around.  It would be ideal if 
 the same checkAndPut/checkAndDelete operations could be supported by the REST 
 client.





[jira] [Updated] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-4720:
---

Fix Version/s: (was: 0.94.1)
   0.96.0

 Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
 client/server 
 

 Key: HBASE-4720
 URL: https://issues.apache.org/jira/browse/HBASE-4720
 Project: HBase
  Issue Type: Improvement
Reporter: Daniel Lord
Assignee: Mubarak Seyed
 Fix For: 0.96.0

 Attachments: 4720_trunk.patch, HBASE-4720.trunk.v1.patch, 
 HBASE-4720.trunk.v2.patch, HBASE-4720.trunk.v3.patch, 
 HBASE-4720.trunk.v4.patch, HBASE-4720.trunk.v5.patch, 
 HBASE-4720.trunk.v6.patch, HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, 
 HBASE-4720.v3.patch


 I have several large application/HBase clusters where an application node 
 will occasionally need to talk to HBase from a different cluster.  In order 
 to help ensure some of my consistency guarantees I have a sentinel table that 
 is updated atomically as users interact with the system.  This works quite 
 well for the regular hbase client but the REST client does not implement 
 the checkAndPut and checkAndDelete operations.  This exposes the application 
 to some race conditions that have to be worked around.  It would be ideal if 
 the same checkAndPut/checkAndDelete operations could be supported by the REST 
 client.





[jira] [Updated] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-4720:
---

Attachment: 4720_trunk.patch

 Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
 client/server 
 

 Key: HBASE-4720
 URL: https://issues.apache.org/jira/browse/HBASE-4720
 Project: HBase
  Issue Type: Improvement
Reporter: Daniel Lord
Assignee: Mubarak Seyed
 Fix For: 0.96.0

 Attachments: 4720_trunk.patch, HBASE-4720.trunk.v1.patch, 
 HBASE-4720.trunk.v2.patch, HBASE-4720.trunk.v3.patch, 
 HBASE-4720.trunk.v4.patch, HBASE-4720.trunk.v5.patch, 
 HBASE-4720.trunk.v6.patch, HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, 
 HBASE-4720.v3.patch


 I have several large application/HBase clusters where an application node 
 will occasionally need to talk to HBase from a different cluster.  In order 
 to help ensure some of my consistency guarantees I have a sentinel table that 
 is updated atomically as users interact with the system.  This works quite 
 well for the regular hbase client but the REST client does not implement 
 the checkAndPut and checkAndDelete operations.  This exposes the application 
 to some race conditions that have to be worked around.  It would be ideal if 
 the same checkAndPut/checkAndDelete operations could be supported by the REST 
 client.





[jira] [Updated] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server

2012-05-29 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-4720:
---

Status: Patch Available  (was: Open)

I enhanced Mubarak's patch a little bit and rebased it to the latest trunk 
branch.

Unit test is green:

mvn -PrunMediumTests -Dtest=Test*Resource clean test

 Implement atomic update operations (checkAndPut, checkAndDelete) for REST 
 client/server 
 

 Key: HBASE-4720
 URL: https://issues.apache.org/jira/browse/HBASE-4720
 Project: HBase
  Issue Type: Improvement
Reporter: Daniel Lord
Assignee: Mubarak Seyed
 Fix For: 0.96.0

 Attachments: 4720_trunk.patch, HBASE-4720.trunk.v1.patch, 
 HBASE-4720.trunk.v2.patch, HBASE-4720.trunk.v3.patch, 
 HBASE-4720.trunk.v4.patch, HBASE-4720.trunk.v5.patch, 
 HBASE-4720.trunk.v6.patch, HBASE-4720.trunk.v7.patch, HBASE-4720.v1.patch, 
 HBASE-4720.v3.patch


 I have several large application/HBase clusters where an application node 
 will occasionally need to talk to HBase from a different cluster.  In order 
 to help ensure some of my consistency guarantees I have a sentinel table that 
 is updated atomically as users interact with the system.  This works quite 
 well for the regular hbase client but the REST client does not implement 
 the checkAndPut and checkAndDelete operations.  This exposes the application 
 to some race conditions that have to be worked around.  It would be ideal if 
 the same checkAndPut/checkAndDelete operations could be supported by the REST 
 client.





[jira] [Updated] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-6123:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: In Progress)

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha





[jira] [Created] (HBASE-6126) Fix broke TestLocalHBaseCluster in 0.92/0.94

2012-05-29 Thread stack (JIRA)
stack created HBASE-6126:


 Summary: Fix broke TestLocalHBaseCluster in 0.92/0.94
 Key: HBASE-6126
 URL: https://issues.apache.org/jira/browse/HBASE-6126
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack


Related to HBASE-6100





[jira] [Updated] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HBASE-6123:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Hadoop QA passed compilation against hadoop 2.0:
https://builds.apache.org/job/PreCommit-HBASE-Build/2032/console

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 \
 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha

--




[jira] [Work started] (HBASE-6123) dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead of hadoop 0.23

2012-05-29 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-6123 started by Zhihong Yu.

 dev-support/test-patch.sh should compile against hadoop 2.0.0-alpha instead 
 of hadoop 0.23
 --

 Key: HBASE-6123
 URL: https://issues.apache.org/jira/browse/HBASE-6123
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Zhihong Yu
 Attachments: 6123.txt


 test-patch.sh currently does this:
 {code}
   $MVN clean test -DskipTests -Dhadoop.profile=23 \
 -D${PROJECT_NAME}PatchProcess > $PATCH_DIR/trunk23JavacWarnings.txt 2>&1
 {code}
 we should compile against hadoop 2.0.0-alpha

--




[jira] [Updated] (HBASE-6126) Fix broke TestLocalHBaseCluster in 0.92/0.94

2012-05-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6126:
-

Attachment: fixTestLocalHBaseCluster.txt

A one-line fix took care of the local failure for me.  Going to commit.

 Fix broke TestLocalHBaseCluster in 0.92/0.94
 

 Key: HBASE-6126
 URL: https://issues.apache.org/jira/browse/HBASE-6126
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: fixTestLocalHBaseCluster.txt


 Related to HBASE-6100

--




[jira] [Resolved] (HBASE-6126) Fix broke TestLocalHBaseCluster in 0.92/0.94

2012-05-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6126.
--

   Resolution: Fixed
Fix Version/s: 0.94.1
   0.92.2

Applied to 0.92 and 0.94 branches.  TRUNK already had this change.

 Fix broke TestLocalHBaseCluster in 0.92/0.94
 

 Key: HBASE-6126
 URL: https://issues.apache.org/jira/browse/HBASE-6126
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.2, 0.94.1

 Attachments: fixTestLocalHBaseCluster.txt


 Related to HBASE-6100

--



