[jira] [Created] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread chunhui shen (JIRA)
chunhui shen created HBASE-7669:
---

 Summary: ROOT region wouldn't be handled by PRI-IPC-Handler
 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0


RPC requests for the ROOT region should be handled by the PRI-IPC-Handler, just 
the same as for the META region.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7669:


Attachment: HBASE-7669.patch

 ROOT region wouldn't be handled by PRI-IPC-Handler
 ---

 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7669.patch





[jira] [Created] (HBASE-7670) Synchronized operation in CatalogTracker would block handling ZK Event for long time

2013-01-25 Thread chunhui shen (JIRA)
chunhui shen created HBASE-7670:
---

 Summary: Synchronized operation in CatalogTracker would block 
handling ZK Event for long time
 Key: HBASE-7670
 URL: https://issues.apache.org/jira/browse/HBASE-7670
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0
 Attachments: HBASE-7670.patch

In our testing, we found that ZK events were not processed by the master for a 
long time. It seems one ZK-Event-Handler thread blocked them.
Attaching some logs from the master:
{code}
2013-01-16 22:18:55,667 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: 
Handling transition=RS_ZK_REGION_OPENED, 
2013-01-16 22:18:56,270 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: 
Handling transition=RS_ZK_REGION_OPENED, 
...
2013-01-16 23:55:33,259 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: 
Retrying
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=100, exceptions:
at 
org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:183)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:676)
at org.apache.hadoop.hbase.catalog.MetaReader.get(MetaReader.java:247)
at 
org.apache.hadoop.hbase.catalog.MetaReader.getRegion(MetaReader.java:349)
at 
org.apache.hadoop.hbase.catalog.MetaReader.readRegionLocation(MetaReader.java:289)
at 
org.apache.hadoop.hbase.catalog.MetaReader.getMetaRegionLocation(MetaReader.java:276)
at 
org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:424)
at 
org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:489)
at 
org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:451)
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:289)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2013-01-16 23:55:33,261 WARN org.apache.hadoop.hbase.master.AssignmentManager: 
Attempted to handle region transition for server but server is not online
{code}

Between 2013-01-16 22:18:56 and 2013-01-16 23:55:33, there are no logs at all 
about handling ZK events.


{code}
this.metaNodeTracker = new MetaNodeTracker(zookeeper, throwableAborter) {
  public void nodeDeleted(String path) {
    if (!path.equals(node)) return;
    ct.resetMetaLocation();
  }
};

public void resetMetaLocation() {
  LOG.debug("Current cached META location, " + metaLocation +
    ", is not valid, resetting");
  synchronized (this.metaAvailable) {
    this.metaAvailable.set(false);
    this.metaAvailable.notifyAll();
  }
}

private AdminProtocol getMetaServerConnection() {
  synchronized (metaAvailable) {
    ...
    ServerName newLocation = MetaReader.getMetaRegionLocation(this);
    ...
  }
}
{code}

From the above code we can see that nodeDeleted() waits on synchronized 
(metaAvailable) until MetaReader.getMetaRegionLocation(this) is done; however, 
getMetaRegionLocation() could keep retrying for a long time.
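The blocking pattern can be reproduced in isolation. Below is a minimal sketch (hypothetical class, not the actual CatalogTracker code): one thread holds the metaAvailable monitor while it "retries", so the thread delivering the ZK event stalls on the same monitor for the whole retry.

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical illustration of the pattern described above: a long-running
// operation holds a monitor that the ZK event callback needs, so event
// handling stalls until the retries finish.
public class MonitorBlockDemo {
  private final Object metaAvailable = new Object();
  private final CountDownLatch holding = new CountDownLatch(1);

  // Simulates getMetaServerConnection(): holds the monitor while "retrying".
  public void longRetryHoldingMonitor(long millis) throws InterruptedException {
    synchronized (metaAvailable) {
      holding.countDown();   // signal that the monitor is now held
      Thread.sleep(millis);  // stands in for the MetaReader retry loop
    }
  }

  // Simulates resetMetaLocation() called from nodeDeleted().
  public void resetMetaLocation() {
    synchronized (metaAvailable) {
      // the ZK event path blocks on monitor entry above
    }
  }

  // Returns how long the simulated ZK event callback was blocked.
  public static long demo() throws Exception {
    MonitorBlockDemo d = new MonitorBlockDemo();
    Thread retry = new Thread(() -> {
      try { d.longRetryHoldingMonitor(500); } catch (InterruptedException e) { }
    });
    retry.start();
    d.holding.await();             // wait until the retry thread holds the monitor
    long start = System.nanoTime();
    d.resetMetaLocation();         // simulated nodeDeleted(): stalls ~500 ms
    long waitedMs = (System.nanoTime() - start) / 1_000_000;
    retry.join();
    return waitedMs;
  }

  public static void main(String[] args) throws Exception {
    System.out.println("event callback blocked for ~" + demo() + " ms");
  }
}
```

With a 100-retry MetaReader loop in place of the sleep, the same stall can last for hours, matching the log gap above.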



[jira] [Created] (HBASE-7671) Flushing memstore again after last failure could cause data loss

2013-01-25 Thread chunhui shen (JIRA)
chunhui shen created HBASE-7671:
---

 Summary: Flushing memstore again after last failure could cause 
data loss
 Key: HBASE-7671
 URL: https://issues.apache.org/jira/browse/HBASE-7671
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0


See the following logs first:
{code}
2013-01-23 18:58:38,801 INFO org.apache.hadoop.hbase.regionserver.Store: 
Flushed , sequenceid=9746535080, memsize=101.8m, into tmp file 
hdfs://dw77.kgb.sqa.cm4:9900/hbase-test3/writetest1/8dc14e35b4d7c0e481e0bb30849cff7d/.tmp/bebeeecc56364b6c8126cf1dc6782a25

2013-01-23 18:58:41,982 WARN org.apache.hadoop.hbase.regionserver.MemStore: 
Snapshot called again without clearing previous. Doing nothing. Another ongoing 
flush or did we fail last attempt?


2013-01-23 18:58:43,274 INFO org.apache.hadoop.hbase.regionserver.Store: 
Flushed , sequenceid=9746599334, memsize=101.8m, into tmp file 
hdfs://dw77.kgb.sqa.cm4:9900/hbase-test3/writetest1/8dc14e35b4d7c0e481e0bb30849cff7d/.tmp/4eede32dc469480bb3d469aaff332313
{code}

The first memstore flush fails in commitFile() (the first log entry above), 
which triggers a server abort; but another flush comes immediately afterwards 
(possibly caused by a move/split; the third log entry above) and succeeds.

For the same memstore snapshot we get different sequenceids, which causes data 
loss when replaying log edits.

See the unit test case in the patch for details.
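The hazard can be sketched as follows (hypothetical class and method names, not the actual MemStore code): if a retried flush assigns a new sequenceid to the same snapshot data, edits between the two ids are silently dropped on log replay. One safe behavior is to pin the sequenceid to the snapshot when it is first taken, and reuse it until the snapshot is actually cleared.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the snapshot's sequence id is fixed when the snapshot
// is first taken, so a retried flush of the SAME snapshot reuses the SAME id
// instead of the (later) current sequence id.
public class SnapshotIdDemo {
  private final List<String> snapshot = new ArrayList<>();
  private long snapshotSeqId = -1;

  // Take (or reuse) a snapshot; returns the sequence id the flush must use.
  public long snapshot(List<String> memstore, long currentSeqId) {
    if (snapshot.isEmpty()) {
      snapshot.addAll(memstore);  // first attempt: capture the memstore
      memstore.clear();
      snapshotSeqId = currentSeqId;
    }
    // A retried flush after a failure reuses the original id, never
    // currentSeqId, so replay boundaries stay consistent.
    return snapshotSeqId;
  }

  public void clearSnapshot() {
    snapshot.clear();
    snapshotSeqId = -1;
  }
}
```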



[jira] [Updated] (HBASE-7670) Synchronized operation in CatalogTracker would block handling ZK Event for long time

2013-01-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7670:


Attachment: HBASE-7670.patch

 Synchronized operation in CatalogTracker would block handling ZK Event for 
 long time
 

 Key: HBASE-7670
 URL: https://issues.apache.org/jira/browse/HBASE-7670
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7670.patch





[jira] [Updated] (HBASE-7671) Flushing memstore again after last failure could cause data loss

2013-01-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7671:


Attachment: HBASE-7671.patch

 Flushing memstore again after last failure could cause data loss
 

 Key: HBASE-7671
 URL: https://issues.apache.org/jira/browse/HBASE-7671
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7671.patch





[jira] [Created] (HBASE-7672) Merging compaction requests in the queue for same store

2013-01-25 Thread chunhui shen (JIRA)
chunhui shen created HBASE-7672:
---

 Summary: Merging compaction requests in the queue for same store
 Key: HBASE-7672
 URL: https://issues.apache.org/jira/browse/HBASE-7672
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0


Under high write pressure, we can find many compaction requests for the same 
store in the compaction queue.

I think we could merge compaction requests for the same store to greatly 
increase compaction efficiency. That is effectively what 0.90 did, because the 
selection of files to compact happened only when the compaction was executed.
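The merge idea can be sketched like this (hypothetical classes, not the actual patch): keep at most one queued request per store; when a duplicate arrives, keep the more urgent of the two priorities (assuming, as in HBase's store priority, that a lower value means more urgent) and defer file selection to execution time.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of merging queued compaction requests per store.
public class CompactionQueueDemo {
  public static class Request {
    public final String store;
    public final int priority; // assumption: lower value = more urgent
    public Request(String store, int priority) {
      this.store = store;
      this.priority = priority;
    }
  }

  // One slot per store, in arrival order.
  private final Map<String, Request> queued = new LinkedHashMap<>();

  // Returns true if the request was merged into an existing one.
  public boolean offer(Request r) {
    Request prev = queued.get(r.store);
    if (prev == null) {
      queued.put(r.store, r);
      return false;
    }
    // Merge: keep the more urgent (numerically smaller) priority.
    if (r.priority < prev.priority) {
      queued.put(r.store, r);
    }
    return true;
  }

  public int size() {
    return queued.size();
  }
}
```

Applied to the queue dump later in this thread, the two requests for store c1 would collapse into one entry.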



[jira] [Updated] (HBASE-7672) Merging compaction requests in the queue for same store

2013-01-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7672:


Attachment: HBASE-7672.patch

 Merging compaction requests in the queue for same store
 ---

 Key: HBASE-7672
 URL: https://issues.apache.org/jira/browse/HBASE-7672
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7672.patch





[jira] [Commented] (HBASE-7671) Flushing memstore again after last failure could cause data loss

2013-01-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562535#comment-13562535
 ] 

ramkrishna.s.vasudevan commented on HBASE-7671:
---

Nice one :)

 Flushing memstore again after last failure could cause data loss
 

 Key: HBASE-7671
 URL: https://issues.apache.org/jira/browse/HBASE-7671
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7671.patch





[jira] [Commented] (HBASE-7651) RegionServerSnapshotManager fails with CancellationException if previous snapshot fails in per region task

2013-01-25 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562539#comment-13562539
 ] 

Jonathan Hsieh commented on HBASE-7651:
---

If we want to change it, let's file a follow-on jira to change it. I believe 
it could be helpful.

 RegionServerSnapshotManager fails with CancellationException if previous 
 snapshot fails in per region task
 --

 Key: HBASE-7651
 URL: https://issues.apache.org/jira/browse/HBASE-7651
 Project: HBase
  Issue Type: Sub-task
  Components: snapshots
Affects Versions: hbase-7290
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: hbase-7290

 Attachments: hbase-7651.patch, hbase-7651.v2.patch


 I've reproduced this problem consistently on a 20-node cluster.
 The first run fails to take a snapshot on one node (jon-snapshots-2 in this 
 case) due to a NotServingRegionException (this is acceptable)
 {code}
 2013-01-23 13:32:48,631 DEBUG 
 org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher:  accepting 
 received exception
 org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via 
 jon-snapshots-2.ent.cloudera.com,22101,1358976524369:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable:
  org.apache.hadoop.hbase.NotServingRegionException: 
 TestTable,0002493652,1358976652443.b858147ad87a7812ac9a73dd8fef36ad. is 
 closing
 at 
 org.apache.hadoop.hbase.errorhandling.ForeignException.deserialize(ForeignException.java:184)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureCoordinatorRpcs.abort(ZKProcedureCoordinatorRpcs.java:240)
 at 
 org.apache.hadoop.hbase.procedure.ZKProcedureCoordinatorRpcs$1.nodeCreated(ZKProcedureCoordinatorRpcs.java:182)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:294)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:519)
 at 
 org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:495)
 Caused by: 
 org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable: 
 org.apache.hadoop.hbase.NotServingRegionException: 
 TestTable,0002493652,1358976652443.b858147ad87a7812ac9a73dd8fef36ad. is 
 closing
 at 
 org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool.waitForOutstandingTasks(RegionServerSnapshotManager.java:343)
 at 
 org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.flushSnapshot(FlushSnapshotSubprocedure.java:107)
 at 
 org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.insideBarrier(FlushSnapshotSubprocedure.java:123)
 at 
 org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:181)
 at 
 org.apache.hadoop.hbase.procedure.Subprocedure.call(Subprocedure.java:52)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 2013-01-23 13:32:48,631 DEBUG 
 org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher:  Recieved 
 error, notifying listeners...
 2013-01-23 13:32:48,730 ERROR org.apache.hadoop.hbase.procedure.Procedure: 
 Procedure 'pe-6' execution failed!
 org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable via 
 jon-snapshots-2.ent.cloudera.com,22101,1358976524369:org.apache.hadoop.hbase.errorhandling.ForeignException$ProxyThrowable:
  org.apache.hadoop.hbase.NotServingRegionException: 
 TestTable,0002493652,1358976652443.b858147ad87a7812ac9a73dd8fef36ad. is 
 closing
 at 
 org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:84)
 at 
 org.apache.hadoop.hbase.procedure.Procedure.waitForLatch(Procedure.java:357)
 at 
 org.apache.hadoop.hbase.procedure.Procedure.call(Procedure.java:203)
 at org.apache.hadoop.hbase.procedure.Procedure.call(Procedure.java:68)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: 
 

[jira] [Updated] (HBASE-7672) Merging compaction requests in the queue for same store

2013-01-25 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-7672:


Description: 
Under high write pressure, we can find many compaction requests for the same 
store in the compaction queue.

I think we could merge compaction requests for the same store to greatly 
increase compaction efficiency. That is effectively what 0.90 did, because the 
selection of files to compact happened only when the compaction was executed.

e.g.
{code}
SmallCompation active count:1,Queue:
regionName=abctest,90F9AUIPK4YO47W55WS4R8RSKGDFNRYBNB79COYKHNQD9F62G7,1359104485823.f05568c159940b8a72bd84c988388ad3.,
 storeName=c1, fileCount=4, fileSize=371.1m (212.0m, 53.0m, 53.0m, 53.0m), 
priority=15, time=56843340270506608
regionName=abctest,90F9AUIPK4YO47W55WS4R8RSKGDFNRYBNB79COYKHNQD9F62G7,1359104485823.f05568c159940b8a72bd84c988388ad3.,
 storeName=c1, fileCount=4, fileSize=330.4m (171.3m, 53.0m, 53.0m, 53.0m), 
priority=11, time=56843401092063608
{code}
We could merge these two compaction requests

  was:
With a high write presesure, we could found many compaction requests for same 
store in the compaction queue.

I think we could merge compaction requests for same store to increase 
compaction efficiency greately. It is so in 0.90 version because doing 
compacting files selection only when executing compaction


 Merging compaction requests in the queue for same store
 ---

 Key: HBASE-7672
 URL: https://issues.apache.org/jira/browse/HBASE-7672
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7672.patch





[jira] [Updated] (HBASE-7634) Replication handling of changes to peer clusters is inefficient

2013-01-25 Thread Gabriel Reid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabriel Reid updated HBASE-7634:


Attachment: HBASE-7634.v3.patch

Updated patch with the formatting issues fixed, and made 
TestReplicationSinkManager#testReportBadSink_DownToZeroSinks deterministic.

 Replication handling of changes to peer clusters is inefficient
 ---

 Key: HBASE-7634
 URL: https://issues.apache.org/jira/browse/HBASE-7634
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
 Attachments: HBASE-7634.patch, HBASE-7634.v2.patch, 
 HBASE-7634.v3.patch


 The current handling of changes to the region servers in a replication peer 
 cluster is quite inefficient. The list of region servers that are being 
 replicated to is only updated if a large number of issues are encountered 
 while replicating.
 This can cause it to take quite a while to recognize that a number of the 
 regionservers in a peer cluster are no longer available. A potentially bigger 
 problem is that if a replication peer cluster is started with a small number 
 of regionservers, and more region servers are added after replication has 
 started, the additional region servers will never be used for replication 
 (unless there are failures on the in-use regionservers).
 Part of the current issue is that the retry code in 
 ReplicationSource#shipEdits checks a randomly-chosen replication peer 
 regionserver (in ReplicationSource#isSlaveDown) to see if it is up after a 
 replication write has failed on a different randomly-chosen replication peer. 
 If the peer is seen as not down, another randomly-chosen peer is used for 
 writing.
 A second part of the issue is that changes to the list of region servers in a 
 peer cluster are not detected at all, and are only picked up if a certain 
 number of failures have occurred when trying to ship edits.
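The improvement direction can be sketched as follows (hypothetical classes, not the actual patch): refresh the peer's regionserver list on a schedule or from a watcher, rather than only after repeated shipment failures, so regionservers added to or removed from the peer are noticed promptly.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Supplier;

// Hypothetical sketch: the sink list is rebuilt from a fresh listing of the
// peer cluster's regionservers (e.g. a ZK listing), instead of only being
// updated after many replication failures.
public class SinkListDemo {
  private final Supplier<List<String>> peerLister; // stands in for a ZK listing
  private final CopyOnWriteArrayList<String> sinks = new CopyOnWriteArrayList<>();

  public SinkListDemo(Supplier<List<String>> peerLister) {
    this.peerLister = peerLister;
  }

  // Called periodically or from a watcher; replaces the whole sink list.
  public void refresh() {
    List<String> current = peerLister.get();
    sinks.clear();
    sinks.addAll(current);
  }

  public List<String> getSinks() {
    return List.copyOf(sinks);
  }
}
```

With this shape, a peer cluster that grows from a few regionservers to many starts spreading replication load as soon as the next refresh runs.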



[jira] [Commented] (HBASE-5843) Improve HBase MTTR - Mean Time To Recover

2013-01-25 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562555#comment-13562555
 ] 

nkeywal commented on HBASE-5843:


bq. That seems pretty long and should be improved. 
Yep, the default was set for new users who won't understand the impact. You 
should change it if you care about MTTR.

bq. That seems can simplify the process. What do you think? 
It's important to have the feature in the core; if not, the feature is not 
tested and hence does not work, or does not work for long. There is a jira to 
support a standard administration tool in the core (I can't find it, but it 
exists for sure).

 Improve HBase MTTR - Mean Time To Recover
 -

 Key: HBASE-5843
 URL: https://issues.apache.org/jira/browse/HBASE-5843
 Project: HBase
  Issue Type: Umbrella
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: nkeywal

 A part of the approach is described here: 
 https://docs.google.com/document/d/1z03xRoZrIJmg7jsWuyKYl6zNournF_7ZHzdi0qz_B4c/edit
 The ideal target is:
 - failures impact client applications only through an added delay in 
 executing a query, whatever the failure.
 - this delay is always less than 1 second.
 We're not going to achieve that immediately...
 Priority will be given to the most frequent issues.
 Short term:
 - software crash
 - standard administrative tasks as stop/start of a cluster.



[jira] [Commented] (HBASE-6770) Allow scanner setCaching to specify size instead of number of rows

2013-01-25 Thread terry zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562557#comment-13562557
 ] 

terry zhang commented on HBASE-6770:


Hi Karthik Ranganathan, I saw this patch was checked in to the fb branch 
0.89-fb last October. When are we going to check it in to trunk? This is a 
good feature to avoid RS OOM.

 Allow scanner setCaching to specify size instead of number of rows
 --

 Key: HBASE-6770
 URL: https://issues.apache.org/jira/browse/HBASE-6770
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Karthik Ranganathan
Assignee: Chen Jin

 Currently, we have the following APIs to customize the behavior of scans:
 setCaching() - how many rows to cache on the client to speed up scans
 setBatch() - max columns to return per row, to prevent a very large 
 response.
 Ideally, we should be able to specify a memory buffer size because:
 1. that would take care of both of these use cases.
 2. it does not need any knowledge of the size of the rows or cells, as the 
 final thing we are worried about is the available memory.
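The size-based approach can be sketched as follows (hypothetical method, not the actual scanner API): accumulate rows until an estimated byte budget is reached, rather than stopping at a fixed row count, so one knob bounds memory regardless of row size.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of size-based result batching for a scanner.
public class SizeBatchDemo {
  // Drain rows from the iterator until the byte budget is exhausted.
  // The row that crosses the budget is still included, so progress is
  // always made even when a single row exceeds the budget.
  public static List<byte[]> nextBatch(Iterator<byte[]> rows, long maxBytes) {
    List<byte[]> batch = new ArrayList<>();
    long used = 0;
    while (rows.hasNext() && used < maxBytes) {
      byte[] row = rows.next();
      batch.add(row);
      used += row.length;
    }
    return batch;
  }
}
```

With 40-byte rows and a 100-byte budget, each batch carries three rows: the third row crosses the budget and closes the batch.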



[jira] [Commented] (HBASE-7634) Replication handling of changes to peer clusters is inefficient

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562576#comment-13562576
 ] 

Hadoop QA commented on HBASE-7634:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566489/HBASE-7634.v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4178//console

This message is automatically generated.

 Replication handling of changes to peer clusters is inefficient
 ---

 Key: HBASE-7634
 URL: https://issues.apache.org/jira/browse/HBASE-7634
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
 Attachments: HBASE-7634.patch, HBASE-7634.v2.patch, 
 HBASE-7634.v3.patch


 The current handling of changes to the region servers in a replication peer 
 cluster is quite inefficient. The list of region servers that are 
 being replicated to is only updated if there are a large number of issues 
 encountered while replicating.
 This can cause it to take quite a while to recognize that a number of the 
 regionservers in a peer cluster are no longer available. A potentially bigger 
 problem is that if a replication peer cluster is started with a small number 
 of regionservers, and then more region servers are added after replication 
 has started, the additional region servers will never be used for replication 
 (unless there are failures on the in-use regionservers).
 Part of the current issue is that the retry code in 
 ReplicationSource#shipEdits checks a randomly-chosen replication peer 
 regionserver (in ReplicationSource#isSlaveDown) to see if it is up after a 
 replication write has failed on a different randomly-chosen replication peer. 
 If the peer is seen as not down, another randomly-chosen peer is used for 
 writing.
 A second part of the issue is that changes to the list of region servers in a 
 peer cluster are not detected at all, and are only picked up if a certain 
 number of failures have occurred when trying to ship edits.
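The direction this report implies, re-reading the peer cluster's registered region servers before choosing a sink so that newly added servers become eligible, can be sketched roughly as below. SinkChooser and its methods are hypothetical illustrative names, not HBase API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: keep the sink list fresh instead of only rebuilding
// it after many replication failures.
public class SinkChooser {
    private final Random random = new Random();
    private List<String> currentSinks = new ArrayList<>();

    // Called with the peer cluster's currently registered region servers
    // (e.g. the children of its /hbase/rs znode).
    public void refresh(List<String> peerServers) {
        currentSinks = new ArrayList<>(peerServers);
    }

    // Pick a random sink; servers added before the last refresh are
    // immediately eligible.
    public String chooseSink() {
        if (currentSinks.isEmpty()) {
            throw new IllegalStateException("no replication sinks known");
        }
        return currentSinks.get(random.nextInt(currentSinks.size()));
    }
}
```

Refreshing on (re)connection rather than after a failure threshold is what would make a grown peer cluster participate in replication.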



[jira] [Commented] (HBASE-7657) Make ModifyTableHandler synchronous

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562629#comment-13562629
 ] 

Hudson commented on HBASE-7657:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #374 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/374/])
HBASE-7657 Make ModifyTableHandler synchronous (Himanshu Vashishtha) 
(Revision 1438298)

 Result = FAILURE
mbertozzi : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java


 Make ModifyTableHandler synchronous
 ---

 Key: HBASE-7657
 URL: https://issues.apache.org/jira/browse/HBASE-7657
 Project: HBase
  Issue Type: Bug
  Components: Admin, Client
Affects Versions: 0.96.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBASE-7657.patch


 This is along the lines of other admin operations such as modifyColumnFamily 
 and addColumnFamily: make it a synchronous op.



[jira] [Commented] (HBASE-7665) retry time sequence usage in HConnectionManager has off-by-one error

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562631#comment-13562631
 ] 

Hudson commented on HBASE-7665:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #374 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/374/])
HBASE-7665 retry time sequence usage in HConnectionManager has off-by-one 
error (Sergey) (Revision 1438332)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java


 retry time sequence usage in HConnectionManager has off-by-one error 
 -

 Key: HBASE-7665
 URL: https://issues.apache.org/jira/browse/HBASE-7665
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-7665-v0.patch


 Array of retries starts with element #0, but we never pass 0 into 
 ConnectionUtils::getPauseTime - curNumRetries is 1 or higher.
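The off-by-one can be illustrated with a small sketch. The backoff table and method mirror the style of HConstants.RETRY_BACKOFF and ConnectionUtils.getPauseTime, but the exact values here are illustrative assumptions:

```java
// Illustrative sketch of the off-by-one: the backoff table is indexed from 0,
// so a caller that counts retries from 1 must pass (curNumRetries - 1) or the
// first multiplier is never used.
public class PauseSketch {
    // Illustrative multipliers, in the spirit of HConstants.RETRY_BACKOFF.
    static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40};

    static long getPauseTime(long pause, int tries) {
        // Clamp so attempts beyond the table reuse the last multiplier.
        int index = Math.min(tries, RETRY_BACKOFF.length - 1);
        return pause * RETRY_BACKOFF[index];
    }
}
```

With curNumRetries starting at 1, calling getPauseTime(pause, curNumRetries - 1) makes the first retry use multiplier 1 instead of skipping straight to the second entry.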



[jira] [Commented] (HBASE-7382) Port ZK.multi support from HBASE-6775 to 0.96

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562630#comment-13562630
 ] 

Hudson commented on HBASE-7382:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #374 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/374/])
HBASE-7382 Port ZK.multi support from HBASE-6775 to 0.96 (Gregory, Himanshu 
and Ted) (Revision 1438317)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* /hbase/trunk/hbase-server/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKMulti.java


 Port ZK.multi support from HBASE-6775 to 0.96
 -

 Key: HBASE-7382
 URL: https://issues.apache.org/jira/browse/HBASE-7382
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Reporter: Gregory Chanan
Assignee: Himanshu Vashishtha
Priority: Critical
 Fix For: 0.96.0

 Attachments: 7382-trunk-v3.txt, 7382-trunk-v4.txt, 7382-trunk-v5.txt, 
 7382-trunk-v6.txt, HBASE-7382-trunk.patch


 HBASE-6775 adds support for ZK.multi to ZKUtil and uses it for the 0.92/0.94 
 compatibility fix implemented in HBASE-6710.
 ZK.multi support is most likely useful in 0.96, but since HBASE-6710 is not 
 relevant for 0.96, perhaps we should find another use case first before we 
 port.



[jira] [Commented] (HBASE-6775) Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562632#comment-13562632
 ] 

Hudson commented on HBASE-6775:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #374 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/374/])
HBASE-7382 Port ZK.multi support from HBASE-6775 to 0.96 (Gregory, Himanshu 
and Ted) (Revision 1438317)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* /hbase/trunk/hbase-server/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKMulti.java


 Use ZK.multi when available for HBASE-6710 0.92/0.94 compatibility fix
 --

 Key: HBASE-6775
 URL: https://issues.apache.org/jira/browse/HBASE-6775
 Project: HBase
  Issue Type: Improvement
  Components: Zookeeper
Affects Versions: 0.94.2
Reporter: Gregory Chanan
Assignee: Gregory Chanan
Priority: Minor
 Fix For: 0.94.4

 Attachments: HBASE-6775-v2.patch


 This issue introduces the ability for the HMaster to make use of ZooKeeper's 
 multi-update functionality.  This allows certain ZooKeeper operations to 
 complete more quickly and prevents some issues with rare ZooKeeper failure 
 scenarios (see the release note of HBASE-6710 for an example).  This feature 
 is off by default; to enable set hbase.zookeeper.useMulti to true in the 
 configuration of the HMaster.
 IMPORTANT: hbase.zookeeper.useMulti should only be set to true if all 
 ZooKeeper servers in the cluster are on version 3.4+ and will not be 
 downgraded.  ZooKeeper versions before 3.4 do not support multi-update and 
 will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).



[jira] [Created] (HBASE-7673) Incorrect error logging when a replication peer is removed

2013-01-25 Thread Gabriel Reid (JIRA)
Gabriel Reid created HBASE-7673:
---

 Summary: Incorrect error logging when a replication peer is removed
 Key: HBASE-7673
 URL: https://issues.apache.org/jira/browse/HBASE-7673
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
Priority: Minor
 Attachments: HBASE-7673.patch

When a replication peer is removed (and all goes well), the following error is 
still logged:
{noformat}[ERROR][14:14:21,504][ventThread] 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager - The 
queue we wanted to close is missing peer-state{noformat}

This is due to a watch being set on the peer-state node under the replication 
peer node in ZooKeeper, and the ReplicationSource#PeersWatcher doesn't 
correctly discern between nodes when it gets nodeDeleted messages.
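The fix amounts to a path check before reacting to nodeDeleted. A hedged sketch of such a check (the helper name is hypothetical, not the actual HBase code) might be:

```java
// Hypothetical helper: only treat a deletion as a removed peer when the
// deleted path is a direct child of the peers znode, so deletions of deeper
// nodes such as .../peers/1/peer-state are ignored.
public final class PeerPathCheck {
    static boolean isPeerPath(String peersZnode, String path) {
        if (!path.startsWith(peersZnode + "/")) {
            return false;
        }
        String remainder = path.substring(peersZnode.length() + 1);
        // A direct child is non-empty and contains no further slashes.
        return !remainder.isEmpty() && !remainder.contains("/");
    }
}
```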



[jira] [Updated] (HBASE-7673) Incorrect error logging when a replication peer is removed

2013-01-25 Thread Gabriel Reid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabriel Reid updated HBASE-7673:


Attachment: HBASE-7673.patch

Patch to make the PeersWatcher aware of what a valid path is to a peer node in 
ZK, and ignore nodeDeleted messages for other nodes.

 Incorrect error logging when a replication peer is removed
 --

 Key: HBASE-7673
 URL: https://issues.apache.org/jira/browse/HBASE-7673
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
Priority: Minor
 Attachments: HBASE-7673.patch


 When a replication peer is removed (and all goes well), the following error 
 is still logged:
 {noformat}[ERROR][14:14:21,504][ventThread] 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager - 
 The queue we wanted to close is missing peer-state{noformat}
 This is due to a watch being set on the peer-state node under the replication 
 peer node in ZooKeeper, and the ReplicationSource#PeersWatcher doesn't 
 correctly discern between nodes when it gets nodeDeleted messages.



[jira] [Updated] (HBASE-7673) Incorrect error logging when a replication peer is removed

2013-01-25 Thread Gabriel Reid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabriel Reid updated HBASE-7673:


Status: Patch Available  (was: Open)

 Incorrect error logging when a replication peer is removed
 --

 Key: HBASE-7673
 URL: https://issues.apache.org/jira/browse/HBASE-7673
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
Priority: Minor
 Attachments: HBASE-7673.patch


 When a replication peer is removed (and all goes well), the following error 
 is still logged:
 {noformat}[ERROR][14:14:21,504][ventThread] 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager - 
 The queue we wanted to close is missing peer-state{noformat}
 This is due to a watch being set on the peer-state node under the replication 
 peer node in ZooKeeper, and the ReplicationSource#PeersWatcher doesn't 
 correctly discern between nodes when it gets nodeDeleted messages.



[jira] [Commented] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562663#comment-13562663
 ] 

Jean-Marc Spaggiari commented on HBASE-7503:


Hi Ted,

Detecting the complexity and going without the sorting option would add 
almost the same effort as simply always sorting.

Sorting the regions will take R x log(R).
Sorting the gets will take G x log(G).
Looping through the sorted lists will take max(G, R).

So basically, at the end, the complexity will be close to n log(n).

Not doing the sort will always give us something like n² because of the nested 
loop... 

I will change the implementation.

In Sergey's example above, even if we are setting the gets to null to skip them 
faster, we are still doing the entire loop2. It's a bit more optimal than what 
I did, but we still have n².

Let me think about that and I will come back with a proposal.
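The sort-then-merge idea discussed above can be sketched as follows. Region handling is simplified to a sorted list of start keys, and all names are illustrative rather than the patch's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: sort the get row keys once, then walk the sorted rows and the
// (already sorted) region start keys together in a single pass. Cost is
// O(G log G + R) rather than the nested-loop O(G x R).
public class SortedAssign {
    // Returns, for each row in sorted order, the index of the region whose
    // start key is the greatest key <= that row. startKeys must be sorted
    // and begin with the empty start key of the first region.
    static int[] assignRegions(List<String> rows, List<String> startKeys) {
        List<String> sorted = new ArrayList<>(rows);
        Collections.sort(sorted);
        int[] regionOf = new int[sorted.size()];
        int r = 0;
        for (int i = 0; i < sorted.size(); i++) {
            while (r + 1 < startKeys.size()
                    && startKeys.get(r + 1).compareTo(sorted.get(i)) <= 0) {
                r++;  // region pointer only ever advances
            }
            regionOf[i] = r;
        }
        return regionOf;
    }
}
```

Because both lists are walked monotonically, the merge itself is max(G, R) comparisons, matching the estimate in the comment.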

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, 
 HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, 
 HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, 
 HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List&lt;Get&gt; gets) throws IOException method 
 implemented in HTableInterface.



[jira] [Commented] (HBASE-7221) RowKey utility class for rowkey construction

2013-01-25 Thread Doug Meil (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562666#comment-13562666
 ] 

Doug Meil commented on HBASE-7221:
--

Hi folks.  The issue with the stateless approach is that nobody has a proposal 
that will work with both writing *and* reading the key.  And given the high 
intelligence of the folks already commenting on this ticket, the probability 
of such an approach existing is slim, because if it were obvious I think 
somebody would have thought of it by now.  I don't see how this can be 
anything but a stateful object.

 RowKey utility class for rowkey construction
 

 Key: HBASE-7221
 URL: https://issues.apache.org/jira/browse/HBASE-7221
 Project: HBase
  Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
 Attachments: HBASE_7221.patch, hbase-common_hbase_7221_2.patch, 
 hbase-common_hbase_7221_v3.patch


 A common question in the dist-lists is how to construct rowkeys, particularly 
 composite keys.  Put/Get/Scan specifies byte[] as the rowkey, but it's up to 
 you to sensibly populate that byte-array, and that's where things tend to go 
 off the rails.
 The intent of this RowKey utility class isn't meant to add functionality into 
 Put/Get/Scan, but rather make it simpler for folks to construct said arrays.  
 Example:
 {code}
RowKey key = RowKey.create(RowKey.SIZEOF_MD5_HASH + RowKey.SIZEOF_LONG);
key.addHash(a);
key.add(b);
byte bytes[] = key.getBytes();
 {code} 



[jira] [Commented] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562674#comment-13562674
 ] 

Ted Yu commented on HBASE-7503:
---

You don't need to sort the regions in the table. The start and end keys are 
sorted already. 

What does n represent above ?

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, 
 HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, 
 HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, 
 HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List&lt;Get&gt; gets) throws IOException method 
 implemented in HTableInterface.



[jira] [Updated] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2013-01-25 Thread Gabriel Reid (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabriel Reid updated HBASE-7122:


Attachment: HBASE-7122.v2.patch

We've just started having some issues with this bug as well -- trunk patch is 
attached. 

It's been tested against the replication unit tests, as well as a small-scale 
cluster test to ensure it does what it's supposed to.

 Proper warning message when opening a log file with no entries (idle cluster)
 -

 Key: HBASE-7122
 URL: https://issues.apache.org/jira/browse/HBASE-7122
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Affects Versions: 0.94.2
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBase-7122.patch, HBASE-7122.v2.patch


 In case the cluster is idle and the log has rolled (offset to 0), 
 replicationSource tries to open the log and gets an EOF exception. This gets 
 printed after every 10 sec until an entry is inserted in it.
 {code}
 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(487)) - Opening log for replication 
 c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(543)) - 1 Got: 
 java.io.EOFException
   at java.io.DataInputStream.readFully(DataInputStream.java:180)
   at java.io.DataInputStream.readFully(DataInputStream.java:152)
   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1475)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1470)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:55)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
   at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
   at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(547)) - Waited too long for this file, 
 considering dumping
 2012-11-07 15:47:40,927 DEBUG regionserver.ReplicationSource 
 (ReplicationSource.java:sleepForRetries(562)) - Unable to open a reader, 
 sleeping 1000 times 10
 {code}
 We should reduce the log spewing in this case (or print a more informative 
 message, based on the offset).



[jira] [Commented] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562680#comment-13562680
 ] 

Jean-Marc Spaggiari commented on HBASE-7503:


I was wondering about the regions :) I was looking at the code while your comment 
arrived ;) Ok. One less.

n represents any value. It's just giving an idea of the overall complexity. 
Here, at the end, the call will cost something like O(n log n). It's given to 
compare against O(n²) or O(log n). It's not a specific value.

So if the regions are already sorted, then the overall complexity will be 
G log(G), which we can also write as O(n log n), where n here represents the 
number of gets in the request.

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, 
 HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, 
 HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, 
 HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List&lt;Get&gt; gets) throws IOException method 
 implemented in HTableInterface.



[jira] [Updated] (HBASE-2611) Handle RS that fails while processing the failure of another one

2013-01-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-2611:
--

Attachment: 2611-trunk-v3.patch

Patch v3 fixes the javadoc warning:
{code}
+   * @return map of peer cluster to log queues 
+   */
+  public SortedMap&lt;String, SortedSet&lt;String&gt;&gt; 
copyQueuesFromRSUsingMulti(String znode) {
{code}

 Handle RS that fails while processing the failure of another one
 

 Key: HBASE-2611
 URL: https://issues.apache.org/jira/browse/HBASE-2611
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Jean-Daniel Cryans
Assignee: Himanshu Vashishtha
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 2611-trunk-v3.patch, 2611-v3.patch, 
 HBASE-2611-trunk-v2.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch


 HBASE-2223 doesn't manage region servers that fail while doing the transfer 
 of HLogs queues from other region servers that failed. Devise a reliable way 
 to do it.



[jira] [Commented] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562706#comment-13562706
 ] 

Ted Yu commented on HBASE-7669:
---

I noticed this yesterday too.

+1 on patch.

 ROOT region wouldn't  be handled by PRI-IPC-Handler
 ---

 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7669.patch


 RPC requests about the ROOT region should be handled by PRI-IPC-Handler, just 
 the same as the META region



[jira] [Commented] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562707#comment-13562707
 ] 

Ted Yu commented on HBASE-7571:
---

Integrated to trunk.

Thanks for the patch, Sergey.

Thanks for the review, Devaraj.

 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.



[jira] [Commented] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562711#comment-13562711
 ] 

Ted Yu commented on HBASE-7669:
---

I think the fix is good for 0.94
In trunk, this part of code is gone - see Stack's patch in HBASE-7533.

 ROOT region wouldn't  be handled by PRI-IPC-Handler
 ---

 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7669.patch


 RPC requests about the ROOT region should be handled by PRI-IPC-Handler, just 
 the same as the META region



[jira] [Commented] (HBASE-7670) Synchronized operation in CatalogTracker would block handling ZK Event for long time

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562718#comment-13562718
 ] 

Ted Yu commented on HBASE-7670:
---

{code}
-  this.metaAvailable.notifyAll();
{code}
We don't need the above anymore ?

In waitForMeta(), I see:
{code}
metaAvailable.wait(waitTime);
{code}

 Synchronized operation in CatalogTracker would block handling ZK Event for 
 long time
 

 Key: HBASE-7670
 URL: https://issues.apache.org/jira/browse/HBASE-7670
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7670.patch


 We found that ZK events were not watched by the master for a long time in our 
 testing. It seems one ZK-Event-Handle thread blocked them.
 Attaching some logs on master
 {code}
 2013-01-16 22:18:55,667 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=RS_ZK_REGION_OPENED, 
 2013-01-16 22:18:56,270 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=RS_ZK_REGION_OPENED, 
 ...
 2013-01-16 23:55:33,259 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: 
 Retrying
 org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
 attempts=100, exceptions:
 at 
 org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:183)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:676)
 at org.apache.hadoop.hbase.catalog.MetaReader.get(MetaReader.java:247)
 at 
 org.apache.hadoop.hbase.catalog.MetaReader.getRegion(MetaReader.java:349)
 at 
 org.apache.hadoop.hbase.catalog.MetaReader.readRegionLocation(MetaReader.java:289)
 at 
 org.apache.hadoop.hbase.catalog.MetaReader.getMetaRegionLocation(MetaReader.java:276)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:424)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:489)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:451)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:289)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 2013-01-16 23:55:33,261 WARN 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to handle region 
 transition for server but server is not online
 {code}
 Between 2013-01-16 22:18:56 and 2013-01-16 23:55:33, there are no logs 
 about handling ZK events.
 {code}
 this.metaNodeTracker = new MetaNodeTracker(zookeeper, throwableAborter) {
   public void nodeDeleted(String path) {
 if (!path.equals(node)) return;
 ct.resetMetaLocation();
   }
 }
 public void resetMetaLocation() {
 LOG.debug("Current cached META location, " + metaLocation +
   ", is not valid, resetting");
 synchronized(this.metaAvailable) {
   this.metaAvailable.set(false);
   this.metaAvailable.notifyAll();
 }
   }
 private AdminProtocol getMetaServerConnection(){
 synchronized (metaAvailable){
 ...
 ServerName newLocation = MetaReader.getMetaRegionLocation(this);
 ...
 }
 }
 {code}
 From the above code, we can see that nodeDeleted() waits on 
 synchronized (metaAvailable) until MetaReader.getMetaRegionLocation(this) 
 completes; however, getMetaRegionLocation() can keep retrying for a long time.
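One way to keep the ZK event thread responsive is for the watcher callback to only schedule the reset on a worker thread. The sketch below uses illustrative names and a hypothetical znode path; it is not the actual HBASE-7670 patch, only a demonstration of the pattern:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch, not the actual HBASE-7670 fix: the watcher callback
// only schedules work, so the ZK event thread never parks on the
// metaAvailable monitor while getMetaServerConnection() is stuck retrying.
public class AsyncWatcher {
  private final ExecutorService resetPool = Executors.newSingleThreadExecutor();
  private final Object metaAvailable = new Object();
  final AtomicInteger resets = new AtomicInteger();

  // Runs on the ZK event thread: returns immediately, even if another
  // thread currently holds the metaAvailable monitor.
  public void nodeDeleted(String path) {
    if (!"/hbase/root-region-server".equals(path)) return; // hypothetical znode
    resetPool.execute(this::doReset);
  }

  private void doReset() {
    synchronized (metaAvailable) {
      resets.incrementAndGet();
      metaAvailable.notifyAll(); // wake threads blocked waiting for meta
    }
  }

  // Waits for all scheduled resets to finish (test helper).
  boolean drain() {
    resetPool.shutdown();
    try {
      return resetPool.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      return false;
    }
  }
}
```

With this shape, a long-retrying reader can hold the monitor for minutes without delaying delivery of subsequent ZK events.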

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7673) Incorrect error logging when a replication peer is removed

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562723#comment-13562723
 ] 

Hadoop QA commented on HBASE-7673:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566506/HBASE-7673.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.example.TestBulkDeleteProtocol

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4179//console

This message is automatically generated.

 Incorrect error logging when a replication peer is removed
 --

 Key: HBASE-7673
 URL: https://issues.apache.org/jira/browse/HBASE-7673
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
Priority: Minor
 Attachments: HBASE-7673.patch


 When a replication peer is removed (and all goes well), the following error 
 is still logged:
 {noformat}[ERROR][14:14:21,504][ventThread] 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager - 
 The queue we wanted to close is missing peer-state{noformat}
 This is due to a watch being set on the peer-state node under the replication 
 peer node in ZooKeeper, and the ReplicationSource#PeersWatcher doesn't 
 correctly discern between nodes when it gets nodeDeleted messages.
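The distinction the watcher needs to make can be sketched as a simple path check. The znode layout and helper below are hypothetical, only meant to illustrate discriminating the peer node's own deletion from a child node (such as peer-state) being deleted:

```java
// Hypothetical sketch of the kind of check involved: a nodeDeleted event
// fires for both the peer znode itself and its children (e.g. peer-state);
// only the peer znode's own deletion should trigger queue-close handling.
public class PeerWatcherCheck {
  static final String PEERS_ROOT = "/hbase/replication/peers"; // illustrative layout

  // True only when the deleted path is a direct child of the peers root,
  // i.e. the peer node itself, not peer-state or another grandchild.
  static boolean isPeerNode(String deletedPath) {
    if (!deletedPath.startsWith(PEERS_ROOT + "/")) return false;
    String rest = deletedPath.substring(PEERS_ROOT.length() + 1);
    return !rest.isEmpty() && !rest.contains("/");
  }
}
```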

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562738#comment-13562738
 ] 

Hadoop QA commented on HBASE-7122:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566508/HBASE-7122.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.wal.TestHLog

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4180//console


 Proper warning message when opening a log file with no entries (idle cluster)
 -

 Key: HBASE-7122
 URL: https://issues.apache.org/jira/browse/HBASE-7122
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Affects Versions: 0.94.2
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBase-7122.patch, HBASE-7122.v2.patch


 In case the cluster is idle and the log has rolled (offset to 0), 
 replicationSource tries to open the log and gets an EOF exception. This gets 
 printed every 10 seconds until an entry is inserted into the log.
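A hedged sketch of one possible mitigation (the helper names are hypothetical, not the committed fix): detect the freshly rolled, zero-length log case up front and downgrade the message, instead of warning with a full stack trace on every retry:

```java
// Hypothetical sketch: before logging an EOFException at WARN with a stack
// trace, check whether the log is simply empty -- freshly rolled on an idle
// cluster -- and downgrade the message to DEBUG.
public class EmptyLogCheck {
  // An EOF while reading the header of a zero-length file is expected,
  // not an error.
  static boolean isLikelyEmptyLog(boolean exists, long length) {
    return exists && length == 0;
  }

  static String classifyOpenFailure(boolean exists, long length) {
    return isLikelyEmptyLog(exists, length)
        ? "DEBUG: log has no entries yet, will retry"
        : "WARN: unexpected EOF while opening log";
  }
}
```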
 {code}
 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(487)) - Opening log for replication 
 c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(543)) - 1 Got: 
 java.io.EOFException
   at java.io.DataInputStream.readFully(DataInputStream.java:180)
   at java.io.DataInputStream.readFully(DataInputStream.java:152)
   at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1508)
   at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
   at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
   at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
   at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
   at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
 

[jira] [Commented] (HBASE-7673) Incorrect error logging when a replication peer is removed

2013-01-25 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562744#comment-13562744
 ] 

Gabriel Reid commented on HBASE-7673:
-

Hmm, it looks like the failed test (in 
org.apache.hadoop.hbase.coprocessor.example.TestBulkDeleteProtocol) is an 
unrelated random test failure.

 Incorrect error logging when a replication peer is removed
 --

 Key: HBASE-7673
 URL: https://issues.apache.org/jira/browse/HBASE-7673
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
Priority: Minor
 Attachments: HBASE-7673.patch


 When a replication peer is removed (and all goes well), the following error 
 is still logged:
 {noformat}[ERROR][14:14:21,504][ventThread] 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager - 
 The queue we wanted to close is missing peer-state{noformat}
 This is due to a watch being set on the peer-state node under the replication 
 peer node in ZooKeeper, and the ReplicationSource#PeersWatcher doesn't 
 correctly discern between nodes when it gets nodeDeleted messages.



[jira] [Commented] (HBASE-2611) Handle RS that fails while processing the failure of another one

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562753#comment-13562753
 ] 

Hadoop QA commented on HBASE-2611:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566510/2611-trunk-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4181//console


 Handle RS that fails while processing the failure of another one
 

 Key: HBASE-2611
 URL: https://issues.apache.org/jira/browse/HBASE-2611
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Jean-Daniel Cryans
Assignee: Himanshu Vashishtha
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 2611-trunk-v3.patch, 2611-v3.patch, 
 HBASE-2611-trunk-v2.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch


 HBASE-2223 doesn't manage region servers that fail while doing the transfer 
 of HLogs queues from other region servers that failed. Devise a reliable way 
 to do it.



[jira] [Commented] (HBASE-2611) Handle RS that fails while processing the failure of another one

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562755#comment-13562755
 ] 

Ted Yu commented on HBASE-2611:
---

Will integrate patch v3 later today if there are no further review comments.

 Handle RS that fails while processing the failure of another one
 

 Key: HBASE-2611
 URL: https://issues.apache.org/jira/browse/HBASE-2611
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Jean-Daniel Cryans
Assignee: Himanshu Vashishtha
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 2611-trunk-v3.patch, 2611-v3.patch, 
 HBASE-2611-trunk-v2.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch


 HBASE-2223 doesn't manage region servers that fail while doing the transfer 
 of HLogs queues from other region servers that failed. Devise a reliable way 
 to do it.



[jira] [Commented] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562774#comment-13562774
 ] 

Hudson commented on HBASE-7571:
---

Integrated in HBase-TRUNK #3795 (See 
[https://builds.apache.org/job/HBase-TRUNK/3795/])
HBASE-7571 add the notion of per-table or per-column family configuration 
(Sergey) (Revision 1438527)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/hbase.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* /hbase/trunk/hbase-server/src/main/ruby/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java


 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.



[jira] [Commented] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562806#comment-13562806
 ] 

stack commented on HBASE-7669:
--

Woah... I tripped over this yesterday too messing w/ hbase-7533.

+1 on patch and I'd think it belongs in 0.94 too.  Good one [~zjushch]

 ROOT region wouldn't  be handled by PRI-IPC-Handler
 ---

 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7669.patch


 RPC requests about the ROOT region should be handled by PRI-IPC-Handler, just 
 the same as the META region
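The intended behavior can be illustrated with a minimal priority check. The method below is a hypothetical sketch of what a QoS function should match, not the actual patch; the bug was that only catalog requests against .META. were routed to the priority handlers, while -ROOT- requests fell through to the normal pool:

```java
// Hypothetical sketch of the check a QoS function performs: requests
// against either catalog region should get the high-priority handler pool.
public class PriorityCheck {
  static boolean isHighPriority(String regionName) {
    return regionName.startsWith("-ROOT-") || regionName.startsWith(".META.");
  }
}
```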



[jira] [Commented] (HBASE-7673) Incorrect error logging when a replication peer is removed

2013-01-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562805#comment-13562805
 ] 

stack commented on HBASE-7673:
--

+1

[~jdcryans] Ok by you sir?

 Incorrect error logging when a replication peer is removed
 --

 Key: HBASE-7673
 URL: https://issues.apache.org/jira/browse/HBASE-7673
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 0.96.0
Reporter: Gabriel Reid
Priority: Minor
 Attachments: HBASE-7673.patch


 When a replication peer is removed (and all goes well), the following error 
 is still logged:
 {noformat}[ERROR][14:14:21,504][ventThread] 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager - 
 The queue we wanted to close is missing peer-state{noformat}
 This is due to a watch being set on the peer-state node under the replication 
 peer node in ZooKeeper, and the ReplicationSource#PeersWatcher doesn't 
 correctly discern between nodes when it gets nodeDeleted messages.



[jira] [Updated] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7669:
-

Release Note: Make it so -ROOT- related operations are treated as high 
priority by QoS, just like .META. ops.

 ROOT region wouldn't  be handled by PRI-IPC-Handler
 ---

 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7669.patch


 RPC requests about the ROOT region should be handled by PRI-IPC-Handler, just 
 the same as the META region



[jira] [Commented] (HBASE-7669) ROOT region wouldn't be handled by PRI-IPC-Handler

2013-01-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562809#comment-13562809
 ] 

ramkrishna.s.vasudevan commented on HBASE-7669:
---

+1 on patch.

 ROOT region wouldn't  be handled by PRI-IPC-Handler
 ---

 Key: HBASE-7669
 URL: https://issues.apache.org/jira/browse/HBASE-7669
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7669.patch


 RPC requests about the ROOT region should be handled by PRI-IPC-Handler, just 
 the same as the META region



[jira] [Commented] (HBASE-6770) Allow scanner setCaching to specify size instead of number of rows

2013-01-25 Thread Karthik Ranganathan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562811#comment-13562811
 ] 

Karthik Ranganathan commented on HBASE-6770:


Hey Terry, we're not working actively on the trunk port... 
[~saint@gmail.com] would be able to tell you if someone is. If you are 
interested in trying to port the patch, I can definitely help out with reviews.

 Allow scanner setCaching to specify size instead of number of rows
 --

 Key: HBASE-6770
 URL: https://issues.apache.org/jira/browse/HBASE-6770
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Karthik Ranganathan
Assignee: Chen Jin

 Currently, we have the following APIs to customize the behavior of scans:
 setCaching() - how many rows to cache on the client to speed up scans
 setBatch() - max columns to return per row, to prevent a very large 
 response.
 Ideally, we should be able to specify a memory buffer size because:
 1. that would take care of both of these use cases.
 2. it does not need any knowledge of the size of the rows or cells, as the 
 final thing we are worried about is the available memory.
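A client-side accounting loop for such a size-based limit could look like the sketch below. The API is hypothetical (the JIRA proposes the idea but does not specify an interface); rows are represented as raw byte arrays for simplicity:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of size-bounded batching: accumulate rows until their
// summed byte size reaches a buffer budget, instead of stopping after a
// fixed row count as setCaching() does today.
public class SizeBoundedBatcher {
  // Returns one batch whose total size is the first to reach maxBytes;
  // always includes at least one row so scanning can make progress.
  static List<byte[]> nextBatch(Iterator<byte[]> rows, long maxBytes) {
    List<byte[]> batch = new ArrayList<>();
    long used = 0;
    while (rows.hasNext() && (batch.isEmpty() || used < maxBytes)) {
      byte[] row = rows.next();
      batch.add(row);
      used += row.length;
    }
    return batch;
  }
}
```

This needs no advance knowledge of row or cell sizes, which is exactly the advantage the description claims for a memory-based limit.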



[jira] [Commented] (HBASE-7478) Create a multi-threaded responder

2013-01-25 Thread Karthik Ranganathan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562817#comment-13562817
 ] 

Karthik Ranganathan commented on HBASE-7478:


Interesting... I thought the processResponse(..., false) does not write to the 
channel when there are a lot of writes, only the processResponse(..., true) 
variant does. So in effect we are only single threaded when we are pumping out 
a lot of info using multiple connections.

 Create a multi-threaded responder
 -

 Key: HBASE-7478
 URL: https://issues.apache.org/jira/browse/HBASE-7478
 Project: HBase
  Issue Type: Sub-task
Reporter: Karthik Ranganathan

 Currently, we have multi-threaded readers and handlers, but a single threaded 
 responder which is a bottleneck.
 ipc.server.reader.count  : number of reader threads to read data off the wire
 ipc.server.handler.count : number of handler threads that process the request
 We need to have the ability to specify a ipc.server.responder.count to be 
 able to specify the number of responder threads.
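One way an ipc.server.responder.count could be applied is sketched below. The sharding scheme is an assumption, not part of the ticket: pinning each connection to one of N responder threads by its id preserves per-connection response ordering while letting writes across connections proceed in parallel.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicIntegerArray;

// Hypothetical sketch of a multi-threaded responder: one single-threaded
// executor per shard, connections pinned to shards by id.
public class ShardedResponder {
  private final ExecutorService[] responders;
  final AtomicIntegerArray handled; // per-shard response count (for tests)

  ShardedResponder(int responderCount) { // e.g. ipc.server.responder.count
    responders = new ExecutorService[responderCount];
    handled = new AtomicIntegerArray(responderCount);
    for (int i = 0; i < responderCount; i++) {
      responders[i] = Executors.newSingleThreadExecutor();
    }
  }

  // Same connection always maps to the same responder thread, preserving
  // the order of its responses.
  void respond(int connectionId, Runnable writeResponse) {
    int shard = Math.floorMod(connectionId, responders.length);
    responders[shard].execute(() -> {
      handled.incrementAndGet(shard);
      writeResponse.run();
    });
  }

  boolean shutdown() {
    boolean ok = true;
    for (ExecutorService e : responders) {
      e.shutdown();
      try {
        ok &= e.awaitTermination(5, TimeUnit.SECONDS);
      } catch (InterruptedException ie) {
        ok = false;
      }
    }
    return ok;
  }
}
```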



[jira] [Commented] (HBASE-7221) RowKey utility class for rowkey construction

2013-01-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562830#comment-13562830
 ] 

Nick Dimiduk commented on HBASE-7221:
-

The obvious stateless parser would model Java's existing Regex APIs: compile 
your format string and then use a Parser to consume the byte[]. There may be a 
more clever approach but, as you say, no one has volunteered any ideas. To the 
point both you and Lars made, a stateful implementation is likely faster, but 
then I have to assume the presence of wisdom in the C-wielding database 
implementers of old who chose the stateless approach for such things.

I believe as you do that this kind of functionality should be packaged with 
HBase. Until I have opportunity to produce an alternate patch for 
consideration, I'll revoke my -1 from the approach of your implementation. 
However, I maintain the -1 regarding the nits I pointed out.

 RowKey utility class for rowkey construction
 

 Key: HBASE-7221
 URL: https://issues.apache.org/jira/browse/HBASE-7221
 Project: HBase
  Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
 Attachments: HBASE_7221.patch, hbase-common_hbase_7221_2.patch, 
 hbase-common_hbase_7221_v3.patch


 A common question in the dist-lists is how to construct rowkeys, particularly 
 composite keys.  Put/Get/Scan specifies byte[] as the rowkey, but it's up to 
 you to sensibly populate that byte-array, and that's where things tend to go 
 off the rails.
 The intent of this RowKey utility class isn't meant to add functionality into 
 Put/Get/Scan, but rather make it simpler for folks to construct said arrays.  
 Example:
 {code}
RowKey key = RowKey.create(RowKey.SIZEOF_MD5_HASH + RowKey.SIZEOF_LONG);
key.addHash(a);
key.add(b);
byte bytes[] = key.getBytes();
 {code} 



[jira] [Commented] (HBASE-7670) Synchronized operation in CatalogTracker would block handling ZK Event for long time

2013-01-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562842#comment-13562842
 ] 

ramkrishna.s.vasudevan commented on HBASE-7670:
---

Not sure of the implications of this as Ted says.

 Synchronized operation in CatalogTracker would block handling ZK Event for 
 long time
 

 Key: HBASE-7670
 URL: https://issues.apache.org/jira/browse/HBASE-7670
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.96.0

 Attachments: HBASE-7670.patch


 We found that ZK events were not watched by the master for a long time in our 
 testing. It seems one ZK event handling thread was blocked.
 Attaching some logs from the master:
 {code}
 2013-01-16 22:18:55,667 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=RS_ZK_REGION_OPENED, 
 2013-01-16 22:18:56,270 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling 
 transition=RS_ZK_REGION_OPENED, 
 ...
 2013-01-16 23:55:33,259 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: 
 Retrying
 org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
 attempts=100, exceptions:
 at 
 org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:183)
 at org.apache.hadoop.hbase.client.HTable.get(HTable.java:676)
 at org.apache.hadoop.hbase.catalog.MetaReader.get(MetaReader.java:247)
 at 
 org.apache.hadoop.hbase.catalog.MetaReader.getRegion(MetaReader.java:349)
 at 
 org.apache.hadoop.hbase.catalog.MetaReader.readRegionLocation(MetaReader.java:289)
 at 
 org.apache.hadoop.hbase.catalog.MetaReader.getMetaRegionLocation(MetaReader.java:276)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:424)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:489)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:451)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:289)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 2013-01-16 23:55:33,261 WARN 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to handle region 
 transition for server but server is not online
 {code}
 Between 2013-01-16 22:18:56 and 2013-01-16 23:55:33, there are no logs at all 
 about handling ZK Events.
 {code}
 this.metaNodeTracker = new MetaNodeTracker(zookeeper, throwableAborter) {
   public void nodeDeleted(String path) {
     if (!path.equals(node)) return;
     ct.resetMetaLocation();
   }
 };

 public void resetMetaLocation() {
   LOG.debug("Current cached META location, " + metaLocation +
     ", is not valid, resetting");
   synchronized (this.metaAvailable) {
     this.metaAvailable.set(false);
     this.metaAvailable.notifyAll();
   }
 }

 private AdminProtocol getMetaServerConnection() {
   synchronized (metaAvailable) {
     ...
     ServerName newLocation = MetaReader.getMetaRegionLocation(this);
     ...
   }
 }
 {code}
 From the above code, we can see that nodeDeleted() waits on 
 synchronized (metaAvailable) until MetaReader.getMetaRegionLocation(this) 
 completes; however, getMetaRegionLocation() can keep retrying for a long time.



[jira] [Commented] (HBASE-2611) Handle RS that fails while processing the failure of another one

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562847#comment-13562847
 ] 

Ted Yu commented on HBASE-2611:
---

[~jdcryans]:
It would be nice if you could take a look at Himanshu's patch.

 Handle RS that fails while processing the failure of another one
 

 Key: HBASE-2611
 URL: https://issues.apache.org/jira/browse/HBASE-2611
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Reporter: Jean-Daniel Cryans
Assignee: Himanshu Vashishtha
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: 2611-trunk-v3.patch, 2611-v3.patch, 
 HBASE-2611-trunk-v2.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch


 HBASE-2223 doesn't manage region servers that fail while doing the transfer 
 of HLogs queues from other region servers that failed. Devise a reliable way 
 to do it.



[jira] [Commented] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-25 Thread Joey Echeverria (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562861#comment-13562861
 ] 

Joey Echeverria commented on HBASE-7633:


I still think a direct metric would be useful here. The issue where I saw this 
was a slowly dying disk caused a few region servers to slow way, way down. The 
client application was hammering HBase with new threads trying to write with no 
back pressure. The writers eventually exhausted the IPC threads on the region 
servers which blocked incoming reads. This situation would have been a bit more 
graceful if we could have alerted on the IPC threads getting exhausted.
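The metric Joey asks for can be implemented as a simple gauge that each handler bumps for the duration of a call. This is a minimal, illustrative sketch (the class and method names are assumptions, not HBase's metrics API):

```java
import java.util.concurrent.atomic.AtomicInteger;

class HandlerUsageGauge {
    private final AtomicInteger active = new AtomicInteger();
    private final int maxHandlers;

    HandlerUsageGauge(int maxHandlers) {
        this.maxHandlers = maxHandlers;
    }

    // Each RPC handler wraps its call processing in this method,
    // so the counter reflects how many handlers are busy right now.
    public void runCall(Runnable call) {
        active.incrementAndGet();
        try {
            call.run();
        } finally {
            active.decrementAndGet();
        }
    }

    public int activeHandlers() { return active.get(); }

    // Alert condition: all configured handlers are in use.
    public boolean exhausted() { return active.get() >= maxHandlers; }
}
```

An external monitoring system could then alert when `exhausted()` stays true, catching the read-starvation scenario described above.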

 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk

--


[jira] [Commented] (HBASE-7305) ZK based Read/Write locks for table operations

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562879#comment-13562879
 ] 

Ted Yu commented on HBASE-7305:
---

The latest patch needs some rebasing:

TYus-MacBook-Pro:trunk tyu$ frej
-rw-r--r--  1 tyu  staff  4581 Jan 25 09:58 
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java.rej
-rw-r--r--  1 tyu  staff  658 Jan 25 09:58 
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java.rej
-rw-r--r--  1 tyu  staff  2819 Jan 25 09:58 
./hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java.rej

Please attach the rebased patch here so that Hadoop QA can run the test suite.

 ZK based Read/Write locks for table operations
 --

 Key: HBASE-7305
 URL: https://issues.apache.org/jira/browse/HBASE-7305
 Project: HBase
  Issue Type: Bug
  Components: Client, master, Zookeeper
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.96.0

 Attachments: hbase-7305_v0.patch, 
 hbase-7305_v1-based-on-curator.patch, hbase-7305_v2.patch, hbase-7305_v4.patch


 This has started as forward porting of HBASE-5494 and HBASE-5991 from the 
 89-fb branch to trunk, but diverged enough to have its own issue. 
 The idea is to implement a zk based read/write lock per table. Master 
 initiated operations should get the write lock, and region operations (region 
 split, moving, balance?, etc) acquire a shared read lock. 
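An in-process analogue of the proposed scheme can be sketched with `ReentrantReadWriteLock` (illustrative only; the real patch keeps lock state in ZooKeeper znodes so the locks work across processes):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TableLockManagerSketch {
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
            new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String table) {
        return locks.computeIfAbsent(table, t -> new ReentrantReadWriteLock());
    }

    // Master-initiated schema operations take the exclusive write lock.
    public void runSchemaOp(String table, Runnable op) {
        ReentrantReadWriteLock l = lockFor(table);
        l.writeLock().lock();
        try { op.run(); } finally { l.writeLock().unlock(); }
    }

    // Region operations (split, move, ...) share the read lock,
    // so many of them can proceed concurrently.
    public void runRegionOp(String table, Runnable op) {
        ReentrantReadWriteLock l = lockFor(table);
        l.readLock().lock();
        try { op.run(); } finally { l.readLock().unlock(); }
    }
}
```

The ZK version replaces each `ReentrantReadWriteLock` with sequential ephemeral znodes, but the acquire/release discipline is the same.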

--


[jira] [Commented] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition may lead to snapshot data loss

2013-01-25 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562883#comment-13562883
 ] 

Jesse Yates commented on HBASE-7643:


Looks good to me too. Annoying that this isn't covered by the master not doing 
a recursive delete of the directory (which fails if there are things 
underneath)... grr, race conditions. Thanks Matteo!

 HFileArchiver.resolveAndArchive() race condition may lead to snapshot data 
 loss
 ---

 Key: HBASE-7643
 URL: https://issues.apache.org/jira/browse/HBASE-7643
 Project: HBase
  Issue Type: Bug
Affects Versions: hbase-6055, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Blocker
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-7653-p4-v0.patch, HBASE-7653-p4-v1.patch, 
 HBASE-7653-p4-v2.patch, HBASE-7653-p4-v3.patch, HBASE-7653-p4-v4.patch, 
 HBASE-7653-p4-v5.patch, HBASE-7653-p4-v6.patch, HBASE-7653-p4-v7.patch


  * The master has an hfile cleaner thread (that is responsible for cleaning 
 the /hbase/.archive dir)
  ** /hbase/.archive/table/region/family/hfile
  ** if the region/family directory is empty the cleaner removes it
  * The master can archive files (from another thread, e.g. DeleteTableHandler)
  * The region can archive files (from another server/process, e.g. compaction)
 The simplified file archiving code looks like this:
 {code}
 HFileArchiver.resolveAndArchive(...) {
   // ensure that the archive dir exists
   fs.mkdir(archiveDir);
   // move the file to the archiver
   success = fs.rename(originalPath/fileName, archiveDir/fileName)
   // if the rename failed, delete the file without archiving
   if (!success) fs.delete(originalPath/fileName);
 }
 {code}
  Since there's no synchronization between HFileArchiver.resolveAndArchive() 
  and the cleaner run (different process, thread, ...) you can end up in the 
  situation where you are moving something into a directory that doesn't exist.
 {code}
 fs.mkdir(archiveDir);
 // HFileCleaner chore starts at this point
 // and the archiveDirectory that we just ensured to be present gets removed.
 // The rename at this point will fail since the parent directory is missing.
 success = fs.rename(originalPath/fileName, archiveDir/fileName)
 {code}
  The bad thing about deleting the file without archiving is that if you have 
  a snapshot, or a clone table, that relies on the file being present, you're 
  losing data.
  Possible solutions
   * Create a ZooKeeper lock, to notify the master (Hey, I'm archiving 
  something, wait a bit)
   * Add an RS -> Master call to let the master remove files and avoid this 
  kind of situation
   * Avoid removing empty directories from the archive if the table exists or 
  is not disabled
   * Add a try/catch around the fs.rename
 The last one, the easiest one, looks like:
 {code}
  for (int i = 0; i < retries; ++i) {
   // ensure archive directory to be present
   fs.mkdir(archiveDir);
    // <-- possible race -->
   // try to archive file
   success = fs.rename(originalPath/fileName, archiveDir/fileName);
   if (success) break;
 }
 {code}
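The retry option above can be fleshed out as follows. This is a hedged sketch of that last alternative, not the committed patch; `FileOps` is a stand-in interface rather than the real Hadoop `FileSystem` API:

```java
class ArchiveRetry {
    public interface FileOps {
        boolean mkdirs(String dir);
        boolean rename(String src, String dst);
    }

    // Returns true once the file lands in archiveDir. The mkdir+rename
    // pair is retried, so a concurrent cleaner deleting archiveDir
    // between the two calls costs one retry instead of the file.
    public static boolean archive(FileOps fs, String file, String archiveDir, int retries) {
        for (int i = 0; i < retries; ++i) {
            fs.mkdirs(archiveDir);                         // ensure parent exists
            if (fs.rename(file, archiveDir + "/" + name(file))) {
                return true;                               // archived; never deleted
            }
        }
        return false;                                      // give up WITHOUT deleting
    }

    private static String name(String path) {
        return path.substring(path.lastIndexOf('/') + 1);
    }
}
```

Crucially, on failure the file is left in place rather than deleted, so snapshot references stay valid.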

--


[jira] [Commented] (HBASE-7516) Make compaction policy pluggable

2013-01-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562884#comment-13562884
 ] 

Sergey Shelukhin commented on HBASE-7516:
-

Latest patch looks good, except that the metadata setting can now be cleaned 
up, since HBASE-7571 went in.

 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Sub-task
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch, trunk-7516_v2.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--


[jira] [Commented] (HBASE-7516) Make compaction policy pluggable

2013-01-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562888#comment-13562888
 ] 

Sergey Shelukhin commented on HBASE-7516:
-

btw, I don't know if this also needs a committer +1, my +1 is only half +1 :)

 Make compaction policy pluggable
 

 Key: HBASE-7516
 URL: https://issues.apache.org/jira/browse/HBASE-7516
 Project: HBase
  Issue Type: Sub-task
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, 
 HBASE-7516-v2.patch, trunk-7516.patch, trunk-7516_v2.patch


 Currently, the compaction selection is pluggable. It will be great to make 
 the compaction algorithm pluggable too so that we can implement and play with 
 other compaction algorithms.

--


[jira] [Updated] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7571:


Release Note: 
From now on, HBase settings used in region/store can be applied on column 
family and table level. For the region, table and then xml settings will be 
applied. For column family (store), column family, then table, then xml 
settings will be applied. Custom metadata for column family still trumps all 
settings.
The settings can be applied in the shell via alter table 't', CONFIGURATION => 
{ 'key' => 'value', ... } in a way similar to user metadata, or 
programmatically. 
The key in the above should be the same as an xml config key (e.g. 
hbase.region.some.setting).
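The resolution order the release note describes (column family overrides table, which overrides the xml defaults) can be sketched as a simple layered lookup. This is illustrative only, not HBase's actual CompoundConfiguration implementation:

```java
import java.util.Map;

class LayeredConfig {
    // Most specific layer wins: CF > table > xml.
    public static String get(String key,
                             Map<String, String> xml,
                             Map<String, String> table,
                             Map<String, String> cf) {
        if (cf.containsKey(key)) return cf.get(key);
        if (table.containsKey(key)) return table.get(key);
        return xml.get(key);
    }
}
```

Custom column-family metadata, as the note says, would sit in front of even the CF layer.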

 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.

--


[jira] [Created] (HBASE-7674) add shell documentation for HBASE-7571

2013-01-25 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-7674:
---

 Summary: add shell documentation for HBASE-7571
 Key: HBASE-7674
 URL: https://issues.apache.org/jira/browse/HBASE-7674
 Project: HBase
  Issue Type: Sub-task
  Components: shell
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
Priority: Minor


When the patch was split from HBASE-7236, shell documentation (e.g. how to use 
the new thing, and examples) fell through the cracks. Need to add it...

--


[jira] [Updated] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7571:


Release Note: 
From now on, HBase settings used in region/store can be applied on column 
family and table level. For the region, table and then xml settings will be 
applied (i.e. table settings override xml settings). For column family 
(store), column family, then table, then xml settings will be applied. Custom 
metadata for column family still trumps all settings.
The settings can be applied in the shell via alter table 't', CONFIGURATION => 
{ 'key' => 'value', ... } in a way similar to user metadata, or 
programmatically. 
The key in the above should be the same as an xml config key (e.g. 
hbase.region.some.setting).

  was:
From now on, HBase settings used in region/store can be applied on column 
family and table level. For the region, table and then xml settings will be 
applied (e.g. table settings override xml settings). For column family 
(store), column family, then table, then xml settings will be applied. Custom 
metadata for column family still trumps all settings.
The settings can be applied in the shell via alter table 't', CONFIGURATION => 
{ 'key' => 'value', ... } in a way similar to user metadata, or 
programmatically. 
The key in the above should be the same as an xml config key (e.g. 
hbase.region.some.setting).


 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.

--


[jira] [Updated] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7571:


Release Note: 
From now on, HBase settings used in region/store can be applied on column 
family and table level. For the region, table and then xml settings will be 
applied (e.g. table settings override xml settings). For column family 
(store), column family, then table, then xml settings will be applied. Custom 
metadata for column family still trumps all settings.
The settings can be applied in the shell via alter table 't', CONFIGURATION => 
{ 'key' => 'value', ... } in a way similar to user metadata, or 
programmatically. 
The key in the above should be the same as an xml config key (e.g. 
hbase.region.some.setting).

  was:
From now on, HBase settings used in region/store can be applied on column 
family and table level. For the region, table and then xml settings will be 
applied. For column family (store), column family, then table, then xml 
settings will be applied. Custom metadata for column family still trumps all 
settings.
The settings can be applied in the shell via alter table 't', CONFIGURATION => 
{ 'key' => 'value', ... } in a way similar to user metadata, or 
programmatically. 
The key in the above should be the same as an xml config key (e.g. 
hbase.region.some.setting).


 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.

--


[jira] [Updated] (HBASE-5930) Periodically flush the Memstore?

2013-01-25 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-5930:
---

Attachment: 5930-2.1.patch

In this patch, I added a random sleep (up to two minutes) before flushes. 
Changed the default flush interval to 10 minutes.
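The "flush interval plus random jitter" decision described above can be sketched as a pure predicate (illustrative only; the actual patch wires this into the region server's flush chore):

```java
import java.util.Random;

class PeriodicFlushCheck {
    // A memstore becomes flushable once its oldest edit is older than the
    // configured interval plus a random delay of up to jitterMs, so that
    // all regions don't flush in lockstep and flood HDFS with small files.
    public static boolean shouldFlush(long ageMs, long intervalMs,
                                      long jitterMs, Random rng) {
        long delay = jitterMs > 0 ? (long) (rng.nextDouble() * jitterMs) : 0;
        return ageMs > intervalMs + delay;
    }
}
```

With a 10-minute interval and up to 2 minutes of jitter, a memstore older than 12 minutes is always flushed, and one younger than 10 minutes never is.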

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Devaraj Das
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5930-1.patch, 5930-2.1.patch, 5930-wip.patch


 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--


[jira] [Updated] (HBASE-5930) Periodically flush the Memstore?

2013-01-25 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-5930:
---

Status: Patch Available  (was: Open)

Trying hudson

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Devaraj Das
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5930-1.patch, 5930-2.1.patch, 5930-wip.patch


 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--


[jira] [Commented] (HBASE-7560) TestCompactionState failures

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562931#comment-13562931
 ] 

Ted Yu commented on HBASE-7560:
---

The test still fails occasionally.
Recent one was in trunk build #3797:

  
testMinorCompactionOnFamily(org.apache.hadoop.hbase.regionserver.TestCompactionState):
 test timed out after 6 milliseconds


 TestCompactionState failures
 

 Key: HBASE-7560
 URL: https://issues.apache.org/jira/browse/HBASE-7560
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0, 0.94.4
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 0.96.0

 Attachments: HBASE-7560-v0.patch


 The TestCompactionState has a fixed waitTime for the compaction state, and on 
 a busy jenkins those tests fails.
 {code}
 java.lang.AssertionError: expected:<NONE> but was:<MAJOR>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:147)
   at 
 org.apache.hadoop.hbase.regionserver.TestCompactionState.compaction(TestCompactionState.java:180)
   at 
 org.apache.hadoop.hbase.regionserver.TestCompactionState.testMajorCompaction(TestCompactionState.java:63)
 {code}
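The standard cure for a fixed wait time in a test is a bounded poll: keep checking the condition until it holds or a (generous) deadline passes, so a busy Jenkins gets more time without slowing fast runs. This helper is an illustrative sketch, not the actual TestCompactionState fix:

```java
class WaitFor {
    public interface Condition { boolean holds(); }

    // Polls the condition every pollMs until it holds or timeoutMs elapses.
    // Returns the final value of the condition.
    public static boolean waitFor(Condition c, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!c.holds()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // preserve interrupt status
                return c.holds();
            }
        }
        return true;
    }
}
```

A test would then assert `waitFor(() -> state == CompactionState.NONE, 60000, 100)` instead of sleeping a fixed amount and comparing once.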

--


[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562933#comment-13562933
 ] 

Hadoop QA commented on HBASE-5930:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566533/5930-2.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4182//console

This message is automatically generated.

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Devaraj Das
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5930-1.patch, 5930-2.1.patch, 5930-wip.patch


 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstores' memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.

--


[jira] [Commented] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562950#comment-13562950
 ] 

Ted Yu commented on HBASE-7503:
---

@Jean-Marc:
Can you attach latest patch here ?

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v14-trunk.patch, HBASE-7503-v1-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, 
 HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, 
 HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List<Get> gets) throws IOException method 
 implemented in HTableInterface.
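The shape of the proposed call can be illustrated independently of HBase's client plumbing. In this sketch, `KeyChecker` stands in for the per-row existence check that the real patch issues against region servers; everything here is an assumption for illustration, not the HTableInterface API:

```java
import java.util.List;

class BatchedExists {
    public interface KeyChecker { boolean exists(String key); }

    // Answers many existence checks in one pass and returns one
    // Boolean per input, in the same order as the requests —
    // mirroring the Boolean[] exists(List<Get>) contract.
    public static Boolean[] exists(List<String> keys, KeyChecker checker) {
        Boolean[] out = new Boolean[keys.size()];
        for (int i = 0; i < keys.size(); i++) {
            out[i] = checker.exists(keys.get(i));
        }
        return out;
    }
}
```

The real value of the API is that the client can group the Gets by region and check them in parallel, rather than issuing one round trip per row.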

--


[jira] [Updated] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-7503:
---

Attachment: HBASE-7503-v14-trunk.patch

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v14-trunk.patch, HBASE-7503-v1-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, 
 HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, 
 HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List<Get> gets) throws IOException method 
 implemented in HTableInterface.

--


[jira] [Updated] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-7503:
---

Status: Patch Available  (was: Open)

Updated patch attached, improving the performance based on Sergey's and Ted's 
comments. Passing TestFromClientSide3 successfully.

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v14-trunk.patch, HBASE-7503-v1-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, 
 HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, 
 HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List<Get> gets) throws IOException method 
 implemented in HTableInterface.

--


[jira] [Commented] (HBASE-7672) Merging compaction requests in the queue for same store

2013-01-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13562974#comment-13562974
 ] 

Sergey Shelukhin commented on HBASE-7672:
-

So this basically discards the compaction request and makes the store 
recalculate one?
Is it safe to call finishRequest on an incomplete request from the queue?
Also, if the request is dequeued and discarded anyway, what's the point of 
putting it there in the first place?
Maybe it can be generated once before compaction.

bq. existedRequest = ((PriorityCompactionQueue) smallCompactions.getQueue())
1) existingRequest
2) Casts look hacky. If we know the type why not use the type.
Also, PriorityCompactionQueue is not actually used as a field anywhere in the 
patch.

bq. Compcation
typo




 Merging compaction requests in the queue for same store
 ---

 Key: HBASE-7672
 URL: https://issues.apache.org/jira/browse/HBASE-7672
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7672.patch


 With high write pressure, we can find many compaction requests for the same 
 store in the compaction queue.
 I think we could merge compaction requests for the same store to greatly 
 increase compaction efficiency. This was the case in 0.90, because the files 
 to compact were selected only when the compaction actually executed,
 e.g.
 {code}
 SmallCompation active count:1,Queue:
 regionName=abctest,90F9AUIPK4YO47W55WS4R8RSKGDFNRYBNB79COYKHNQD9F62G7,1359104485823.f05568c159940b8a72bd84c988388ad3.,
  storeName=c1, fileCount=4, fileSize=371.1m (212.0m, 53.0m, 53.0m, 53.0m), 
 priority=15, time=56843340270506608
 regionName=abctest,90F9AUIPK4YO47W55WS4R8RSKGDFNRYBNB79COYKHNQD9F62G7,1359104485823.f05568c159940b8a72bd84c988388ad3.,
  storeName=c1, fileCount=4, fileSize=330.4m (171.3m, 53.0m, 53.0m, 53.0m), 
 priority=11, time=56843401092063608
 {code}
 We could merge these two compaction requests
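The de-duplication being proposed can be sketched as a queue that keeps at most one pending request per store, letting a newer request for the same store replace the queued one when it has a better (numerically lower) priority. This is an illustrative model, not the attached patch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class CompactionQueueSketch {
    static final class Request {
        final String store;
        final int priority;   // lower value = more urgent, as in the log above
        Request(String store, int priority) {
            this.store = store;
            this.priority = priority;
        }
    }

    private final Map<String, Request> pending = new LinkedHashMap<>();

    // Merge: a second request for the same store does not grow the queue;
    // it only keeps whichever request is more urgent.
    public void offer(Request r) {
        Request old = pending.get(r.store);
        if (old == null || r.priority < old.priority) {
            pending.put(r.store, r);
        }
    }

    public int size() { return pending.size(); }

    public int priorityOf(String store) { return pending.get(store).priority; }
}
```

Applied to the log above, the priority-15 and priority-11 requests for store c1 would collapse into a single priority-11 entry, and file selection would happen once when the compaction runs.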

--


[jira] [Updated] (HBASE-7572) move metadata settings that duplicate xml config settings to CF/table config in a backward-compatible manner

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7572:


Attachment: HBASE-7572-v0.patch

 move metadata settings that duplicate xml config settings to CF/table config 
 in a backward-compatible manner
 

 Key: HBASE-7572
 URL: https://issues.apache.org/jira/browse/HBASE-7572
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7572-v0.patch


 2nd part of splitting HBASE-7236

--


[jira] [Updated] (HBASE-7572) move metadata settings that duplicate xml config settings to CF/table config in a backward-compatible manner

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7572:


Status: Patch Available  (was: Open)

 move metadata settings that duplicate xml config settings to CF/table config 
 in a backward-compatible manner
 

 Key: HBASE-7572
 URL: https://issues.apache.org/jira/browse/HBASE-7572
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7572-v0.patch


 2nd part of splitting HBASE-7236



[jira] [Updated] (HBASE-7329) remove flush-related records from WAL and make locking more granular

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7329:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

already committed

 remove flush-related records from WAL and make locking more granular
 

 Key: HBASE-7329
 URL: https://issues.apache.org/jira/browse/HBASE-7329
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7329-findbugs.diff, 7329-v7.txt, HBASE-7329-v0.patch, 
 HBASE-7329-v0.patch, HBASE-7329-v0-tmp.patch, HBASE-7329-v1.patch, 
 HBASE-7329-v1.patch, HBASE-7329-v2.patch, HBASE-7329-v3.patch, 
 HBASE-7329-v4.patch, HBASE-7329-v5.patch, HBASE-7329-v6.patch, 
 HBASE-7329-v6.patch


 Comments from many people in HBASE-6466 and HBASE-6980 indicate that flush 
 records in WAL are not useful. If so, they should be removed.



[jira] [Commented] (HBASE-7011) log rolling and cache flushing should be able to proceed in parallel

2013-01-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13562993#comment-13562993
 ] 

Sergey Shelukhin commented on HBASE-7011:
-

HBASE-7329 was committed. This is now probably a dup

 log rolling and cache flushing should be able to proceed in parallel
 

 Key: HBASE-7011
 URL: https://issues.apache.org/jira/browse/HBASE-7011
 Project: HBase
  Issue Type: Improvement
Reporter: Kannan Muthukkaruppan
Assignee: Kannan Muthukkaruppan

 Today, during a memstore flush (snapshot of memstore + flushing to disk), log 
 rolling cannot happen. This seems like a bad design, and an unnecessary 
 restriction. 
 Possible reasons cited for this in code are:
 (i) maintenance of the lastSeqWritten map.
 (ii) writing a completed-cache-flush marker into the same log before the 
 roll.
 It seems that we can implement a new design for (i) to avoid holding the lock 
 for the entire duration of the flush. And the motivation for (ii) is not even 
 clear. We should reason this out, and make sure we can relax the restriction. 
 [See related discussion in HBASE-6980.]



[jira] [Commented] (HBASE-7626) Backport portions of HBASE-7460 to 0.94

2013-01-25 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563000#comment-13563000
 ] 

Gary Helmling commented on HBASE-7626:
--

Posted an initial patch for review: https://reviews.apache.org/r/9112/

 Backport portions of HBASE-7460 to 0.94
 ---

 Key: HBASE-7626
 URL: https://issues.apache.org/jira/browse/HBASE-7626
 Project: HBase
  Issue Type: Sub-task
  Components: Client, IPC/RPC
Reporter: Lars Hofhansl
Priority: Critical
 Fix For: 0.94.5


 Marking critical so it gets in.



[jira] [Commented] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563007#comment-13563007
 ] 

Hadoop QA commented on HBASE-7503:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12566540/HBASE-7503-v14-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestAdmin

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4183//console

This message is automatically generated.

 Add exists(List) in HTableInterface to allow multiple parallel exists at one 
 time
 -

 Key: HBASE-7503
 URL: https://issues.apache.org/jira/browse/HBASE-7503
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
 Fix For: 0.96.0

 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, 
 HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, 
 HBASE-7503-v13-trunk.patch, HBASE-7503-v13-trunk.patch, 
 HBASE-7503-v14-trunk.patch, HBASE-7503-v1-trunk.patch, 
 HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, 
 HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, 
 HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, 
 HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch

   Original Estimate: 5m
  Remaining Estimate: 5m

 We need to have a Boolean[] exists(List<Get> gets) throws IOException method 
 implemented in HTableInterface.
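
A toy sketch of the proposed semantics, with an in-memory key set standing in for the table (everything here is illustrative; the real method would issue the existence checks in parallel against region servers): one Boolean per requested key, in request order.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of batch-exists semantics: for each requested key,
// report whether it is present, preserving the order of the request list.
public class BatchExistsSketch {
    public static Boolean[] exists(List<String> keys, Set<String> stored) {
        Boolean[] results = new Boolean[keys.size()];
        for (int i = 0; i < keys.size(); i++) {
            results[i] = stored.contains(keys.get(i));
        }
        return results;
    }
}
```

The batching matters because a single round trip can answer many existence checks that would otherwise each pay full RPC latency.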



[jira] [Commented] (HBASE-7660) Remove HFileV1 code

2013-01-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563026#comment-13563026
 ] 

Andrew Purtell commented on HBASE-7660:
---

Would removing V1 involve more than removing the reader and writer? Would it 
touch the abstract classes?

 Remove HFileV1 code
 ---

 Key: HBASE-7660
 URL: https://issues.apache.org/jira/browse/HBASE-7660
 Project: HBase
  Issue Type: Improvement
  Components: hbck, HFile, migration
Reporter: Matt Corgan
 Fix For: 0.96.0


 HFileV1 should be removed from the regionserver because it is somewhat of a 
 drag on development for working on the lower level read paths.  It's an 
 impediment to cleaning up the Store code.
 V1 HFiles ceased to be written in 0.92, but the V1 reader was left in place 
 so users could upgrade from 0.90 to 0.92.  Once all HFiles are compacted in 
 0.92, then the V1 code is no longer needed.  We then decided to leave the V1 
 code in place in 0.94 so users could upgrade directly from 0.90 to 0.94.  The 
 code is still there in trunk but should probably be shown the door.  I see a 
 few options:
 1) just delete the code and tell people to make sure they compact everything 
 using 0.92 or 0.94
 2) create a standalone script that people can run on their 0.92 or 0.94 
 cluster that iterates the filesystem and prints out any v1 files with a 
 message that the user should run a major compaction
 3) add functionality to 0.96.0 (first release, maybe in hbck) that 
 proactively kills v1 files, so that we can be sure there are none when 
 upgrading from 0.96 to 0.98
 4) punt to 0.98 and probably do one of the above options in a year
 I would vote for #1 or #2 which will allow us to have a v1-free 0.96.0.  
 HFileV1 has already survived 2 major release upgrades, which I think many 
 would agree is more than enough for a pre-1.0, free product.  If we can 
 remove it in 0.96.0 it will be out of the way to introduce some nice 
 performance improvements in subsequent 0.96.x releases.



[jira] [Commented] (HBASE-7672) Merging compaction requests in the queue for same store

2013-01-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563031#comment-13563031
 ] 

Andrew Purtell commented on HBASE-7672:
---

I tried something similar once that instead changed the compaction request 
comparator to only consider region and priority. It worked, but the large 
compaction queues were a symptom of a problem, not the problem itself, so I 
decided to tackle the underlying problem instead. 

 Merging compaction requests in the queue for same store
 ---

 Key: HBASE-7672
 URL: https://issues.apache.org/jira/browse/HBASE-7672
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Affects Versions: 0.94.4
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.96.0

 Attachments: HBASE-7672.patch


 With high write pressure, we can find many compaction requests for the same 
 store in the compaction queue.
 I think we could merge compaction requests for the same store to greatly 
 increase compaction efficiency. This was effectively the case in 0.90, 
 because compaction file selection was done only when the compaction executed.
 e.g.
 e.g.
 {code}
 SmallCompaction active count:1, Queue:
 regionName=abctest,90F9AUIPK4YO47W55WS4R8RSKGDFNRYBNB79COYKHNQD9F62G7,1359104485823.f05568c159940b8a72bd84c988388ad3.,
  storeName=c1, fileCount=4, fileSize=371.1m (212.0m, 53.0m, 53.0m, 53.0m), 
 priority=15, time=56843340270506608
 regionName=abctest,90F9AUIPK4YO47W55WS4R8RSKGDFNRYBNB79COYKHNQD9F62G7,1359104485823.f05568c159940b8a72bd84c988388ad3.,
  storeName=c1, fileCount=4, fileSize=330.4m (171.3m, 53.0m, 53.0m, 53.0m), 
 priority=11, time=56843401092063608
 {code}
 We could merge these two compaction requests
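
 The merge described here could be sketched as below. This is a hypothetical illustration, not the attached patch: the class and method names are invented, and it assumes (as the priority values in the quoted queue suggest) that a smaller priority value is more urgent.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: pending compaction requests keyed by (region, store),
// so a second request for the same store merges into the existing entry
// instead of queueing a duplicate compaction of the same files.
public class CompactionQueueSketch {
    static class CompactionRequest {
        final String regionName;
        final String storeName;
        int priority; // assumption: smaller value = more urgent

        CompactionRequest(String regionName, String storeName, int priority) {
            this.regionName = regionName;
            this.storeName = storeName;
            this.priority = priority;
        }

        String key() {
            return regionName + "/" + storeName;
        }
    }

    private final Map<String, CompactionRequest> pending = new HashMap<>();

    // Submit a request; if one is already pending for the same store,
    // keep a single entry with the more urgent of the two priorities.
    public void submit(String region, String store, int priority) {
        CompactionRequest req = new CompactionRequest(region, store, priority);
        CompactionRequest existing = pending.get(req.key());
        if (existing == null) {
            pending.put(req.key(), req);
        } else {
            existing.priority = Math.min(existing.priority, priority);
        }
    }

    public int size() {
        return pending.size();
    }
}
```

 Submitting the two queued requests from the log above would leave a single entry for store c1 rather than two back-to-back compactions over largely the same files.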



[jira] [Commented] (HBASE-7221) RowKey utility class for rowkey construction

2013-01-25 Thread Doug Meil (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563034#comment-13563034
 ] 

Doug Meil commented on HBASE-7221:
--

Thanks Nick!  Agree on the -1 on the nits, I'll fix that.  I appreciate the 
revocation of the -1 on the approach.

 RowKey utility class for rowkey construction
 

 Key: HBASE-7221
 URL: https://issues.apache.org/jira/browse/HBASE-7221
 Project: HBase
  Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
 Attachments: HBASE_7221.patch, hbase-common_hbase_7221_2.patch, 
 hbase-common_hbase_7221_v3.patch


 A common question in the dist-lists is how to construct rowkeys, particularly 
 composite keys.  Put/Get/Scan specifies byte[] as the rowkey, but it's up to 
 you to sensibly populate that byte-array, and that's where things tend to go 
 off the rails.
 The intent of this RowKey utility class isn't meant to add functionality into 
 Put/Get/Scan, but rather make it simpler for folks to construct said arrays.  
 Example:
 {code}
RowKey key = RowKey.create(RowKey.SIZEOF_MD5_HASH + RowKey.SIZEOF_LONG);
key.addHash(a);
key.add(b);
byte bytes[] = key.getBytes();
 {code} 
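
 The snippet above could be backed by something like the following self-contained sketch. The class name, fixed MD5 prefix size, and builder methods are assumptions for illustration, not the attached patch's API.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of a composite-rowkey builder: a fixed-size buffer
// filled with an MD5 hash prefix followed by fixed-width fields.
public class RowKeySketch {
    public static final int SIZEOF_MD5_HASH = 16;
    public static final int SIZEOF_LONG = 8;

    private final ByteBuffer buf;

    public RowKeySketch(int size) {
        this.buf = ByteBuffer.allocate(size);
    }

    // Append the 16-byte MD5 hash of the value (e.g. to spread hot prefixes).
    public RowKeySketch addHash(byte[] value) {
        try {
            buf.put(MessageDigest.getInstance("MD5").digest(value));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 ships with every JDK
        }
        return this;
    }

    // Append a big-endian long, preserving numeric sort order for positives.
    public RowKeySketch add(long value) {
        buf.putLong(value);
        return this;
    }

    public byte[] getBytes() {
        return buf.array();
    }
}
```

 The point of sizing the buffer up front is that every key for the table has the same layout, so range scans over the composite key behave predictably.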



[jira] [Commented] (HBASE-7572) move metadata settings that duplicate xml config settings to CF/table config in a backward-compatible manner

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563042#comment-13563042
 ] 

Hadoop QA commented on HBASE-7572:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566544/HBASE-7572-v0.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4184//console


 move metadata settings that duplicate xml config settings to CF/table config 
 in a backward-compatible manner
 

 Key: HBASE-7572
 URL: https://issues.apache.org/jira/browse/HBASE-7572
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7572-v0.patch


 2nd part of splitting HBASE-7236



[jira] [Commented] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563069#comment-13563069
 ] 

Sergey Shelukhin commented on HBASE-7649:
-

Well, the region can often be opened very quickly, so I think it might make 
sense to try to go to destination immediately.
I am aware of some schemes in certain products where the server tries to 
calculate backoff time based on request queue under heavy load, and tells the 
client to back off for that time, but that seems like it's too much of a high 
hanging fruit right now :)

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.
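
 A minimal sketch of the idea (invented names, not the actual patch): double the pause only while retrying the same server, and retry immediately once the region is learned to be elsewhere.

```java
// Hypothetical sketch: exponential client retry backoff that resets when the
// region is learned to be on a different server.
public class RetryBackoffSketch {
    private final long basePauseMs;
    private int attemptsOnCurrentServer = 0;
    private String currentServer = null;

    public RetryBackoffSketch(long basePauseMs) {
        this.basePauseMs = basePauseMs;
    }

    // Returns how long to pause before the next attempt against this server.
    public long nextPauseMs(String server) {
        if (!server.equals(currentServer)) {
            // New location: go there immediately instead of continuing the
            // x2 fallback accumulated against the old server.
            currentServer = server;
            attemptsOnCurrentServer = 0;
            return 0;
        }
        long pause = basePauseMs << attemptsOnCurrentServer; // x2 per failure
        attemptsOnCurrentServer++;
        return pause;
    }
}
```

 With the naive scheme, the failures against server A inflate the pause before the very first attempt against B; resetting on a location change avoids that 30-second wait while keeping the exponential backoff against a single unresponsive server.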



[jira] [Updated] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7649:


Attachment: HBASE-7649-v2.patch

Added commented-out logging config. Will upload to /r/ shortly. I think this 
should be a good patch (incidentally :))

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Updated] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7649:


Component/s: Client

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Updated] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7649:


Issue Type: Improvement  (was: Bug)

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Commented] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563076#comment-13563076
 ] 

Sergey Shelukhin commented on HBASE-7649:
-

https://reviews.apache.org/r/9113/

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Commented] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563078#comment-13563078
 ] 

Andrew Purtell commented on HBASE-7571:
---

This should be marked resolved as it has been committed, correct?

 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.



[jira] [Commented] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13563085#comment-13563085
 ] 

Hadoop QA commented on HBASE-7649:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566563/HBASE-7649-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4185//console


 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Updated] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7571:
--

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.



[jira] [Updated] (HBASE-6825) [WINDOWS] Java NIO socket channels does not work with Windows ipv6

2013-01-25 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6825:
-

Attachment: hbase-6825_v4-trunk.patch

Updated the patch with Nicolas' comments. 

 [WINDOWS] Java NIO socket channels does not work with Windows ipv6
 --

 Key: HBASE-6825
 URL: https://issues.apache.org/jira/browse/HBASE-6825
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.94.3, 0.96.0
 Environment: JDK6 on windows for ipv6. 
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-6825_v3-0.94.patch, hbase-6825_v3-trunk.patch, 
 hbase-6825_v4-trunk.patch


 While running the test TestAdmin.testCheckHBaseAvailableClosesConnection(), I 
 noticed that it takes very long, since it sleeps for 2sec * 500, because of 
 zookeeper retries. 
 The root cause is that ZK uses Java NIO to create ServerSockets from 
 ServerSocketChannels. Under Windows, IPv4 and IPv6 are implemented 
 independently, and Java apparently cannot reuse the same socket channel for 
 both IPv4 and IPv6 sockets. We get java.net.SocketException: Address family 
 not supported by protocol family exceptions. When the ZK client resolves 
 localhost, it gets both the v4 address 127.0.0.1 and the v6 address ::1, 
 but the socket channel cannot bind to both.
 The problem is reported as:
 http://bugs.sun.com/view_bug.do?bug_id=6230761
 http://stackoverflow.com/questions/1357091/binding-an-ipv6-server-socket-on-windows
 Although the JDK bug is reported as resolved, I tested with jdk1.6.0_33 
 without success; JDK7, however, seems to have fixed the problem. In ZK, we 
 could replace the ClientCnxnSocket implementation ClientCnxnSocketNIO with 
 a non-NIO one, but I am not sure that would be the way to go.
 Disabling ipv6 resolution of localhost is another approach. I'll test it 
 to see whether it does any good. 
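
 One possible workaround sketch (illustrative, not the attached patch): bind the NIO server channel to an explicit IPv4 loopback address rather than whatever localhost resolves to, so only one address family is ever involved. Port 0 requests an ephemeral port.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

// Hypothetical sketch: bind an NIO server channel to the IPv4 loopback
// explicitly, sidestepping the dual-family resolution of "localhost".
public class Ipv4BindSketch {
    // Returns the ephemeral port bound, or -1 if binding failed.
    public static int bindEphemeralIpv4() {
        try {
            ServerSocketChannel channel = ServerSocketChannel.open();
            // Bind via the channel's socket (JDK6-compatible API);
            // 127.0.0.1 forces the IPv4 family, port 0 picks a free port.
            channel.socket().bind(
                new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 0));
            int port = channel.socket().getLocalPort();
            channel.close();
            return port;
        } catch (IOException e) {
            return -1;
        }
    }
}
```

 This avoids the ambiguous localhost resolution entirely, at the cost of giving up IPv6 loopback connectivity for the bound socket.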



[jira] [Updated] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7649:


Attachment: HBASE-7649-v2.patch

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch, HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, then finally 
 learn the region is on B it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Updated] (HBASE-7654) Add List<String> getCoprocessors() to HTableInterface.

2013-01-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7654:
--

  Component/s: Coprocessors
   Client
Affects Version/s: 0.94.5
   0.96.0

+1

Will commit soon to trunk and 0.94 if no objection.

 Add List<String> getCoprocessors() to HTableInterface.
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Attachments: HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Updated] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7649:


Attachment: HBASE-7649-v2.patch

Sorry for sloppy rebase... test issue (off by one that I fixed elsewhere)

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch, HBASE-7649-v2.patch, HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, and then finally 
 learn the region is on B, it doesn't make sense to wait for 30 seconds before 
 going to B.
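The improvement described above can be sketched as follows (a hypothetical class and method names for illustration, not the actual HBase client code in the patch):

```java
// Hypothetical sketch of the improvement: double the retry pause only while
// retrying the same server; reset it once the region is learned to be on a
// different server, since there is no reason to keep waiting for it.
public class RetryBackoff {
  private final long initialSleepMillis;
  private long sleepMillis;
  private String lastServer;

  public RetryBackoff(long initialSleepMillis) {
    this.initialSleepMillis = initialSleepMillis;
    this.sleepMillis = initialSleepMillis;
  }

  // Pause to apply before the next attempt against the given server.
  public long nextPause(String server) {
    if (!server.equals(lastServer)) {
      sleepMillis = initialSleepMillis; // new location: start over
      lastServer = server;
    } else {
      sleepMillis *= 2;                 // same server: exponential fallback
    }
    return sleepMillis;
  }
}
```

With this shape, three failures against A followed by a redirect to B pauses 100, 200, 400, then 100 ms again, instead of continuing at 800 ms.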



[jira] [Commented] (HBASE-7654) Add List<String> getCoprocessors() to HTableInterface.

2013-01-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563125#comment-13563125
 ] 

Andrew Purtell commented on HBASE-7654:
---

[~lhofhansl] Ping. Trivial and useful API addition.

 Add List<String> getCoprocessors() to HTableInterface.
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Attachments: HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Commented] (HBASE-6825) [WINDOWS] Java NIO socket channels does not work with Windows ipv6

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563136#comment-13563136
 ] 

Hadoop QA commented on HBASE-6825:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12566566/hbase-6825_v4-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4186//console

This message is automatically generated.

 [WINDOWS] Java NIO socket channels does not work with Windows ipv6
 --

 Key: HBASE-6825
 URL: https://issues.apache.org/jira/browse/HBASE-6825
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.94.3, 0.96.0
 Environment: JDK6 on windows for ipv6. 
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: hbase-6825_v3-0.94.patch, hbase-6825_v3-trunk.patch, 
 hbase-6825_v4-trunk.patch


 While running the test TestAdmin.testCheckHBaseAvailableClosesConnection(), I 
 noticed that it takes very long, since it sleeps for 2sec * 500 because of 
 zookeeper retries. 
 The root cause of the problem is that ZK uses Java NIO to create 
 ServerSockets from ServerSocketChannels. Under Windows, ipv4 and ipv6 are 
 implemented independently, and Java apparently cannot reuse the same 
 socket channel for both ipv4 and ipv6 sockets. We are getting 
 java.net.SocketException: Address family not supported by protocol 
 family exceptions. When the ZK client resolves localhost, it gets both the v4 
 127.0.0.1 and the v6 ::1 address, but the socket channel cannot bind to both v4 
 and v6. 
 The problem is reported as:
 http://bugs.sun.com/view_bug.do?bug_id=6230761
 http://stackoverflow.com/questions/1357091/binding-an-ipv6-server-socket-on-windows
 Although the JDK bug is reported as resolved, I have tested with jdk1.6.0_33 
 without any success; JDK7, however, seems to have fixed this problem. In ZK, 
 we could replace the ClientCnxnSocket implementation ClientCnxnSocketNIO 
 with a non-NIO one, but I am not sure that would be the way to go.
 Disabling ipv6 resolution of localhost is another approach. I'll test it 
 to see whether it will be any good. 
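For reference, the "disable ipv6 resolution" workaround could look like the sketch below. The java.net.preferIPv4Stack property tells the JVM to use only the IPv4 stack; since it must be set before networking initializes, in practice it is passed as -Djava.net.preferIPv4Stack=true on the command line rather than set in code. The class below is only an illustration of how the resolution behaves, not part of the patch:

```java
import java.net.InetAddress;

// Illustration: resolve localhost and print every address returned.
// Without java.net.preferIPv4Stack=true, Windows with ipv6 enabled may
// return both 127.0.0.1 and ::1 here, which is what trips up the NIO
// server socket described above.
public class PreferIPv4 {
  public static void main(String[] args) throws Exception {
    for (InetAddress a : InetAddress.getAllByName("localhost")) {
      System.out.println(a.getHostAddress());
    }
  }
}
```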



[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2013-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563163#comment-13563163
 ] 

Ted Yu commented on HBASE-5930:
---

{code}
+  private Random rand = new Random();
{code}
Please use SecureRandom.
{code}
+  } catch (InterruptedException ie){
+//ignore
{code}
Please restore interrupt status.

Should the upper bound for the sleep take the length of MemStoreFlusher.flushQueue into 
consideration?
When many FlushQueueEntry's pile up in the flushQueue, we may want to wait longer.

Also, the sleep should be bounded by the remaining time w.r.t. 
cacheFlushInterval - we don't want the loop in chore() to outlast 
cacheFlushInterval.
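The interrupt-status point can be sketched as follows (a hypothetical helper, not the patch's code):

```java
// Hypothetical sketch of restoring interrupt status. Swallowing an
// InterruptedException hides the interrupt from callers; re-setting the
// flag lets code further up the stack still observe it.
public class InterruptRestore {
  // Sleep helper that preserves the caller's interrupt status.
  static void sleepQuietly(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException ie) {
      // Do not just ignore the exception: restore the interrupt flag.
      Thread.currentThread().interrupt();
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt();  // simulate a pending interrupt
    sleepQuietly(1000);                  // sleep throws immediately; flag is restored
    System.out.println(Thread.currentThread().isInterrupted()); // prints true
  }
}
```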

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Devaraj Das
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5930-1.patch, 5930-2.1.patch, 5930-wip.patch


 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstore memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.



[jira] [Updated] (HBASE-6815) [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode

2013-01-25 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6815:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Slavik. 

 [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a 
 single user mode
 

 Key: HBASE-6815
 URL: https://issues.apache.org/jira/browse/HBASE-6815
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Slavik Krassovsky
 Fix For: 0.96.0

 Attachments: hbase-6815_v1.patch, hbase-6815_v2.patch, 
 hbase-6815_v3.patch, hbase-6815_v4.patch, hbase-6815_v4.patch


 Provide .cmd scripts in order to start HBASE on Windows in a single user mode



[jira] [Commented] (HBASE-7649) client retry timeout doesn't need to do x2 fallback when going to different server

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563180#comment-13563180
 ] 

Hadoop QA commented on HBASE-7649:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566571/HBASE-7649-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces lines longer than 
100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4188//console

This message is automatically generated.

 client retry timeout doesn't need to do x2 fallback when going to different 
 server
 --

 Key: HBASE-7649
 URL: https://issues.apache.org/jira/browse/HBASE-7649
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7649-v0.patch, HBASE-7649-v1.patch, 
 HBASE-7649-v2.patch, HBASE-7649-v2.patch, HBASE-7649-v2.patch


 See HBASE-7520. When we go to server A, get a bunch of failures, and then finally 
 learn the region is on B, it doesn't make sense to wait for 30 seconds before 
 going to B.



[jira] [Commented] (HBASE-7654) Add List<String> getCoprocessors() to HTableInterface.

2013-01-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563190#comment-13563190
 ] 

Lars Hofhansl commented on HBASE-7654:
--

+1
(and +1 for 0.94)

 Add List<String> getCoprocessors() to HTableInterface.
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Attachments: HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Commented] (HBASE-5930) Periodically flush the Memstore?

2013-01-25 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563193#comment-13563193
 ] 

Lars Hofhansl commented on HBASE-5930:
--

Absolutely do not use SecureRandom here. We're not using this to generate 
cryptographic keys, but just some jitter for memstore flush timing, right?
SecureRandom will exhaust your locally generated entropy, which is much better 
used in cases where it is actually needed (and it can hang - on Linux at least - 
if not enough entropy has been collected).

 Periodically flush the Memstore?
 

 Key: HBASE-5930
 URL: https://issues.apache.org/jira/browse/HBASE-5930
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl
Assignee: Devaraj Das
Priority: Minor
 Fix For: 0.96.0

 Attachments: 5930-1.patch, 5930-2.1.patch, 5930-wip.patch


 A colleague of mine ran into an interesting issue.
 He inserted some data with the WAL disabled, which happened to fit in the 
 aggregate Memstore memory.
 Two weeks later he had a problem with the HDFS cluster, which caused the 
 region servers to abort. He found that his data was lost. Looking at the logs 
 we found that the Memstores were not flushed at all during these two weeks.
 Should we have an option to flush memstores periodically? There are obvious 
 downsides to this, like many small storefiles, etc.



[jira] [Commented] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers

2013-01-25 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563195#comment-13563195
 ] 

Elliott Clark commented on HBASE-7633:
--

That's exactly what call queue length would have shown. It would normally show 
0, and then as things got slower the queue length would grow as it approached 
the max of ~500.

The way I see it, having all IPC threads full isn't a bad thing. If the threads 
are answering requests at the same rate as they are coming in, then having all 
the threads answering something is just fine. The bad part was that they were 
all full and the number of requests waiting to be answered was growing; hence 
callQueueLength is what I would look at. 
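A toy illustration of that distinction (names and numbers are hypothetical, not actual HBase metrics code):

```java
// All RPC handlers busy is fine by itself; the server is only falling
// behind when the handlers are saturated AND queued calls keep piling up.
public class HandlerSaturation {
  public static boolean fallingBehind(int usedHandlers, int totalHandlers,
      int callQueueLength) {
    return usedHandlers == totalHandlers && callQueueLength > 0;
  }

  public static void main(String[] args) {
    System.out.println(fallingBehind(500, 500, 0));   // busy but keeping up: false
    System.out.println(fallingBehind(500, 500, 120)); // saturated and queueing: true
  }
}
```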

 Add a metric that tracks the current number of used RPC threads on the 
 regionservers
 

 Key: HBASE-7633
 URL: https://issues.apache.org/jira/browse/HBASE-7633
 Project: HBase
  Issue Type: Improvement
  Components: metrics
Reporter: Joey Echeverria
Assignee: Elliott Clark

 One way to detect that you're hitting a John Wayne disk[1] would be if we 
 could see when region servers exhausted their RPC handlers. This would also 
 be useful when tuning the cluster for your workload to make sure that reads 
 or writes were not starving the other operations out.
 [1] http://hbase.apache.org/book.html#bad.disk



[jira] [Updated] (HBASE-7654) Add List<String> getCoprocessors() to HTableInterface.

2013-01-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7654:
--

Attachment: HBASE-7654-v0-0.94.patch

 Add List<String> getCoprocessors() to HTableInterface.
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Attachments: HBASE-7654-v0-0.94.patch, HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Updated] (HBASE-7654) Add List<String> getCoprocessors() to HTableDescriptor

2013-01-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7654:
--

Summary: Add List<String> getCoprocessors() to HTableDescriptor  (was: Add 
List<String> getCoprocessors() to HTableInterface.)

 Add List<String> getCoprocessors() to HTableDescriptor
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Attachments: HBASE-7654-v0-0.94.patch, HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Updated] (HBASE-7654) Add List<String> getCoprocessors() to HTableDescriptor

2013-01-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7654:
--

   Resolution: Fixed
Fix Version/s: 0.94.5
   0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and 0.94 branch. New unit test in TestHTableDescriptor (and 
the rest) passes locally. Thanks for the patch J-M!

 Add List<String> getCoprocessors() to HTableDescriptor
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-7654-v0-0.94.patch, HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Commented] (HBASE-7654) Add List<String> getCoprocessors() to HTableDescriptor

2013-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563206#comment-13563206
 ] 

Hadoop QA commented on HBASE-7654:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12566581/HBASE-7654-v0-0.94.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/4189//console

This message is automatically generated.

 Add List<String> getCoprocessors() to HTableDescriptor
 --

 Key: HBASE-7654
 URL: https://issues.apache.org/jira/browse/HBASE-7654
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors
Affects Versions: 0.96.0, 0.94.5
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Critical
 Fix For: 0.96.0, 0.94.5

 Attachments: HBASE-7654-v0-0.94.patch, HBASE-7654-v0-trunk.patch


 Add List<String> getCoprocessors() to HTableInterface to retrieve the list of 
 coprocessors loaded into this table.



[jira] [Commented] (HBASE-6815) [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563213#comment-13563213
 ] 

Hudson commented on HBASE-6815:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #375 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/375/])
HBASE-6815. [WINDOWS] Provide hbase scripts in order to start HBASE on 
Windows in a single user mode. (Slavik Krassovsky) (Revision 1438764)

 Result = FAILURE
enis : 
Files : 
* /hbase/trunk/bin/hbase-config.cmd
* /hbase/trunk/bin/hbase.cmd
* /hbase/trunk/bin/start-hbase.cmd
* /hbase/trunk/bin/stop-hbase.cmd
* /hbase/trunk/conf/hbase-env.cmd


 [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a 
 single user mode
 

 Key: HBASE-6815
 URL: https://issues.apache.org/jira/browse/HBASE-6815
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Slavik Krassovsky
 Fix For: 0.96.0

 Attachments: hbase-6815_v1.patch, hbase-6815_v2.patch, 
 hbase-6815_v3.patch, hbase-6815_v4.patch, hbase-6815_v4.patch


 Provide .cmd scripts in order to start HBASE on Windows in a single user mode



[jira] [Commented] (HBASE-7571) add the notion of per-table or per-column family configuration

2013-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13563214#comment-13563214
 ] 

Hudson commented on HBASE-7571:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #375 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/375/])
HBASE-7571 add the notion of per-table or per-column family configuration 
(Sergey) (Revision 1438527)

 Result = FAILURE
tedyu : 
Files : 
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/hbase.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* /hbase/trunk/hbase-server/src/main/ruby/hbase.rb
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java


 add the notion of per-table or per-column family configuration
 --

 Key: HBASE-7571
 URL: https://issues.apache.org/jira/browse/HBASE-7571
 Project: HBase
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: 7571-v3.patch, HBASE-7571-v0-based-on-HBASE-7563.patch, 
 HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, 
 HBASE-7571-v2.patch, HBASE-7571-v3.patch


 Main part of split HBASE-7236.


