[jira] [Commented] (HBASE-11274) More general single-row Condition Mutation

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041769#comment-14041769
 ] 

Hadoop QA commented on HBASE-11274:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12652126/HBASE-11274-trunk-v2.diff
  against trunk revision .
  ATTACHMENT ID: 12652126

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  return rpcCallerFactory. 
newCaller().callWithRetries(callable, this.operationTimeout);
+  public SingleColumnValueCondition(final byte[] row, final byte[] family, 
final byte[] qualifier, final byte[] value) {
+SingleColumnValueCondition condition = new SingleColumnValueCondition(row, 
family, qualifier, value);
+SingleColumnValueCondition condition = new SingleColumnValueCondition(row, 
family, qualifier, CompareOp.GREATER, value);
+SingleColumnValueCondition condition = new SingleColumnValueCondition(row, 
family, qualifier, CompareOp.LESS, value);
+  "dition\022\013\n\003row\030\001 \001(\014\022\016\n\006family\030\002 
\001(\014\022\021\n\tq" +
+  new java.lang.String[] { "Row", "Family", "Qualifier", 
"CompareType", "Comparator", "Name", "SerializedCondition", });
+  result = ((RegionObserver) 
env.getInstance()).preCheckAndMutateAfterRowLock(ctx, condition,
+AuthResult authResult = permissionGranted(OpType.CHECK_AND_MUTATE, user, 
env, condition.getFamilyMap(),
+conditions.addCondition(new SingleColumnValueCondition(ROW, FAMILY, 
QUALIFIER, Bytes.toBytes("value1")));

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9834//console

This message is automatically generated.

> More general single-row Condition Mutation
> --
>
> Key: HBASE-11274
> URL: https://issues.apache.org/jira/browse/HBASE-11274
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HBASE-11274-trunk-v1.diff, HBASE-11274-trunk-v2.diff
>
>
> Currently, the checkAndDelete and checkAndPut interfaces only support atomic 
> mutation with a single condition. But in real applications, we need more general 
> conditional mutation that supports multiple conditions and logical expressions 
> over those conditions.
> For example, to support the following sql
> {quote}
>   insert row  where (column A == 'X' and column B == 'Y') or (column C == 'z')
> {quote}
> Suggestions are welcomed.

[jira] [Commented] (HBASE-11405) Multiple invocations of hbck in parallel disables balancer permanently

2014-06-23 Thread bharath v (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041750#comment-14041750
 ] 

bharath v commented on HBASE-11405:
---

Related question: Should we disable parallel hbcks permanently? A quick look at 
the code suggests this might cause inconsistencies as well. 

> Multiple invocations of hbck in parallel disables balancer permanently 
> ---
>
> Key: HBASE-11405
> URL: https://issues.apache.org/jira/browse/HBASE-11405
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer, hbck
>Affects Versions: 0.99.0
>Reporter: bharath v
>
> This is because of the following piece of code in hbck
> {code:borderStyle=solid}
>   boolean oldBalancer = admin.setBalancerRunning(false, true);
> try {
>   onlineConsistencyRepair();
> }
> finally {
>   admin.setBalancerRunning(oldBalancer, false);
> }
> {code}
> Newer invocations set oldBalancer to false, as it was disabled by previous 
> invocations, and this disables the balancer permanently unless it's manually 
> turned on by the user. Easy to reproduce: just run hbck 100 times in a loop in 2 
> different sessions and you can see that the balancer is set to false in the 
> HMaster logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11405) Multiple invocations of hbck in parallel disables balancer permanently

2014-06-23 Thread bharath v (JIRA)
bharath v created HBASE-11405:
-

 Summary: Multiple invocations of hbck in parallel disables 
balancer permanently 
 Key: HBASE-11405
 URL: https://issues.apache.org/jira/browse/HBASE-11405
 Project: HBase
  Issue Type: Bug
  Components: Balancer, hbck
Affects Versions: 0.99.0
Reporter: bharath v


This is because of the following piece of code in hbck

{code:borderStyle=solid}
  boolean oldBalancer = admin.setBalancerRunning(false, true);
try {
  onlineConsistencyRepair();
}
finally {
  admin.setBalancerRunning(oldBalancer, false);
}
{code}

Newer invocations set oldBalancer to false, as it was disabled by previous 
invocations, and this disables the balancer permanently unless it's manually 
turned on by the user. Easy to reproduce: just run hbck 100 times in a loop in 2 
different sessions and you can see that the balancer is set to false in the HMaster 
logs.
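
For concreteness, here is one interleaving that produces the permanent 
disablement (a hypothetical trace of two parallel hbck invocations, A and B, 
using the same admin calls as above):

{code:borderStyle=solid}
// hbck A starts; the balancer is currently ON
boolean oldA = admin.setBalancerRunning(false, true);   // oldA = true
// hbck B starts while A is still repairing; the balancer is already OFF
boolean oldB = admin.setBalancerRunning(false, true);   // oldB = false
// hbck A finishes and restores its saved state
admin.setBalancerRunning(oldA, false);                  // balancer ON again
// hbck B finishes and "restores" its saved state
admin.setBalancerRunning(oldB, false);                  // balancer OFF, permanently
{code}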



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10885) Support visibility expressions on Deletes

2014-06-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041702#comment-14041702
 ] 

Anoop Sam John commented on HBASE-10885:


Will review today Ram.  Sorry for the delay.

> Support visibility expressions on Deletes
> -
>
> Key: HBASE-10885
> URL: https://issues.apache.org/jira/browse/HBASE-10885
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 
> 10885-org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes-output.txt,
>  HBASE-10885_1.patch, HBASE-10885_2.patch, HBASE-10885_new_tag_type_1.patch, 
> HBASE-10885_new_tag_type_2.patch, HBASE-10885_v1.patch, HBASE-10885_v2.patch, 
> HBASE-10885_v2.patch, HBASE-10885_v2.patch, HBASE-10885_v3.patch, 
> HBASE-10885_v4.patch, HBASE-10885_v5.patch, HBASE-10885_v7.patch
>
>
> Accumulo can specify visibility expressions for delete markers. During 
> compaction the cells covered by the tombstone are determined in part by 
> matching the visibility expression. This is useful for the use case of data 
> set coalescing, where entries from multiple data sets carrying different 
> labels are combined into one common large table. Later, a subset of entries 
> can be conveniently removed using visibility expressions.
> Currently doing the same in HBase would only be possible with a custom 
> coprocessor. Otherwise, a Delete will affect all cells covered by the 
> tombstone regardless of any visibility expression scoping. This is correct 
> behavior in that no data spill is possible, but certainly could be 
> surprising, and is only meant to be transitional. We decided not to support 
> visibility expressions on Deletes to control the complexity of the initial 
> implementation.
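
As a rough sketch of what the client side could look like once Deletes carry 
visibility expressions (this assumes the Mutation-level CellVisibility API that 
Puts already use; method names are illustrative and may differ from the final 
patch):

{code}
Delete d = new Delete(Bytes.toBytes("row1"));
// Scope the tombstone to cells visible under the "datasetA" label (assumed semantics):
d.setCellVisibility(new CellVisibility("datasetA"));
d.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("q"));
table.delete(d);
{code}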



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11027) Remove kv.isDeleteXX() and related methods and use CellUtil apis.

2014-06-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041699#comment-14041699
 ] 

ramkrishna.s.vasudevan commented on HBASE-11027:


Will commit this unless objection.

> Remove kv.isDeleteXX() and related methods and use CellUtil apis.
> -
>
> Key: HBASE-11027
> URL: https://issues.apache.org/jira/browse/HBASE-11027
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-11027.patch
>
>
> We have code like 
> {code}
> kv.isLatestTimestamp() && kv.isDeleteType()
> {code}
> We could remove these and use CellUtil.isDeleteType() so that Cells can be 
> used directly instead of converting a Cell to a KeyValue.
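
In other words (a small sketch; only the CellUtil.isDeleteType() named above is 
assumed):

{code}
// Before: forces the caller to hold a KeyValue
if (kv.isDeleteType()) {
  // handle delete marker
}
// After: works against the Cell interface directly
if (CellUtil.isDeleteType(cell)) {
  // handle delete marker
}
{code}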



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10885) Support visibility expressions on Deletes

2014-06-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041694#comment-14041694
 ] 

ramkrishna.s.vasudevan commented on HBASE-10885:


Ping!!

> Support visibility expressions on Deletes
> -
>
> Key: HBASE-10885
> URL: https://issues.apache.org/jira/browse/HBASE-10885
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.98.1
>Reporter: Andrew Purtell
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 
> 10885-org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes-output.txt,
>  HBASE-10885_1.patch, HBASE-10885_2.patch, HBASE-10885_new_tag_type_1.patch, 
> HBASE-10885_new_tag_type_2.patch, HBASE-10885_v1.patch, HBASE-10885_v2.patch, 
> HBASE-10885_v2.patch, HBASE-10885_v2.patch, HBASE-10885_v3.patch, 
> HBASE-10885_v4.patch, HBASE-10885_v5.patch, HBASE-10885_v7.patch
>
>
> Accumulo can specify visibility expressions for delete markers. During 
> compaction the cells covered by the tombstone are determined in part by 
> matching the visibility expression. This is useful for the use case of data 
> set coalescing, where entries from multiple data sets carrying different 
> labels are combined into one common large table. Later, a subset of entries 
> can be conveniently removed using visibility expressions.
> Currently doing the same in HBase would only be possible with a custom 
> coprocessor. Otherwise, a Delete will affect all cells covered by the 
> tombstone regardless of any visibility expression scoping. This is correct 
> behavior in that no data spill is possible, but certainly could be 
> surprising, and is only meant to be transitional. We decided not to support 
> visibility expressions on Deletes to control the complexity of the initial 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11400) Edit, consolidate, and update Compression and data encoding docs

2014-06-23 Thread Aleksandr Shulman (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041671#comment-14041671
 ] 

Aleksandr Shulman commented on HBASE-11400:
---

Thanks for taking this up, [~misty]. I'll have a look.

> Edit, consolidate, and update Compression and data encoding docs
> 
>
> Key: HBASE-11400
> URL: https://issues.apache.org/jira/browse/HBASE-11400
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
>Priority: Minor
> Attachments: HBASE-11400.patch
>
>
> Current docs are here: http://hbase.apache.org/book.html#compression.test
> It could use some editing and expansion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11400) Edit, consolidate, and update Compression and data encoding docs

2014-06-23 Thread Aleksandr Shulman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Shulman updated HBASE-11400:
--

  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)
   Summary: Edit, consolidate, and update Compression and data encoding 
docs  (was: Edit, colsolidate, and update Compression and data encoding docs)

> Edit, consolidate, and update Compression and data encoding docs
> 
>
> Key: HBASE-11400
> URL: https://issues.apache.org/jira/browse/HBASE-11400
> Project: HBase
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
>Priority: Minor
> Attachments: HBASE-11400.patch
>
>
> Current docs are here: http://hbase.apache.org/book.html#compression.test
> It could use some editing and expansion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11400) Edit, colsolidate, and update Compression and data encoding docs

2014-06-23 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11400:


Attachment: HBASE-11400.patch

There is some pure speculation here on my part, though I researched and read 
the code as best I could. I split 'codecs' up into 'compressors' and 'data 
block encoders' because that seemed to be how they are really used. Let me know 
what you think. This came about because of a request from [~aleksshulman], so 
maybe he will take a look.

> Edit, colsolidate, and update Compression and data encoding docs
> 
>
> Key: HBASE-11400
> URL: https://issues.apache.org/jira/browse/HBASE-11400
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11400.patch
>
>
> Current docs are here: http://hbase.apache.org/book.html#compression.test
> It could use some editing and expansion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11400) Edit, colsolidate, and update Compression and data encoding docs

2014-06-23 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11400:


Status: Patch Available  (was: Open)

> Edit, colsolidate, and update Compression and data encoding docs
> 
>
> Key: HBASE-11400
> URL: https://issues.apache.org/jira/browse/HBASE-11400
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
> Attachments: HBASE-11400.patch
>
>
> Current docs are here: http://hbase.apache.org/book.html#compression.test
> It could use some editing and expansion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11274) More general single-row Condition Mutation

2014-06-23 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-11274:


Attachment: HBASE-11274-trunk-v2.diff

Updated per Anoop Sam John's review.

> More general single-row Condition Mutation
> --
>
> Key: HBASE-11274
> URL: https://issues.apache.org/jira/browse/HBASE-11274
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HBASE-11274-trunk-v1.diff, HBASE-11274-trunk-v2.diff
>
>
> Currently, the checkAndDelete and checkAndPut interfaces only support atomic 
> mutation with a single condition. But in real applications, we need more general 
> conditional mutation that supports multiple conditions and logical expressions 
> over those conditions.
> For example, to support the following sql
> {quote}
>   insert row  where (column A == 'X' and column B == 'Y') or (column C == 'z')
> {quote}
> Suggestions are welcomed.
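
Judging from the long lines flagged in the QA precommit report at the top of 
this digest, the v2 API would be used roughly like this (a sketch reconstructed 
from those quoted lines; QUALIFIER_A and QUALIFIER_C are illustrative constants 
and the exact signatures are unverified):

{code}
SingleColumnValueCondition condA =
    new SingleColumnValueCondition(ROW, FAMILY, QUALIFIER_A, Bytes.toBytes("X"));
SingleColumnValueCondition condC =
    new SingleColumnValueCondition(ROW, FAMILY, QUALIFIER_C, CompareOp.GREATER,
        Bytes.toBytes("z"));
conditions.addCondition(condA);
conditions.addCondition(condC);
{code}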



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11400) Edit, colsolidate, and update Compression and data encoding docs

2014-06-23 Thread Misty Stanley-Jones (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misty Stanley-Jones updated HBASE-11400:


Summary: Edit, colsolidate, and update Compression and data encoding docs  
(was: Edit, colsolidate, and update CompressionTest docs)

> Edit, colsolidate, and update Compression and data encoding docs
> 
>
> Key: HBASE-11400
> URL: https://issues.apache.org/jira/browse/HBASE-11400
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Misty Stanley-Jones
>Assignee: Misty Stanley-Jones
>
> Current docs are here: http://hbase.apache.org/book.html#compression.test
> It could use some editing and expansion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11401) Issue with seqNo binding for KV mvcc

2014-06-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041660#comment-14041660
 ] 

Anoop Sam John commented on HBASE-11401:


bq. So it should be appendNoSync() + sync() + memstore order
Yes.  If we can make this work (without a perf impact) we can get rid of the 
rollback in Memstore.

> Issue with seqNo binding for KV mvcc
> 
>
> Key: HBASE-11401
> URL: https://issues.apache.org/jira/browse/HBASE-11401
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0
>
>
> After HBASE-8763, we have combined the KV mvcc and the HLog seqNo. This is 
> implemented in a tricky way now.
> In HRegion on the write path, we first write to the memstore, then write to the 
> HLog, and finally sync the log. So at the time of the memstore write we don't 
> know the WAL seqNo.  To overcome this, we hold refs to the KV objects just 
> added to the memstore and pass those along to the write-to-WAL call. Once the 
> seqNo is obtained, we reset the mvcc in those KVs to this seqNo.  (While 
> writing to the memstore we gave the kvs a very high temporary mvcc value so 
> that concurrent readers won't see them.)
> This model works well with the DefaultMemstore.  During the write there won't 
> be any concurrent call to snapshot(). 
> But now the memstore is a pluggable interface. The above model of late binding 
> assumes that the memstore's internal data structures continue to refer to the 
> same java objects. This might not always be true.  As in HBASE-10713, in 
> between, the kvs can be converted into a CellBlock. If we stop referring to 
> the same KV java objects, we will fail to get the seqNo assigned as the kv 
> mvcc.
> If we wrote and synced to the wal and then wrote to the memstore, this would 
> be solved. But we changed this model (in 94, I believe) for better perf: 
> under the HRegion-level lock, we write to the memstore and then to the wal, 
> and finally, outside the lock, we do the log sync.  So we can not change it 
> now.
> I tried changing the order of ops within the lock (i.e. write to the log and 
> then to the memstore) so that we can get the seqNo when writing to the 
> memstore. But because of the new HLog write model, we are not guaranteed to 
> get the write done immediately. 
> One possible approach is to add a new API at the Log level to get the next 
> seqNo alone. Call this first, then write to the memstore and then to the wal 
> using this seqNo.  Just a random thought. Not tried.
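
To make the ordering discussion concrete, a simplified comparison of the two 
write paths (pseudocode only, not the actual HRegion code):

{code}
// Current order (under the HRegion-level lock):
//   memstore.add(kvs with a temporary, very high mvcc)
//   seqNo = hlog.appendNoSync(...)          // seqNo becomes known only here
//   rebind the mvcc of the held KV refs to seqNo
//   ...release lock, then hlog.sync()       // roll back the memstore if sync fails
//
// Order discussed above:
//   seqNo = hlog.appendNoSync(...)
//   hlog.sync()
//   memstore.add(kvs already carrying seqNo as their mvcc)
// No late rebinding and no memstore rollback, at a possible perf cost.
{code}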



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11401) Issue with seqNo binding for KV mvcc

2014-06-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041659#comment-14041659
 ] 

Anoop Sam John commented on HBASE-11401:


bq. reordered the write path so that it was appendNoSync + write to memstore + 
sync(). 
Yes, I also tried the same last week but failed because of the new disruptor WAL 
sync.

bq. Let me do a perf run of what it would be like if the add-to-memstore came 
after the sync wait.
Thanks. Waiting for your result.  :)
bq. My guess is overall throughput in a PE-type setup would not change much but 
if a hot row, could make a big difference.
Yeah, I think so too. Let us see..


> Issue with seqNo binding for KV mvcc
> 
>
> Key: HBASE-11401
> URL: https://issues.apache.org/jira/browse/HBASE-11401
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0
>
>
> After HBASE-8763, we have combined the KV mvcc and the HLog seqNo. This is 
> implemented in a tricky way now.
> In HRegion on the write path, we first write to the memstore, then write to the 
> HLog, and finally sync the log. So at the time of the memstore write we don't 
> know the WAL seqNo.  To overcome this, we hold refs to the KV objects just 
> added to the memstore and pass those along to the write-to-WAL call. Once the 
> seqNo is obtained, we reset the mvcc in those KVs to this seqNo.  (While 
> writing to the memstore we gave the kvs a very high temporary mvcc value so 
> that concurrent readers won't see them.)
> This model works well with the DefaultMemstore.  During the write there won't 
> be any concurrent call to snapshot(). 
> But now the memstore is a pluggable interface. The above model of late binding 
> assumes that the memstore's internal data structures continue to refer to the 
> same java objects. This might not always be true.  As in HBASE-10713, in 
> between, the kvs can be converted into a CellBlock. If we stop referring to 
> the same KV java objects, we will fail to get the seqNo assigned as the kv 
> mvcc.
> If we wrote and synced to the wal and then wrote to the memstore, this would 
> be solved. But we changed this model (in 94, I believe) for better perf: 
> under the HRegion-level lock, we write to the memstore and then to the wal, 
> and finally, outside the lock, we do the log sync.  So we can not change it 
> now.
> I tried changing the order of ops within the lock (i.e. write to the log and 
> then to the memstore) so that we can get the seqNo when writing to the 
> memstore. But because of the new HLog write model, we are not guaranteed to 
> get the write done immediately. 
> One possible approach is to add a new API at the Log level to get the next 
> seqNo alone. Call this first, then write to the memstore and then to the wal 
> using this seqNo.  Just a random thought. Not tried.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11398) Print the stripes' state with file size info

2014-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041651#comment-14041651
 ] 

Hudson commented on HBASE-11398:


SUCCESS: Integrated in HBase-TRUNK #5227 (See 
[https://builds.apache.org/job/HBase-TRUNK/5227/])
HBASE-11398 Print the stripes' state with file size info (Victor Xu) (tedyu: 
rev 092fc71b182f3c364f5cb3b2e793ef5c78a0c4bc)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java


> Print the stripes' state with file size info
> 
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11376) IntegrationTestBigLinkedList's Generator tool does not generate keys belonging to all regions in a large table.

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041644#comment-14041644
 ] 

Hadoop QA commented on HBASE-11376:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651918/HBASE-11376_3.patch
  against trunk revision .
  ATTACHMENT ID: 12651918

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9833//console

This message is automatically generated.

> IntegrationTestBigLinkedList's Generator tool does not generate keys 
> belonging to all regions in a large table.
> ---
>
> Key: HBASE-11376
> URL: https://issues.apache.org/jira/browse/HBASE-11376
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-11376_1.patch, HBASE-11376_2.patch, 
> HBASE-11376_3.patch
>
>
> When IntegrationTestBigLinkedList's generator tool is used to generate keys 
> for a large table (2200 regions), only some regions have keys and others are 
> empty. It would be good to generate keys for all the regions of the table. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11380:
---

Hadoop Flags: Reviewed

Integrated to 0.98 and trunk.

Thanks for the reviews.

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11380-v1.txt, 11380-v2.txt, 11380-v3.txt, 
> HBASE-11380-v2-0.98.3.txt
>
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: 
> org.apache.hadoop.hbase.errorhandling.ForeignExc

[jira] [Commented] (HBASE-11398) Print the stripes' state with file size info

2014-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041614#comment-14041614
 ] 

Hudson commented on HBASE-11398:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #332 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/332/])
HBASE-11398 Print the stripes' state with file size info (tedyu: rev 
482f7f31101116673c8849d15ab4f589622668e5)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java


> Print the stripes' state with file size info
> 
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11297) Remove some synchros in the rpcServer responder

2014-06-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041607#comment-14041607
 ] 

Andrew Purtell commented on HBASE-11297:


I measure a small trade of throughput for latency on most workloads. I 
piggybacked this on something else, so the test was run on 8u20 b17. I would 
expect very similar results with 7u60.
\\
\\
||Workload A||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100996|100738|
|OVERALL Throughput(ops/sec)|99013|99267|
|READ Operations|4999690|5000163|
|READ AverageLatency(us)|651.6789|651.97|
|READ MinLatency(us)|255|260|
|READ MaxLatency(us)|655882|651083|
|READ 95thPercentileLatency(ms)|0|0|
|READ 99thPercentileLatency(ms) |4|5|
|UPDATE Operations|5000470|496|
|UPDATE AverageLatency(us)|32.80|27.25|
|UPDATE MinLatency(us)|0|0|
|UPDATE MaxLatency(us)|652182|687377|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|
\\
||Workload B||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100536|100480|
|OVERALL Throughput(ops/sec)|99466|99521|
|READ Operations|9499944|9499498|
|READ AverageLatency(us)|599.54|653.32|
|READ MinLatency(us)|259|271|
|READ MaxLatency(us)|710357|727001|
|READ 95thPercentileLatency(ms)|0|1|
|READ 99thPercentileLatency(ms)|3|4|
|UPDATE Operations|500216|500661|
|UPDATE AverageLatency(us)|119.18|145.18|
|UPDATE MinLatency(us)|0|0|
|UPDATE MaxLatency(us)|664203|666895|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|
\\
||Workload C||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100021|100021|
|OVERALL Throughput(ops/sec)|99979|99979|
|READ Operations|100|100|
|READ AverageLatency(us)|543.22|550.31|
|READ MinLatency(us)|256|259|
|READ MaxLatency(us)|735497|721836|
|READ 95thPercentileLatency(ms)|0|0|
|READ 99thPercentileLatency(ms)|4|4|
\\
||Workload D||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|103916|103250|
|OVERALL Throughput(ops/sec)|96237|96854|
|READ Operations|9500043|9499220|
|READ AverageLatency(us)|662.044|674.25|
|READ MinLatency(us)|262|261|
|READ MaxLatency(us)|1054555|742158|
|READ 95thPercentileLatency(ms)|1|0|
|READ 99thPercentileLatency(ms)|4|5|
|INSERT Operations|4999567|500780|
|INSERT AverageLatency(us)|14.38|15.61|
|INSERT MinLatency(us)|4|4|
|INSERT MaxLatency(us)|492058|482944|
|INSERT 95thPercentileLatency(ms)|0|0|
|INSERT 99thPercentileLatency(ms)|0|0|
\\
||Workload E||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|1302270|1309841|
|OVERALL Throughput(ops/sec)|7823||
|INSERT Operations|499441|499751|
|INSERT AverageLatency(us)|18.06|15.86|
|INSERT MinLatency(us)|5|5|
|INSERT MaxLatency(us)|544490|576905|
|INSERT 95thPercentileLatency(ms)|0|0|
|INSERT 99thPercentileLatency(ms)|0|0|
|SCAN Operations|9500559|9500248|
|SCAN AverageLatency(us)|21637.25|21770.74|
|SCAN MinLatency(us)|750|770|
|SCAN MaxLatency(us)|3134120|3399749|
|SCAN 95thPercentileLatency(ms)|124|123|
|SCAN 99thPercentileLatency(ms)|171|174|
\\
||Workload F||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|95554|94002|
|OVERALL Throughput(ops/sec)|92446|92670|
|READ Operations|9333650|950|
|READ AverageLatency(us)|701.67|716.06|
|READ MinLatency(us)|264|272|
|READ MaxLatency(us)|773925|745086|
|READ 95thPercentileLatency(ms)|1|1|
|READ 99thPercentileLatency(ms)|8|7|
|READ-MODIFY-WRITE Operations|4666526|4667535|
|READ-MODIFY-WRITE AverageLatency(us)|706.91|719.93|
|READ-MODIFY-WRITE MinLatency(us)|268|276|
|READ-MODIFY-WRITE MaxLatency(us)|773996|738262|
|READ-MODIFY-WRITE 95thPercentileLatency(ms)|1|1|
|READ-MODIFY-WRITE 99thPercentileLatency(ms)|8|7|
|UPDATE Operations|475|4667684|
|UPDATE AverageLatency(us)|23.40|17.86|
|UPDATE MinLatency(us)|0|1|
|UPDATE MaxLatency(us)|1264590|1261227|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|

> Remove some synchros in the rpcServer responder
> ---
>
> Key: HBASE-11297
> URL: https://issues.apache.org/jira/browse/HBASE-11297
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11297.v1.patch, 11297.v2.patch, 11297.v2.v98.patch, 
> 11297.v3.patch
>
>
> This is on top of another patch that I'm going to put into another jira.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11297) Remove some synchros in the rpcServer responder

2014-06-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041607#comment-14041607
 ] 

Andrew Purtell edited comment on HBASE-11297 at 6/24/14 2:16 AM:
-

I measure a small trade of latency for throughput on most workloads. I 
piggybacked this on something else, so the test was run on 8u20 b17. I would 
expect very similar results with 7u60.
\\
\\
||Workload A||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100996|100738|
|OVERALL Throughput(ops/sec)|99013|99267|
|READ Operations|4999690|5000163|
|READ AverageLatency(us)|651.6789|651.97|
|READ MinLatency(us)|255|260|
|READ MaxLatency(us)|655882|651083|
|READ 95thPercentileLatency(ms)|0|0|
|READ 99thPercentileLatency(ms) |4|5|
|UPDATE Operations|5000470|496|
|UPDATE AverageLatency(us)|32.80|27.25|
|UPDATE MinLatency(us)|0|0|
|UPDATE MaxLatency(us)|652182|687377|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|
\\
||Workload B||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100536|100480|
|OVERALL Throughput(ops/sec)|99466|99521|
|READ Operations|9499944|9499498|
|READ AverageLatency(us)|599.54|653.32|
|READ MinLatency(us)|259|271|
|READ MaxLatency(us)|710357|727001|
|READ 95thPercentileLatency(ms)|0|1|
|READ 99thPercentileLatency(ms)|3|4|
|UPDATE Operations|500216|500661|
|UPDATE AverageLatency(us)|119.18|145.18|
|UPDATE MinLatency(us)|0|0|
|UPDATE MaxLatency(us)|664203|666895|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|
\\
||Workload C||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100021|100021|
|OVERALL Throughput(ops/sec)|99979|99979|
|READ Operations|100|100|
|READ AverageLatency(us)|543.22|550.31|
|READ MinLatency(us)|256|259|
|READ MaxLatency(us)|735497|721836|
|READ 95thPercentileLatency(ms)|0|0|
|READ 99thPercentileLatency(ms)|4|4|
\\
||Workload D||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|103916|103250|
|OVERALL Throughput(ops/sec)|96237|96854|
|READ Operations|9500043|9499220|
|READ AverageLatency(us)|662.044|674.25|
|READ MinLatency(us)|262|261|
|READ MaxLatency(us)|1054555|742158|
|READ 95thPercentileLatency(ms)|1|0|
|READ 99thPercentileLatency(ms)|4|5|
|INSERT Operations|4999567|500780|
|INSERT AverageLatency(us)|14.38|15.61|
|INSERT MinLatency(us)|4|4|
|INSERT MaxLatency(us)|492058|482944|
|INSERT 95thPercentileLatency(ms)|0|0|
|INSERT 99thPercentileLatency(ms)|0|0|
\\
||Workload E||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|1302270|1309841|
|OVERALL Throughput(ops/sec)|7823||
|INSERT Operations|499441|499751|
|INSERT AverageLatency(us)|18.06|15.86|
|INSERT MinLatency(us)|5|5|
|INSERT MaxLatency(us)|544490|576905|
|INSERT 95thPercentileLatency(ms)|0|0|
|INSERT 99thPercentileLatency(ms)|0|0|
|SCAN Operations|9500559|9500248|
|SCAN AverageLatency(us)|21637.25|21770.74|
|SCAN MinLatency(us)|750|770|
|SCAN MaxLatency(us)|3134120|3399749|
|SCAN 95thPercentileLatency(ms)|124|123|
|SCAN 99thPercentileLatency(ms)|171|174|
\\
||Workload F||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|95554|94002|
|OVERALL Throughput(ops/sec)|92446|92670|
|READ Operations|9333650|950|
|READ AverageLatency(us)|701.67|716.06|
|READ MinLatency(us)|264|272|
|READ MaxLatency(us)|773925|745086|
|READ 95thPercentileLatency(ms)|1|1|
|READ 99thPercentileLatency(ms)|8|7|
|READ-MODIFY-WRITE Operations|4666526|4667535|
|READ-MODIFY-WRITE AverageLatency(us)|706.91|719.93|
|READ-MODIFY-WRITE MinLatency(us)|268|276|
|READ-MODIFY-WRITE MaxLatency(us)|773996|738262|
|READ-MODIFY-WRITE 95thPercentileLatency(ms)|1|1|
|READ-MODIFY-WRITE 99thPercentileLatency(ms)|8|7|
|UPDATE Operations|475|4667684|
|UPDATE AverageLatency(us)|23.40|17.86|
|UPDATE MinLatency(us)|0|1|
|UPDATE MaxLatency(us)|1264590|1261227|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|


was (Author: apurtell):
I measure a small trade of throughput for latency on most workloads. I 
piggybacked this on something else, so the test was run on 8u20 b17. I would 
expect very similar results with 7u60.
\\
\\
||Workload A||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100996|100738|
|OVERALL Throughput(ops/sec)|99013|99267|
|READ Operations|4999690|5000163|
|READ AverageLatency(us)|651.6789|651.97|
|READ MinLatency(us)|255|260|
|READ MaxLatency(us)|655882|651083|
|READ 95thPercentileLatency(ms)|0|0|
|READ 99thPercentileLatency(ms) |4|5|
|UPDATE Operations|5000470|496|
|UPDATE AverageLatency(us)|32.80|27.25|
|UPDATE MinLatency(us)|0|0|
|UPDATE MaxLatency(us)|652182|687377|
|UPDATE 95thPercentileLatency(ms)|0|0|
|UPDATE 99thPercentileLatency(ms)|0|0|
\\
||Workload B||0.98.4-SNAPSHOT||0.98.4-SNAPSHOT+11297||
|OVERALL RunTime(ms)|100536|100480|
|OVERALL Throughput(ops/sec)|99466|99521|
|READ Operations|9499944|9499498|
|READ AverageLatency(us)|599.5

[jira] [Commented] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method "getPeer" of the class ReplicationPeersZKImpl

2014-06-23 Thread Qianxi Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041597#comment-14041597
 ] 

Qianxi Zhang commented on HBASE-11388:
--

thanks [~jdcryans]. That is a good idea, and I will do it.

> The order parameter is wrong when invoking the constructor of the 
> ReplicationPeer In the method "getPeer" of the class ReplicationPeersZKImpl
> -
>
> Key: HBASE-11388
> URL: https://issues.apache.org/jira/browse/HBASE-11388
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE_11388.patch
>
>
> The parameters are "Configuration", "ClusterKey" and "id" in the constructor 
> of the class ReplicationPeer. But the parameter order is "Configuration", 
> "id" and "ClusterKey" when invoking the constructor of the ReplicationPeer in 
> the method "getPeer" of the class ReplicationPeersZKImpl.
> ReplicationPeer#76
> {code}
>   public ReplicationPeer(Configuration conf, String key, String id) throws 
> ReplicationException {
> this.conf = conf;
> this.clusterKey = key;
> this.id = id;
> try {
>   this.reloadZkWatcher();
> } catch (IOException e) {
>   throw new ReplicationException("Error connecting to peer cluster with 
> peerId=" + id, e);
> }
>   }
> {code}
> ReplicationPeersZKImpl#498
> {code}
> ReplicationPeer peer =
> new ReplicationPeer(peerConf, peerId, 
> ZKUtil.getZooKeeperClusterKey(peerConf));
> {code}
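
The fix (a sketch) is simply to pass the cluster key before the id, matching the 
constructor's declared parameter order:

{code}
ReplicationPeer peer =
    new ReplicationPeer(peerConf, ZKUtil.getZooKeeperClusterKey(peerConf), peerId);
{code}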



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11398) Print the stripes' state with file size info

2014-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041586#comment-14041586
 ] 

Hudson commented on HBASE-11398:


SUCCESS: Integrated in HBase-0.98 #350 (See 
[https://builds.apache.org/job/HBase-0.98/350/])
HBASE-11398 Print the stripes' state with file size info (tedyu: rev 
482f7f31101116673c8849d15ab4f589622668e5)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFileManager.java


> Print the stripes' state with file size info
> 
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11394) Replication can have data loss if peer id contains hyphen "-"

2014-06-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041584#comment-14041584
 ] 

Enis Soztutar commented on HBASE-11394:
---

Do you have a patch? It should be easy to just do a check in addPeer(). 

> Replication can have data loss if peer id contains hyphen "-"
> -
>
> Key: HBASE-11394
> URL: https://issues.apache.org/jira/browse/HBASE-11394
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 0.99.0, 0.98.4
>
>
> This is an extension to HBASE-8207. It seems that there is no check on the 
> format of the peer id string (the short name for the replication peer). So 
> if a peer id contains "-", it will cause silent data loss on server 
> failure. 
> I did not verify the claim via testing, though; this is purely from 
> reading the code. 
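
A minimal sketch of such a check in addPeer(), as suggested above (hypothetical; 
the exception type and message are illustrative only):

{code}
// Reject peer ids that would break the "-"-separated znode naming from HBASE-8207:
if (id.contains("-")) {
  throw new IllegalArgumentException("Invalid peer id, must not contain '-': " + id);
}
{code}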



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11394) Replication can have data loss if peer id contains hyphen "-"

2014-06-23 Thread Jieshan Bean (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041580#comment-14041580
 ] 

Jieshan Bean commented on HBASE-11394:
--

 I have the same doubt.  We have added a restriction to the peer-id name in our 
private version. 



> Replication can have data loss if peer id contains hyphen "-"
> -
>
> Key: HBASE-11394
> URL: https://issues.apache.org/jira/browse/HBASE-11394
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 0.99.0, 0.98.4
>
>
> This is an extension to HBASE-8207. It seems that there is no check on the 
> format of the peer id string (the short name for the replication peer). So 
> if a peer id contains "-", it will cause silent data loss on server 
> failure. 
> I did not verify the claim via testing, though; this is purely from 
> reading the code. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11401) Issue with seqNo binding for KV mvcc

2014-06-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041577#comment-14041577
 ] 

Enis Soztutar commented on HBASE-11401:
---

bq. If we were doing write and sync to wal and then write to memstore, this 
would have been solved
My initial poc patch at HBASE-8763 actually reordered the write path so that it 
was appendNoSync + write to memstore + sync(). But this was before the late 
binding from the disruptor, which actually does the ordering and seqNum 
assignment. So it should be appendNoSync() + sync() + memstore order, I guess. 

> Issue with seqNo binding for KV mvcc
> 
>
> Key: HBASE-11401
> URL: https://issues.apache.org/jira/browse/HBASE-11401
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0
>
>
> After HBASE-8763, we have combined the KV mvcc and the HLog seqNo. This is 
> implemented in a tricky way now.
> In HRegion on the write path, we first write to the memstore, then write to the 
> HLog, and finally sync the log. So at the time of the memstore write we don't 
> know the WAL seqNo.  To overcome this, we hold refs to the KV objects just 
> added to the memstore and pass those along to the write-to-WAL call. Once the 
> seqNo is obtained, we reset the mvcc in those KVs to this seqNo.  (While 
> writing to the memstore we gave the kvs a very high temporary mvcc value so 
> that concurrent readers won't see them.)
> This model works well with the DefaultMemstore.  During the write there won't 
> be any concurrent call to snapshot(). 
> But now the memstore is a pluggable interface. The above model of late binding 
> assumes that the memstore's internal data structures continue to refer to the 
> same java objects. This might not always be true.  As in HBASE-10713, in 
> between, the kvs can be converted into a CellBlock. If we stop referring to 
> the same KV java objects, we will fail to get the seqNo assigned as the kv 
> mvcc.
> If we wrote and synced to the wal and then wrote to the memstore, this would 
> be solved. But we changed this model (in 94, I believe) for better perf: 
> under the HRegion-level lock, we write to the memstore and then to the wal, 
> and finally, outside the lock, we do the log sync.  So we can not change it 
> now.
> I tried changing the order of ops within the lock (i.e. write to the log and 
> then to the memstore) so that we can get the seqNo when writing to the 
> memstore. But because of the new HLog write model, we are not guaranteed to 
> get the write done immediately. 
> One possible approach is to add a new API at the Log level to get the next 
> seqNo alone. Call this first, then write to the memstore and then to the wal 
> using this seqNo.  Just a random thought. Not tried.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041568#comment-14041568
 ] 

Enis Soztutar commented on HBASE-11380:
---

Let's commit this then. Thanks Craig for reporting. 

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11380-v1.txt, 11380-v2.txt, 11380-v3.txt, 
> HBASE-11380-v2-0.98.3.txt
>
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Caused by: 
> org.apa

[jira] [Commented] (HBASE-11376) IntegrationTestBigLinkedList's Generator tool does not generate keys belonging to all regions in a large table.

2014-06-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041563#comment-14041563
 ] 

Enis Soztutar commented on HBASE-11376:
---

bq. Should I be changing the title of the JIRA ?
That would be good. 

+1. I'll commit the patch after hadoopqa. Thanks Vandana. 

> IntegrationTestBigLinkedList's Generator tool does not generate keys 
> belonging to all regions in a large table.
> ---
>
> Key: HBASE-11376
> URL: https://issues.apache.org/jira/browse/HBASE-11376
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-11376_1.patch, HBASE-11376_2.patch, 
> HBASE-11376_3.patch
>
>
> When IntegrationTestBigLinkedList's generator tool is used to generate keys 
> for a large table (> 2200 regions), only some regions have keys and others 
> are empty. It would be good to generate keys for all the regions of the table. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11376) IntegrationTestBigLinkedList's Generator tool does not generate keys belonging to all regions in a large table.

2014-06-23 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11376:
--

Status: Patch Available  (was: Open)

> IntegrationTestBigLinkedList's Generator tool does not generate keys 
> belonging to all regions in a large table.
> ---
>
> Key: HBASE-11376
> URL: https://issues.apache.org/jira/browse/HBASE-11376
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-11376_1.patch, HBASE-11376_2.patch, 
> HBASE-11376_3.patch
>
>
> When IntegrationTestBigLinkedList's generator tool is used to generate keys 
> for a large table (> 2200 regions), only some regions have keys and others 
> are empty. It would be good to generate keys for all the regions of the table. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041496#comment-14041496
 ] 

Hudson commented on HBASE-11404:


FAILURE: Integrated in HBase-TRUNK #5226 (See 
[https://builds.apache.org/job/HBase-TRUNK/5226/])
HBASE-11404 TestLogLevel should stop the server at the end (jxiang: rev 
54a5375710960257cce67783eadcc8b5740b99a4)
* hbase-server/src/test/java/org/apache/hadoop/hbase/http/log/TestLogLevel.java


> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Fix For: 0.99.0
>
> Attachments: hbase-11404.patch, hbase-11404_v2.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11102) Document JDK versions supported by each release

2014-06-23 Thread Misty Stanley-Jones (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041407#comment-14041407
 ] 

Misty Stanley-Jones commented on HBASE-11102:
-

Do we want to have a general goal of purging the Ref Guide of pre-0.94 info, 
for consistency's sake?

> Document JDK versions supported by each release
> ---
>
> Key: HBASE-11102
> URL: https://issues.apache.org/jira/browse/HBASE-11102
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
> Fix For: 0.99.0
>
> Attachments: HBASE-11102-1.patch, HBASE-11102.patch
>
>
> We can make use of a JDK version x HBase version matrix to explain which JDK 
> version is supported and required. 
> 0.94, 0.96, and 0.98 releases all support JDK6 and JDK7. For 1.0, there is a 
> discussion thread to decide whether to drop JDK6 support. 
> There has been some work to support JDK8. We can also document that. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11401) Issue with seqNo binding for KV mvcc

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041377#comment-14041377
 ] 

stack commented on HBASE-11401:
---

I wonder if we could do away w/ the row lock.  It has been speculated before.  
Could we do isolation on the back of MVCC alone?

Let me do a perf run of what it would be like if the add-to-memstore came after 
the sync wait.  My guess is that overall throughput in a PE-type setup would 
not change much, but with a hot row it could make a big difference.

> Issue with seqNo binding for KV mvcc
> 
>
> Key: HBASE-11401
> URL: https://issues.apache.org/jira/browse/HBASE-11401
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Anoop Sam John
>Priority: Critical
> Fix For: 0.99.0
>
>
> After HBASE-8763, we have combined the KV mvcc and the HLog seqNo. This is 
> implemented in a tricky way now.
> In HRegion's write path, we first write to the memstore, then write to the 
> HLog, and finally sync the log. So at the time of the memstore write we 
> don't know the WAL seqNo.  To overcome this, we hold references to the KV 
> objects just added to the memstore and pass those to the write-to-WAL call 
> as well. Once the seqNo is obtained, we reset the mvcc in those KVs to this 
> seqNo.  (While writing to the memstore we write the KVs with a very high 
> temporary mvcc value so that concurrent readers won't see them.)
> This model works well with the DefaultMemstore.  During the write there 
> won't be any concurrent call to snapshot(). 
> But now we have the memstore as a pluggable interface. The above model of 
> late binding assumes that the memstore's internal data structure continues 
> to refer to the same Java objects. This might not always be true.  As in 
> HBASE-10713, in between, the KVs can be converted into a CellBlock. If we 
> no longer refer to the same KV Java objects, we will fail to get the seqNo 
> assigned as the KV mvcc.
> If we wrote and synced to the WAL and then wrote to the memstore, this 
> would be solved. But we changed that model (in 0.94, I believe) for better 
> performance. Under the HRegion-level lock, we write to the memstore and 
> then to the WAL. Finally, outside the lock, we do the log sync.  So we 
> cannot change it now.
> I tried changing the order of operations within the lock (i.e. write to 
> the log and then to the memstore) so that we have the seqNo when writing 
> to the memstore. But because of the new HLog write model, we are not 
> guaranteed that the write completes immediately. 
> One possible way would be to add a new API at the log level to obtain the 
> next seqNo alone: call this first, then write to the memstore and then to 
> the WAL using this seqNo.  Just a random thought; not tried.
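A conceptual sketch of the late-binding model described above (the method 
names setMvccVersion() and appendAndGetSequenceId() are illustrative here, 
not the actual HBase APIs):

{code}
// Memstore writes use a placeholder mvcc that is patched once the WAL
// append hands back the real sequence number.
List<KeyValue> added = new ArrayList<KeyValue>();
for (KeyValue kv : mutationKvs) {
  kv.setMvccVersion(Long.MAX_VALUE);  // readers skip KVs above their read point
  memstore.add(kv);
  added.add(kv);                      // hold the reference for late binding
}
long seqNo = wal.appendAndGetSequenceId(walEdit);  // seqNo known only here
for (KeyValue kv : added) {
  kv.setMvccVersion(seqNo);           // breaks if the memstore no longer
}                                     // holds these exact objects
{code}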



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11396) Invalid meta entries can lead to unstartable master

2014-06-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041350#comment-14041350
 ] 

Ted Yu commented on HBASE-11396:


lgtm

> Invalid meta entries can lead to unstartable master
> ---
>
> Key: HBASE-11396
> URL: https://issues.apache.org/jira/browse/HBASE-11396
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.3
>Reporter: Craig Condit
> Attachments: HBASE-11396-v1.patch
>
>
> Recently I accidentally kill -9'd all regionservers in my cluster (don't 
> ask.. not my finest moment).
> This led to some corruption of the meta table, causing the following 
> exception to be output on the HBase Master during startup, and culminating 
> with the Master aborting:
> {noformat}
> java.lang.IllegalArgumentException: Wrong length: 13, expected 8
>   at 
> org.apache.hadoop.hbase.util.Bytes.explainWrongLengthOrOffset(Bytes.java:600)
>   at org.apache.hadoop.hbase.util.Bytes.toLong(Bytes.java:578)
>   at 
> org.apache.hadoop.hbase.HRegionInfo.getServerName(HRegionInfo.java:1059)
>   at 
> org.apache.hadoop.hbase.HRegionInfo.getHRegionInfoAndServerName(HRegionInfo.java:987)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.rebuildUserRegions(AssignmentManager.java:2678)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:465)
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:910)
>   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> No tools that I am aware of were able to clean this up. I added a short patch 
> to catch and log this exception and return null from 
> HRegionInfo.getServerName(). This allowed the cluster to start up and hbase 
> hbck to repair the damage.
> Creating this ticket to submit the patch in case anyone else finds it useful.
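A minimal sketch of the workaround described (the variable names are assumed 
from the stack trace above, not copied from the attached patch):

{code}
// Inside HRegionInfo.getServerName(): tolerate a malformed server column
// instead of letting the decode error abort master startup.
try {
  startcode = Bytes.toLong(startcodeBytes);  // throws on corrupt entries
} catch (IllegalArgumentException e) {
  LOG.warn("Ignoring corrupt server name in meta: " + e.getMessage());
  return null;  // startup proceeds; hbase hbck can repair meta afterwards
}
{code}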



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11398) Print the stripes' state with file size info

2014-06-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11398:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch, Victor.

> Print the stripes' state with file size info
> 
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11398) Print the stripes' state with file size info

2014-06-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11398:
---

Summary: Print the stripes' state with file size info  (was: Print the 
stripes' state with file size info.)

> Print the stripes' state with file size info
> 
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11398) Print the stripes' state with file size info.

2014-06-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11398:
---

Fix Version/s: 0.98.4
   0.99.0
 Hadoop Flags: Reviewed

> Print the stripes' state with file size info.
> -
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Fix For: 0.99.0, 0.98.4
>
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11352) When HMaster starts up it deletes the tmp snapshot directory, if you are exporting a snapshot at that time the job will fail

2014-06-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041325#comment-14041325
 ] 

Ted Yu commented on HBASE-11352:


{code}
+  private static final String SNAPSHOT_INPROGRESS_EXPIRATION_MILLIS_KEY = 
+  "hbase.snapshot.inProgress.expiration.timeMillis";
{code}
Expiration would be measured in days. Maybe use a different unit for the 
above config?

There are long lines in the patch - line length should be 100 characters or 
shorter.
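A hypothetical shape for that rename (both the key and the default shown here 
are illustrative, not from the patch):

{code}
// Express the expiration in days so the key name matches the unit in use.
private static final String SNAPSHOT_INPROGRESS_EXPIRATION_DAYS_KEY =
    "hbase.snapshot.inProgress.expiration.days";
private static final int DEFAULT_SNAPSHOT_INPROGRESS_EXPIRATION_DAYS = 30;
{code}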

> When HMaster starts up it deletes the tmp snapshot directory, if you are 
> exporting a snapshot at that time the job will fail
> 
>
> Key: HBASE-11352
> URL: https://issues.apache.org/jira/browse/HBASE-11352
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Attachments: HBASE-11352-0.94.patch
>
>
> We are exporting a very large table.  The export snapshot job takes 7+ days 
> to complete.  During that time we had to bounce HMaster.  When HMaster 
> initializes, it initializes the SnapshotManager, which subsequently deletes 
> the .tmp directory.
> If this happens while the ExportSnapshot job is running, the reference 
> files get removed and the job fails.
> Maybe we could put some sort of token in place such that while this job is 
> running, HMaster won't reset the tmp directory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11352) When HMaster starts up it deletes the tmp snapshot directory, if you are exporting a snapshot at that time the job will fail

2014-06-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-11352:
---

Attachment: HBASE-11352-0.94.patch

Created a configuration option for the expiration of in-progress snapshots. 
The master now checks this, as well as the modified timestamp of the 
snapshotInProgress, and only deletes when the snapshot is expired.  Thought 30 
days would be a conservative starting value.

This really hurt us when we did an ExportSnapshot that took 1+ week to complete.
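A sketch of the check described above (the key name comes from the review 
comment elsewhere in this thread; the FileSystem wiring is an assumption):

{code}
// Only delete an in-progress snapshot dir once it is older than the
// configured expiration, instead of wiping .tmp unconditionally on startup.
long expiration = conf.getLong("hbase.snapshot.inProgress.expiration.timeMillis",
    30L * 24 * 60 * 60 * 1000);  // conservative 30-day default
for (FileStatus snapshotDir : fs.listStatus(tmpDir)) {
  long age = System.currentTimeMillis() - snapshotDir.getModificationTime();
  if (age > expiration) {
    fs.delete(snapshotDir.getPath(), true);  // expired; safe to clean up
  }
}
{code}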

> When HMaster starts up it deletes the tmp snapshot directory, if you are 
> exporting a snapshot at that time the job will fail
> 
>
> Key: HBASE-11352
> URL: https://issues.apache.org/jira/browse/HBASE-11352
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Attachments: HBASE-11352-0.94.patch
>
>
> We are exporting a very large table.  The export snapshot job takes 7+ days 
> to complete.  During that time we had to bounce HMaster.  When HMaster 
> initializes, it initializes the SnapshotManager, which subsequently deletes 
> the .tmp directory.
> If this happens while the ExportSnapshot job is running, the reference 
> files get removed and the job fails.
> Maybe we could put some sort of token in place such that while this job is 
> running, HMaster won't reset the tmp directory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11404:


   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Integrated into trunk. Thanks.

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Fix For: 0.99.0
>
> Attachments: hbase-11404.patch, hbase-11404_v2.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HBASE-11360:
---

Attachment: HBASE-11360-0.96.patch

[~stack] here is the 96 patch.  [~lhofhansl] sorry about not getting you the 98 
patch in time. 

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: 11360-0.98.txt, HBASE-11360-0.94.patch, 
> HBASE-11360-0.96.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and of the "running" snapshots, which are 
> located in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job, which takes around 7 minutes between creating 
> the directory and copying all the files. 
> Thus the modified time of the /hbase/.hbase-snapshot/.tmp directory was 7 
> minutes earlier than the modified time of the running snapshot's own 
> directory under /hbase/.hbase-snapshot/.tmp/.
> Thus the cache refresh happens and doesn't pick up all the files, but it 
> thinks it's up to date, as the modified time of the .tmp directory never 
> changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and the operation will fail.
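A sketch of a sturdier staleness check for this case (the method shape and 
names are assumptions, not the attached patches):

{code}
// Treat the cache as stale if .tmp itself OR any in-progress snapshot
// directory under it changed after the last refresh; checking only the
// parent's mtime misses files added later to a running snapshot.
boolean isStale(FileSystem fs, Path tmpDir, long lastRefresh) throws IOException {
  if (fs.getFileStatus(tmpDir).getModificationTime() > lastRefresh) {
    return true;
  }
  for (FileStatus child : fs.listStatus(tmpDir)) {
    if (child.getModificationTime() > lastRefresh) {
      return true;
    }
  }
  return false;
}
{code}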



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041185#comment-14041185
 ] 

stack commented on HBASE-11118:
---

[~fs111] Can you pass a property?  compat.module=hbase-hadoop2-compat ?  Or 
hard-code it in the pom?
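A sketch of what passing that property could look like, reusing the build 
commands the reporter posted elsewhere in this thread (the -Dcompat.module 
flag is the untested suggestion here):

{code}
mvn versions:set -DnewVersion=0.99-CASCADING
mvn install -DskipTests -Dcompat.module=hbase-hadoop2-compat site assembly:single -Prelease
{code}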

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 11118.bytestringer.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatic way to fix this when using fat 
> jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11398) Print the stripes' state with file size info.

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041165#comment-14041165
 ] 

Hadoop QA commented on HBASE-11398:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651906/HBASE-11398-v3.patch
  against trunk revision .
  ATTACHMENT ID: 12651906

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9832//console

This message is automatically generated.

> Print the stripes' state with file size info.
> -
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041160#comment-14041160
 ] 

André Kelpe commented on HBASE-11118:
-

Sorry, forgot the gradle error:

{code}
 $ gradle cascading-hbase-hadoop2-mr1:build
The TaskContainer.add() method has been deprecated and is scheduled to be 
removed in Gradle 2.0. Please use the create() method instead.
:cascading-hbase-hadoop2-mr1:compileJava

FAILURE: Build failed with an exception.

* What went wrong:
Could not resolve all dependencies for configuration 
':cascading-hbase-hadoop2-mr1:compile'.
> Could not resolve org.apache.hbase:${compat.module}:0.99-CASCADING.
  Required by:
  cascading-hbase:cascading-hbase-hadoop2-mr1:2.5.0-wip-dev > 
org.apache.hbase:hbase-server:0.99-CASCADING
  cascading-hbase:cascading-hbase-hadoop2-mr1:2.5.0-wip-dev > 
org.apache.hbase:hbase-server:0.99-CASCADING > 
org.apache.hbase:hbase-prefix-tree:0.99-CASCADING
   > java.lang.NullPointerException (no error message)
   > java.lang.IllegalArgumentException (no error message)
   > java.lang.IllegalArgumentException (no error message)
   > java.lang.IllegalArgumentException (no error message)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 8.848 secs
{code}

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 11118.bytestringer.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatic way to fix this when using fat 
> jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-06-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041156#comment-14041156
 ] 

André Kelpe commented on HBASE-11118:
-

Here is what I did:

mvn versions:set -DnewVersion=0.99-Cascading
mvn install -DskipTests site assembly:single -Prelease

I then took the binary tarball to deploy a cluster. Now if I want to rebuild 
the cascading.hbase module, it always fails with an error related to the 
compat.module. It seems the variable hasn't been expanded and that confuses 
gradle:

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 11118.bytestringer.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatic way to fix this when using fat 
> jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041104#comment-14041104
 ] 

Hadoop QA commented on HBASE-11404:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651995/hbase-11404_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12651995

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestMasterFailover

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9831//console

This message is automatically generated.

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Attachments: hbase-11404.patch, hbase-11404_v2.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041102#comment-14041102
 ] 

Hadoop QA commented on HBASE-11404:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651987/hbase-11404.patch
  against trunk revision .
  ATTACHMENT ID: 12651987

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9829//console

This message is automatically generated.

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Attachments: hbase-11404.patch, hbase-11404_v2.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11380) HRegion lock object is not being released properly, leading to snapshot failure

2014-06-23 Thread Craig Condit (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041093#comment-14041093
 ] 

Craig Condit commented on HBASE-11380:
--

Further update: Installed this patch on our entire cluster, and have not seen 
the issue occur since (~ 48 hours).

> HRegion lock object is not being released properly, leading to snapshot 
> failure
> ---
>
> Key: HBASE-11380
> URL: https://issues.apache.org/jira/browse/HBASE-11380
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.98.3
>Reporter: Craig Condit
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11380-v1.txt, 11380-v2.txt, 11380-v3.txt, 
> HBASE-11380-v2-0.98.3.txt
>
>
> Background:
> We are attempting to create ~ 750 table snapshots on a nightly basis for use 
> in MR jobs. The jobs are run in batches, with a maximum of around 20 jobs 
> running simultaneously.
> We have started to see the following in our region server logs (after < 1 day 
> uptime):
> {noformat}
> java.lang.Error: Maximum lock count exceeded
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.fullTryAcquireShared(ReentrantReadWriteLock.java:531)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryAcquireShared(ReentrantReadWriteLock.java:491)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
>   at 
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:873)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5904)
>   at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:5891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5798)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:5761)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:4891)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:4838)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.mutateRow(HRegion.java:4829)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.mutateRows(HRegionServer.java:4390)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3362)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29503)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:168)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:39)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:111)
>   at java.lang.Thread.run(Thread.java:744)
> {noformat}
> Not sure of the cause, but the result is that snapshots cannot be created. We 
> see this in our client logs:
> {noformat}
> Exception in thread "main" 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
> org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
> ss=test-snapshot-20140619143753294 table=test type=FLUSH } had an error.  
> Procedure test-snapshot-20140619143753294 { 
> waiting=[p3plpadata038.internal,60020,1403140682587, 
> p3plpadata056.internal,60020,1403140865123, 
> p3plpadata072.internal,60020,1403141022569] 
> done=[p3plpadata023.internal,60020,1403140552227, 
> p3plpadata009.internal,60020,1403140487826] }
>   at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
>   at 
> org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:2907)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at j

[jira] [Updated] (HBASE-11397) When merging expired stripes, we need to create an empty file to preserve metadata.

2014-06-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-11397:
---

Fix Version/s: 0.98.4
   0.99.0

> When merging expired stripes, we need to create an empty file to preserve 
> metadata.
> ---
>
> Key: HBASE-11397
> URL: https://issues.apache.org/jira/browse/HBASE-11397
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.98.2
> Environment: jdk1.7.0_45, hadoop-cdh5, hbase-0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Fix For: 0.99.0, 0.98.4
>
> Attachments: HBASE-11397-AssertionError.png, HBASE-11397-HDFS.png, 
> HBASE-11397-RS-Log.png, HBASE-11397-Stripe-Info.png, HBASE-11397-v2.patch, 
> HBASE-11397.patch
>
>
> Stripe Compaction is a good feature in 0.96 and 0.98. But when I used it in 
> a heavy-write, non-uniform row-key scenario (e.g. a time dimension in the 
> key), I came across some problems. 
> I made my stripes split at the size of 2G 
> (hbase.store.stripe.sizeToSplit=2G), and soon there were tens of them. It 
> was true that only the last stripe, the one receiving the new keys, kept 
> compacting - old data didn't compact as much, or at all. However, the old 
> stripes were still there when they all expired. I checked the source code 
> and found that when compacting expired stripes, the StoreScanner may return 
> no KVs, so SizeMultiWriter.append() is never called. That is to say, NO NEW 
> FILE WILL BE CREATED. 
> My solution is to create an empty file to preserve the metadata at the end 
> of SizeMultiWriter.commitWritersInternal().
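A sketch of the described fix (the writer-factory call and field names are 
assumptions; only the idea of emitting an empty file when nothing was 
appended comes from the report):

{code}
// In SizeMultiWriter.commitWritersInternal(): if append() never ran because
// every input KV was expired, still create one empty store file so the
// stripe boundary metadata is written and the expired stripe can be dropped.
if (existingWriters.isEmpty()) {
  StoreFile.Writer emptyWriter = writerFactory.createWriter();  // zero KVs
  existingWriters.add(emptyWriter);
}
// ...then commit all writers as before, attaching stripe metadata per file.
{code}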



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11397) When merging expired stripes, we need to create an empty file to preserve metadata.

2014-06-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041078#comment-14041078
 ] 

Sergey Shelukhin commented on HBASE-11397:
--

Patch looks good to me. If it's not difficult, perhaps a regression unit test 
can be added?

> When merging expired stripes, we need to create an empty file to preserve 
> metadata.
> ---
>
> Key: HBASE-11397
> URL: https://issues.apache.org/jira/browse/HBASE-11397
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.98.2
> Environment: jdk1.7.0_45, hadoop-cdh5, hbase-0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-11397-AssertionError.png, HBASE-11397-HDFS.png, 
> HBASE-11397-RS-Log.png, HBASE-11397-Stripe-Info.png, HBASE-11397-v2.patch, 
> HBASE-11397.patch
>
>
> Stripe Compaction is a good feature in 0.96 and 0.98. But when I used it in 
> a heavy-write, non-uniform row-key scenario (e.g. a time dimension in the 
> key), I came across some problems. 
> I made my stripes split at the size of 2G 
> (hbase.store.stripe.sizeToSplit=2G), and soon there were tens of them. It 
> was true that only the last stripe, the one receiving the new keys, kept 
> compacting - old data didn't compact as much, or at all. However, the old 
> stripes were still there when they all expired. I checked the source code 
> and found that when compacting expired stripes, the StoreScanner may return 
> no KVs, so SizeMultiWriter.append() is never called. That is to say, NO NEW 
> FILE WILL BE CREATED. 
> My solution is to create an empty file to preserve the metadata at the end 
> of SizeMultiWriter.commitWritersInternal().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11390) PerformanceEvaluation: add an option to use a single connection

2014-06-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041051#comment-14041051
 ] 

Nick Dimiduk commented on HBASE-11390:
--

Yes, I see both calls now. +1.

> PerformanceEvaluation: add an option to use a single connection
> ---
>
> Key: HBASE-11390
> URL: https://issues.apache.org/jira/browse/HBASE-11390
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11390.v1.patch, 11390.v2.patch
>
>
> The PE tool uses one connection per client. It does not match some use cases 
> when we have multiple threads sharing the same connection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11397) When merging expired stripes, we need to create an empty file to preserve metadata.

2014-06-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041046#comment-14041046
 ] 

Nick Dimiduk commented on HBASE-11397:
--

ping [~sershe]

> When merging expired stripes, we need to create an empty file to preserve 
> metadata.
> ---
>
> Key: HBASE-11397
> URL: https://issues.apache.org/jira/browse/HBASE-11397
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Affects Versions: 0.98.2
> Environment: jdk1.7.0_45, hadoop-cdh5, hbase-0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-11397-AssertionError.png, HBASE-11397-HDFS.png, 
> HBASE-11397-RS-Log.png, HBASE-11397-Stripe-Info.png, HBASE-11397-v2.patch, 
> HBASE-11397.patch
>
>
> Stripe Compaction is a good feature in 0.96 and 0.98. But when I used it in 
> a heavy-write, non-uniform row-key scenario (e.g. a time dimension in the 
> key), I came across some problems. 
> I made my stripes split at the size of 2G 
> (hbase.store.stripe.sizeToSplit=2G), and soon there were tens of them. It 
> was true that only the last stripe, the one receiving the new keys, kept 
> compacting - old data didn't compact as much, or at all. However, the old 
> stripes were still there when they all expired. I checked the source code 
> and found that when compacting expired stripes, the StoreScanner may return 
> no KVs, so SizeMultiWriter.append() is never called. That is to say, NO NEW 
> FILE WILL BE CREATED. 
> My solution is to create an empty file to preserve the metadata at the end 
> of SizeMultiWriter.commitWritersInternal().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041032#comment-14041032
 ] 

stack commented on HBASE-11354:
---

[~nkeywal] Did you add DelayedClosing?  Was it about keeping Connections up a 
short while in case another connection needed zk, else we'd let it go?  Do you 
have the original issue?

> HConnectionImplementation#DelayedClosing does not start
> ---
>
> Key: HBASE-11354
> URL: https://issues.apache.org/jira/browse/HBASE-11354
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
>Priority: Minor
> Attachments: HBASE_11354.patch
>
>
> The method "createAndStart" in class DelayedClosing only creates a instance, 
> but forgets to start it. So thread delayedClosing is not running all the time.
> ConnectionManager#1623
> {code}
>   static DelayedClosing createAndStart(HConnectionImplementation hci){
> Stoppable stoppable = new Stoppable() {
>   private volatile boolean isStopped = false;
>   @Override public void stop(String why) { isStopped = true;}
>   @Override public boolean isStopped() {return isStopped;}
> };
> return new DelayedClosing(hci, stoppable);
>   }
> {code}
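A minimal sketch of the implied fix (assuming DelayedClosing is startable, 
e.g. extends Thread or Chore, as its name suggests):

{code}
  static DelayedClosing createAndStart(HConnectionImplementation hci) {
    Stoppable stoppable = new Stoppable() {
      private volatile boolean isStopped = false;
      @Override public void stop(String why) { isStopped = true; }
      @Override public boolean isStopped() { return isStopped; }
    };
    DelayedClosing delayedClosing = new DelayedClosing(hci, stoppable);
    delayedClosing.start();  // the call the current code forgets
    return delayedClosing;
  }
{code}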



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11102) Document JDK versions supported by each release

2014-06-23 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041029#comment-14041029
 ] 

Enis Soztutar commented on HBASE-11102:
---

Thanks for improving the docs. 1.0 will support JDK 6, 7, and 8, with JDK 8 
carrying the same "not tested enough" annotation. 
I would prefer we not list any versions before 0.94. Nobody should be using 
0.92 or below for anything serious. 
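A sketch of the matrix this thread describes (the JDK 8 column for 0.94-0.98 
is deliberately left open; the thread only says "some work" has been done 
there):

{noformat}
HBase version | JDK 6 | JDK 7 | JDK 8
0.94          | yes   | yes   | ?
0.96          | yes   | yes   | ?
0.98          | yes   | yes   | ?
1.0           | yes   | yes   | not tested enough
{noformat}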

> Document JDK versions supported by each release
> ---
>
> Key: HBASE-11102
> URL: https://issues.apache.org/jira/browse/HBASE-11102
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Enis Soztutar
>Assignee: Misty Stanley-Jones
> Fix For: 0.99.0
>
> Attachments: HBASE-11102-1.patch, HBASE-11102.patch
>
>
> We can make use of a JDK version x HBase version matrix to explain which JDK 
> version is supported and required. 
> 0.94, 0.96, and 0.98 releases all support JDK6 and JDK7. For 1.0, there is a 
> discussion thread to decide whether to drop JDK6 support. 
> There has been some work to support JDK8. We can also document that. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11398) Print the stripes' state with file size info.

2014-06-23 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041010#comment-14041010
 ] 

Nick Dimiduk commented on HBASE-11398:
--

lgtm

> Print the stripes' state with file size info.
> -
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11398) Print the stripes' state with file size info.

2014-06-23 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11398:
-

Assignee: Victor Xu

> Print the stripes' state with file size info.
> -
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11398) Print the stripes' state with file size info.

2014-06-23 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11398:
-

Status: Patch Available  (was: Open)

> Print the stripes' state with file size info.
> -
>
> Key: HBASE-11398
> URL: https://issues.apache.org/jira/browse/HBASE-11398
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.2
>Reporter: Victor Xu
>Assignee: Victor Xu
>Priority: Minor
> Attachments: DebugSample.png, HBASE-11398-v2.patch, 
> HBASE-11398-v3.patch, HBASE-11398.patch
>
>
> Add some hfile size info to the StripeStoreFileManager.debugDumpState().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11403) Fix race conditions around Object#notify

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040978#comment-14040978
 ] 

stack commented on HBASE-11403:
---

duh

Very nice [~nkeywal] +1

> Fix race conditions around Object#notify
> 
>
> Key: HBASE-11403
> URL: https://issues.apache.org/jira/browse/HBASE-11403
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11403.v1.patch
>
>
> We do have some race conditions there. We don't see them fail in the unit 
> tests, because our #wait calls are bounded. But from a performance point of 
> view, they do occur. I've reviewed them and fixed all the issues I found, 
> except in the AM (I haven't reviewed that one; maybe it's fine).
> On a perf test, this seems to improve the max latency.
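The general shape of the fix for this class of race (an illustrative Java 
pattern, not the attached patch):

{code}
// Wait and notify must agree on a monitor, and the waiter must re-check its
// condition in a loop: a plain wait() can miss a notify() that fired first,
// which with bounded waits shows up as extra latency instead of a hang.
// (InterruptedException handling elided for brevity.)
synchronized (lock) {
  while (!done) {
    lock.wait(timeoutMillis);  // bounded, as in the unit tests
  }
}
{code}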



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11388) The order parameter is wrong when invoking the constructor of the ReplicationPeer In the method "getPeer" of the class ReplicationPeersZKImpl

2014-06-23 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040977#comment-14040977
 ] 

Jean-Daniel Cryans commented on HBASE-11388:


Wow, nice one [~qianxiZhang]. I'm surprised that things even work, but it 
looks like we don't do anything useful with clusterKey in ReplicationPeer; we 
just use the configuration. I think a better interface would be to just remove 
the passing of the clusterKey in ReplicationPeer's constructor, since we can 
infer it using ZKUtil#getZooKeeperClusterKey().

> The order parameter is wrong when invoking the constructor of the 
> ReplicationPeer In the method "getPeer" of the class ReplicationPeersZKImpl
> -
>
> Key: HBASE-11388
> URL: https://issues.apache.org/jira/browse/HBASE-11388
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE_11388.patch
>
>
> The parameter order is "Configuration", "clusterKey" and "id" in the 
> constructor of the class ReplicationPeer, but the arguments are passed as 
> "Configuration", "id" and "clusterKey" when invoking the constructor of the 
> ReplicationPeer in the method "getPeer" of the class ReplicationPeersZKImpl.
> ReplicationPeer#76
> {code}
>   public ReplicationPeer(Configuration conf, String key, String id) throws 
> ReplicationException {
> this.conf = conf;
> this.clusterKey = key;
> this.id = id;
> try {
>   this.reloadZkWatcher();
> } catch (IOException e) {
>   throw new ReplicationException("Error connecting to peer cluster with 
> peerId=" + id, e);
> }
>   }
> {code}
> ReplicationPeersZKImpl#498
> {code}
> ReplicationPeer peer =
> new ReplicationPeer(peerConf, peerId, 
> ZKUtil.getZooKeeperClusterKey(peerConf));
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040976#comment-14040976
 ] 

stack commented on HBASE-11404:
---

+1

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Attachments: hbase-11404.patch, hbase-11404_v2.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040973#comment-14040973
 ] 

stack commented on HBASE-11360:
---

Sure on 0.96.

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: 11360-0.98.txt, HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.
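To illustrate the failure mode, here is a hedged sketch of an mtime-based 
refresh check like the one described; the method and parameter names are 
hypothetical, not the actual SnapshotFileCache code:

{code}
// Hypothetical sketch of the mtime-based check described above. While files
// are still being copied into an already-created snapshot directory under
// .tmp, the parent's modification time never changes, so this wrongly
// reports the cache as fresh.
boolean cacheUpToDate(FileSystem fs, Path snapshotTmpDir, long lastCachedMtime)
    throws IOException {
  long dirMtime = fs.getFileStatus(snapshotTmpDir).getModificationTime();
  return dirMtime <= lastCachedMtime; // misses late-arriving files
}
{code}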



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11118) non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"

2014-06-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040969#comment-14040969
 ] 

stack commented on HBASE-11118:
---

[~fs111] Tell us what to do so testing is easy for you?  I could branch 0.98, 
commit this patch, then push a build to mvn if that would help you.

> non environment variable solution for "IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString"
> --
>
> Key: HBASE-11118
> URL: https://issues.apache.org/jira/browse/HBASE-11118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.2
>Reporter: André Kelpe
>Priority: Blocker
> Fix For: 0.99.0
>
> Attachments: 11118.bytestringer.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> 11118.suggested.undoing.optimization.on.clientside.txt, 
> HBASE-11118-0.98.patch.gz, HBASE-11118-trunk.patch.gz, shade_attempt.patch
>
>
> I am running into the problem described in 
> https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
> newer version within cascading.hbase 
> (https://github.com/cascading/cascading.hbase).
> One of the features of cascading.hbase is that you can use it from lingual 
> (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
> lingual has a notion of providers, which are fat jars that we pull down 
> dynamically at runtime. Those jars give users the ability to talk to any 
> system or format from SQL. They are added to the classpath programmatically 
> before we submit jobs to a hadoop cluster.
> Since lingual does not know upfront which providers are going to be used in 
> a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
> clunky and breaks the ease of use we had before. No other provider requires 
> this right now.
> It would be great to have a programmatic way to fix this when using fat 
> jars.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040967#comment-14040967
 ] 

Lars Hofhansl commented on HBASE-11360:
---

Trunk's a bit different.

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: 11360-0.98.txt, HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11404:


Attachment: hbase-11404_v2.patch

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Attachments: hbase-11404.patch, hbase-11404_v2.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread Dave Latham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Latham updated HBASE-11360:


Labels:   (was: snap)

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: 11360-0.98.txt, HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread Dave Latham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Latham updated HBASE-11360:


Labels: snap  (was: )

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: 11360-0.98.txt, HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11360:
--

Attachment: 11360-0.98.txt

Here's a 0.98 patch.

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: 11360-0.98.txt, HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10861) Supporting API in ByteRange

2014-06-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10861:
---

Status: Patch Available  (was: Open)

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch, HBASE-10861_2.patch, 
> HBASE-10861_3.patch, HBASE-10861_4.patch, HBASE-10861_6.patch
>
>
> We would need APIs such as:
> setLimit(int limit)
> getLimit()
> asReadOnly()
> These APIs would help in implementations that have buffers offheap (for now 
> BRs backed by DBB).
> Anything more that is needed can be added later.
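A hedged sketch of what those additions might look like; the signatures are 
assumptions based on the description above, not the committed API:

{code}
// Illustrative only - assumed signatures for the proposed additions.
public interface LimitedByteRange /* hypothetical name */ {
  LimitedByteRange setLimit(int limit); // cap the usable extent of the range
  int getLimit();                       // the current cap
  LimitedByteRange asReadOnly();        // a view that rejects writes
}
{code}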



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10861) Supporting API in ByteRange

2014-06-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10861:
---

Attachment: HBASE-10861_6.patch

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch, HBASE-10861_2.patch, 
> HBASE-10861_3.patch, HBASE-10861_4.patch, HBASE-10861_6.patch
>
>
> We would need APIs such as:
> setLimit(int limit)
> getLimit()
> asReadOnly()
> These APIs would help in implementations that have buffers offheap (for now 
> BRs backed by DBB).
> Anything more that is needed can be added later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10861) Supporting API in ByteRange

2014-06-23 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10861:
---

Status: Open  (was: Patch Available)

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch, HBASE-10861_2.patch, 
> HBASE-10861_3.patch, HBASE-10861_4.patch
>
>
> We would need APIs such as:
> setLimit(int limit)
> getLimit()
> asReadOnly()
> These APIs would help in implementations that have buffers offheap (for now 
> BRs backed by DBB).
> Anything more that is needed can be added later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11403) Fix race conditions around Object#notify

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040930#comment-14040930
 ] 

Hadoop QA commented on HBASE-11403:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651975/11403.v1.patch
  against trunk revision .
  ATTACHMENT ID: 12651975

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9828//console

This message is automatically generated.

> Fix race conditions around Object#notify
> 
>
> Key: HBASE-11403
> URL: https://issues.apache.org/jira/browse/HBASE-11403
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11403.v1.patch
>
>
> We do have some race conditions there. We don't see them fail in the unit 
> tests, because our #wait calls are bounded. But from a performance point of 
> view, they do occur. I've reviewed them and fixed all the issues I found, 
> except in the AM (I haven't reviewed that one; maybe it's fine).
> On a perf test, this seems to improve the max latency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11404:


Attachment: hbase-11404.patch

Added a finally block to stop the web server.
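A minimal sketch of that pattern, with hypothetical helper names (the real 
change is in the attachment):

{code}
// startTestHttpServer() and runLogLevelChecks() are hypothetical helpers;
// the point is the finally block guaranteeing cleanup.
HttpServer server = startTestHttpServer();
try {
  runLogLevelChecks(server);
} finally {
  server.stop(); // stop the server even if an assertion fails
}
{code}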

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Attachments: hbase-11404.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11404:


Status: Patch Available  (was: Open)

> TestLogLevel should stop the server at the end
> --
>
> Key: HBASE-11404
> URL: https://issues.apache.org/jira/browse/HBASE-11404
> Project: HBase
>  Issue Type: Test
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Trivial
> Attachments: hbase-11404.patch
>
>
> The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11404) TestLogLevel should stop the server at the end

2014-06-23 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11404:
---

 Summary: TestLogLevel should stop the server at the end
 Key: HBASE-11404
 URL: https://issues.apache.org/jira/browse/HBASE-11404
 Project: HBase
  Issue Type: Test
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Trivial


The HttpServer started by the test is not stopped at the end.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11370) SSH doesn't need to scan meta if not using ZK for assignment

2014-06-23 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040892#comment-14040892
 ] 

Jimmy Xiang commented on HBASE-11370:
-

[~jeffreyz], can you take a look at the patch?

> SSH doesn't need to scan meta if not using ZK for assignment
> 
>
> Key: HBASE-11370
> URL: https://issues.apache.org/jira/browse/HBASE-11370
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: hbase-11370.patch
>
>
> If we don't use ZK for assignment, the meta content should be the same as 
> that in memory. So we should be able to avoid a meta scan.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11402) Scanner performs redundant datanode requests

2014-06-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040882#comment-14040882
 ] 

Anoop Sam John commented on HBASE-11402:


Oh, I didn't follow up with that issue. I thought it was already done. Thanks 
for the correction, Lars.

> Scanner performs redundant datanode requests
> 
>
> Key: HBASE-11402
> URL: https://issues.apache.org/jira/browse/HBASE-11402
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, Scanners
>Reporter: Max Lapan
>
> Using hbase 0.94.6 I found duplicate datanode requests of this sort:
> {noformat}
> 2014-06-09 14:12:22,039 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57897, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 35840, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 109928797000
> 2014-06-09 14:12:22,080 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57910, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 0, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 3825
> {noformat}
> After a short investigation, I found the source of this behaviour:
> * StoreScanner's constructor calls StoreFileScanner::seek, which (after 
> several levels of calls) ends up in HFileBlock::readBlockDataInternal, which 
> reads the block and pre-reads the header of the next block.
> * This pre-read header is stored in a ThreadLocal variable, 
> and the stream is left positioned right behind the header of the next block.
> * After the constructor finishes, the scanner code does the scanning and, once 
> the pre-read block data is exhausted, it calls HFileReaderV2::readNextDataBlock, 
> which again calls HFileBlock::readBlockDataInternal; but this call occurs on a 
> different thread, so there is nothing useful in the ThreadLocal variable.
> * Due to this, the stream is asked to seek backwards, and this causes the 
> duplicate DN request.
> As far as I understood from the trunk code, the problem hasn't been fixed yet.
> Log of calls with process above:
> {noformat}
> 2014-06-18 14:55:36,616 INFO 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex: loadDataBlockWithScanInfo: 
> entered
> 2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> seekTo: readBlock, ofs = 0, size = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> Before block read: path = 
> hdfs://tsthdp1.p:9000/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal. Ofs = 0, is.pos = 137257042, ondDiskSizeWithHeader = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal: prefetchHeader.ofs = -1, thread = 48
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 24, offset = 0, peekNext = false
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 0, pos = 137257042, blockEnd = 137257229
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: not 
> done, blockEnd = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: 
> readWithStrategy: before seek, pos = 0, blockEnd = -1, currentNode = 
> 10.103.0.73:50010
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockAt: 
> blockEnd updated to 137257229
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: blockSeekTo: 
> loop, target = 0
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: 
> getBlockReader: dn = tsthdp2.p, file = 
> /hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b42bc40aa9e18ac8a4b,
>  bl
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> ofs = 0, len = 24
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> try to read
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> done, len = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 35899, offset = 24, peekNext = true
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 24, pos = 24, blockEnd = 137257229
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: check 
> that we cat skip diff = 0
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: try to 
> fast-forward on diff = 0, pos = 24
> 2014-06-18 14:55:36,641 INFO org.apa

[jira] [Commented] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040870#comment-14040870
 ] 

Dave Latham commented on HBASE-11360:
-

We're going to run it in our own build locally, so it doesn't make a big 
difference to us which version it lands in, just that it makes it upstream.

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11360) SnapshotFileCache refresh logic based on modified directory time might be insufficient

2014-06-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040848#comment-14040848
 ] 

Lars Hofhansl commented on HBASE-11360:
---

[~churromorales], should we delay this to 0.94.22? Just means it'll be a month 
later.

> SnapshotFileCache refresh logic based on modified directory time might be 
> insufficient
> --
>
> Key: HBASE-11360
> URL: https://issues.apache.org/jira/browse/HBASE-11360
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.19
>Reporter: churro morales
> Fix For: 0.99.0, 0.94.21, 0.98.4
>
> Attachments: HBASE-11360-0.94.patch
>
>
> Right now we decide whether to refresh the cache based on the lastModified 
> timestamp of all the snapshots and the "running" snapshots, which are located 
> in the /hbase/.hbase-snapshot/.tmp/ directory.
> We ran an ExportSnapshot job which takes around 7 minutes between creating the 
> directory and copying all the files. 
> Thus the modified time for the 
> /hbase/.hbase-snapshot/.tmp directory was 7 minutes earlier than the modified 
> time of the 
> /hbase/.hbase-snapshot/.tmp/ directory.
> Thus the cache refresh happens but doesn't pick up all the files, yet thinks 
> it's up to date because the modified time of the .tmp directory never changes.
> This is a bug: when the export job starts, the cache never contains the 
> files for the "running" snapshot, and it will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11402) Scanner performs redundant datanode requests

2014-06-23 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040846#comment-14040846
 ] 

Lars Hofhansl commented on HBASE-11402:
---

Not yet fixed. See here: HBASE-10676. I guess I dropped the ball on that one.
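To make the cross-thread miss concrete, here is a sketch of the ThreadLocal 
prefetch pattern the report describes, with illustrative names rather than 
the actual HFileBlock code:

{code}
// Illustrative sketch: a header prefetched into a ThreadLocal by the seeking
// thread is invisible to the thread that later reads the next block, so the
// reader seeks the stream backwards and issues a second datanode request.
private static final ThreadLocal<PrefetchedHeader> PREFETCHED =
    new ThreadLocal<PrefetchedHeader>() {
      @Override
      protected PrefetchedHeader initialValue() {
        return new PrefetchedHeader();
      }
    };

static class PrefetchedHeader {
  long offset = -1;
  byte[] header = new byte[24];
}

byte[] cachedHeaderAt(long offset) {
  PrefetchedHeader cached = PREFETCHED.get();
  // Hit only on the thread that did the prefetch; any other thread misses
  // and has to re-read, seeking backwards.
  return cached.offset == offset ? cached.header : null;
}
{code}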

> Scanner performs redundant datanode requests
> 
>
> Key: HBASE-11402
> URL: https://issues.apache.org/jira/browse/HBASE-11402
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, Scanners
>Reporter: Max Lapan
>
> Using hbase 0.94.6 I found duplicate datanode requests of this sort:
> {noformat}
> 2014-06-09 14:12:22,039 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57897, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 35840, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 109928797000
> 2014-06-09 14:12:22,080 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57910, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 0, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 3825
> {noformat}
> After a short investigation, I found the source of this behaviour:
> * StoreScanner's constructor calls StoreFileScanner::seek, which (after 
> several levels of calls) ends up in HFileBlock::readBlockDataInternal, which 
> reads the block and pre-reads the header of the next block.
> * This pre-read header is stored in a ThreadLocal variable, 
> and the stream is left positioned right behind the header of the next block.
> * After the constructor finishes, the scanner code does the scanning and, once 
> the pre-read block data is exhausted, it calls HFileReaderV2::readNextDataBlock, 
> which again calls HFileBlock::readBlockDataInternal; but this call occurs on a 
> different thread, so there is nothing useful in the ThreadLocal variable.
> * Due to this, the stream is asked to seek backwards, and this causes the 
> duplicate DN request.
> As far as I understood from the trunk code, the problem hasn't been fixed yet.
> Log of calls with process above:
> {noformat}
> 2014-06-18 14:55:36,616 INFO 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex: loadDataBlockWithScanInfo: 
> entered
> 2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> seekTo: readBlock, ofs = 0, size = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> Before block read: path = 
> hdfs://tsthdp1.p:9000/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal. Ofs = 0, is.pos = 137257042, ondDiskSizeWithHeader = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal: prefetchHeader.ofs = -1, thread = 48
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 24, offset = 0, peekNext = false
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 0, pos = 137257042, blockEnd = 137257229
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: not 
> done, blockEnd = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: 
> readWithStrategy: before seek, pos = 0, blockEnd = -1, currentNode = 
> 10.103.0.73:50010
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockAt: 
> blockEnd updated to 137257229
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: blockSeekTo: 
> loop, target = 0
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: 
> getBlockReader: dn = tsthdp2.p, file = 
> /hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b42bc40aa9e18ac8a4b,
>  bl
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> ofs = 0, len = 24
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> try to read
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> done, len = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 35899, offset = 24, peekNext = true
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 24, pos = 24, blockEnd = 137257229
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: check 
> that we cat skip diff = 0
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: try to 
> fast-forward on diff = 0, pos = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSCli

[jira] [Updated] (HBASE-11403) Fix race conditions around Object#notify

2014-06-23 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11403:


Status: Patch Available  (was: Open)

> Fix race conditions around Object#notify
> 
>
> Key: HBASE-11403
> URL: https://issues.apache.org/jira/browse/HBASE-11403
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.98.3, 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11403.v1.patch
>
>
> We do have some race conditions there. We don't see them fail in the unit 
> tests, because our #wait calls are bounded. But from a performance point of 
> view, they do occur. I've reviewed them and fixed all the issues I found, 
> except in the AM (I haven't reviewed that one; maybe it's fine).
> On a perf test, this seems to improve the max latency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11403) Fix race conditions around Object#notify

2014-06-23 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11403:


Attachment: 11403.v1.patch

> Fix race conditions around Object#notify
> 
>
> Key: HBASE-11403
> URL: https://issues.apache.org/jira/browse/HBASE-11403
> Project: HBase
>  Issue Type: Bug
>  Components: Client, regionserver
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0, 0.98.4
>
> Attachments: 11403.v1.patch
>
>
> We do have some race conditions there. We don't see them fail in the unit 
> tests, because our #wait calls are bounded. But from a performance point of 
> view, they do occur. I've reviewed them and fixed all the issues I found, 
> except in the AM (I haven't reviewed that one; maybe it's fine).
> On a perf test, this seems to improve the max latency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11403) Fix race conditions around Object#notify

2014-06-23 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-11403:
---

 Summary: Fix race conditions around Object#notify
 Key: HBASE-11403
 URL: https://issues.apache.org/jira/browse/HBASE-11403
 Project: HBase
  Issue Type: Bug
  Components: Client, regionserver
Affects Versions: 0.98.3, 0.99.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0, 0.98.4


We do have some race conditions there. We don't see them fail in the unit 
tests, because our #wait calls are bounded. But from a performance point of 
view, they do occur. I've reviewed them and fixed all the issues I found, 
except in the AM (I haven't reviewed that one; maybe it's fine).

On a perf test, this seems to improve the max latency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start

2014-06-23 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040742#comment-14040742
 ] 

Nicolas Liochon commented on HBASE-11354:
-

Yes, if we:
 - close the connection to ZK immediately, it's likely a performance issue (w/ 
all the recent changes maybe it's not true anymore, but I doubt it)
 - don't close the connection to ZK, it's a scalability issue (typically with a 
load of MapReduce clients).

:-)

> HConnectionImplementation#DelayedClosing does not start
> ---
>
> Key: HBASE-11354
> URL: https://issues.apache.org/jira/browse/HBASE-11354
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
>Priority: Minor
> Attachments: HBASE_11354.patch
>
>
> The method "createAndStart" in class DelayedClosing only creates an instance 
> but forgets to start it, so the delayedClosing thread is never running.
> ConnectionManager#1623
> {code}
>   static DelayedClosing createAndStart(HConnectionImplementation hci){
> Stoppable stoppable = new Stoppable() {
>   private volatile boolean isStopped = false;
>   @Override public void stop(String why) { isStopped = true;}
>   @Override public boolean isStopped() {return isStopped;}
> };
> return new DelayedClosing(hci, stoppable);
>   }
> {code}
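A hedged sketch of the obvious fix, assuming the instance is thread-like and 
exposes start() (illustrative, not the attached patch):

{code}
// Sketch only: keep a reference and actually start it before returning.
DelayedClosing delayedClosing = new DelayedClosing(hci, stoppable);
delayedClosing.start(); // the missing call
return delayedClosing;
{code}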



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11354) HConnectionImplementation#DelayedClosing does not start

2014-06-23 Thread Qianxi Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040740#comment-14040740
 ] 

Qianxi Zhang commented on HBASE-11354:
--

Thanks [~nkeywal]. So what do you think we should do about this issue? In 
fact, it is a bug, am I right?

> HConnectionImplementation#DelayedClosing does not start
> ---
>
> Key: HBASE-11354
> URL: https://issues.apache.org/jira/browse/HBASE-11354
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
>Priority: Minor
> Attachments: HBASE_11354.patch
>
>
> The method "createAndStart" in class DelayedClosing only creates an instance 
> but forgets to start it, so the delayedClosing thread is never running.
> ConnectionManager#1623
> {code}
>   static DelayedClosing createAndStart(HConnectionImplementation hci){
> Stoppable stoppable = new Stoppable() {
>   private volatile boolean isStopped = false;
>   @Override public void stop(String why) { isStopped = true;}
>   @Override public boolean isStopped() {return isStopped;}
> };
> return new DelayedClosing(hci, stoppable);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10964) Delete mutation is not consistent with Put wrt timestamp

2014-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040725#comment-14040725
 ] 

Hudson commented on HBASE-10964:


FAILURE: Integrated in HBase-TRUNK #5225 (See 
[https://builds.apache.org/job/HBase-TRUNK/5225/])
HBASE-11382 Adding unit test for HBASE-10964 (Delete mutation is not consistent 
with Put wrt timestamp) (Srikanth) (anoopsamjohn: rev 
3020842d5c4a7c32045c516c8a1d06a9e77688f0)
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestDeleteTimeStamp.java


> Delete mutation is not consistent with Put wrt timestamp
> 
>
> Key: HBASE-10964
> URL: https://issues.apache.org/jira/browse/HBASE-10964
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.99.0, 0.98.2, 0.96.3
>
> Attachments: HBASE-10964.patch
>
>
> We have Put constructors which take a ts param, 
> e.g. Put(byte[] row, long ts).
> When one creates a Put this way and adds columns to it without giving a 
> specific TS, those individual cells will honour the Put object's TS. One can 
> use the add API which takes a TS and so override the TS for that Cell.
> For Delete we have similar constructors with and without TS params, and 
> delete***() APIs analogous to add(). But the delete***() APIs (without taking 
> a specific TS) do not honour the Delete object's TS.
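A small sketch of the asymmetry, using the client API as described above 
(illustrative, not the patch):

{code}
// Both objects are given ts=12345L up front.
Put put = new Put(row, 12345L);
put.add(family, qualifier, value);        // the cell inherits the Put's ts
Delete delete = new Delete(row, 12345L);
delete.deleteColumns(family, qualifier);  // before the fix: the Delete's ts was ignored
{code}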



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11382) Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp)

2014-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040724#comment-14040724
 ] 

Hudson commented on HBASE-11382:


FAILURE: Integrated in HBase-TRUNK #5225 (See 
[https://builds.apache.org/job/HBase-TRUNK/5225/])
HBASE-11382 Adding unit test for HBASE-10964 (Delete mutation is not consistent 
with Put wrt timestamp) (Srikanth) (anoopsamjohn: rev 
3020842d5c4a7c32045c516c8a1d06a9e77688f0)
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestDeleteTimeStamp.java


> Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put 
> wrt timestamp)
> ---
>
> Key: HBASE-11382
> URL: https://issues.apache.org/jira/browse/HBASE-11382
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-11382.patch, HBASE-11382_v2.patch
>
>
> Adding a small unit test for verifying that delete mutation is honoring 
> timestamp of delete object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11402) Scanner performs redundant datanode requests

2014-06-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040658#comment-14040658
 ] 

Anoop Sam John commented on HBASE-11402:


Already fixed by some other JIRA; I don't remember the JIRA id.

> Scanner performs redundant datanode requests
> 
>
> Key: HBASE-11402
> URL: https://issues.apache.org/jira/browse/HBASE-11402
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, Scanners
>Reporter: Max Lapan
>
> Using hbase 0.94.6 I found duplicate datanode requests of this sort:
> {noformat}
> 2014-06-09 14:12:22,039 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57897, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 35840, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 109928797000
> 2014-06-09 14:12:22,080 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57910, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 0, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 3825
> {noformat}
> After a short investigation, I found the source of this behaviour:
> * StoreScanner's constructor calls StoreFileScanner::seek, which (after 
> several levels of calls) ends up in HFileBlock::readBlockDataInternal, which 
> reads the block and pre-reads the header of the next block.
> * This pre-read header is stored in a ThreadLocal variable, 
> and the stream is left positioned right behind the header of the next block.
> * After the constructor finishes, the scanner code does the scanning and, once 
> the pre-read block data is exhausted, it calls HFileReaderV2::readNextDataBlock, 
> which again calls HFileBlock::readBlockDataInternal; but this call occurs on a 
> different thread, so there is nothing useful in the ThreadLocal variable.
> * Due to this, the stream is asked to seek backwards, and this causes the 
> duplicate DN request.
> As far as I understood from the trunk code, the problem hasn't been fixed yet.
> Log of calls with process above:
> {noformat}
> 2014-06-18 14:55:36,616 INFO 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex: loadDataBlockWithScanInfo: 
> entered
> 2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> seekTo: readBlock, ofs = 0, size = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> Before block read: path = 
> hdfs://tsthdp1.p:9000/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal. Ofs = 0, is.pos = 137257042, ondDiskSizeWithHeader = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal: prefetchHeader.ofs = -1, thread = 48
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 24, offset = 0, peekNext = false
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 0, pos = 137257042, blockEnd = 137257229
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: not 
> done, blockEnd = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: 
> readWithStrategy: before seek, pos = 0, blockEnd = -1, currentNode = 
> 10.103.0.73:50010
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockAt: 
> blockEnd updated to 137257229
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: blockSeekTo: 
> loop, target = 0
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: 
> getBlockReader: dn = tsthdp2.p, file = 
> /hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b42bc40aa9e18ac8a4b,
>  bl
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> ofs = 0, len = 24
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> try to read
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> done, len = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 35899, offset = 24, peekNext = true
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 24, pos = 24, blockEnd = 137257229
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: check 
> that we cat skip diff = 0
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: try to 
> fast-forward on diff = 0, pos = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: pos

[jira] [Commented] (HBASE-9272) A parallel, unordered scanner

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040651#comment-14040651
 ] 

Hadoop QA commented on HBASE-9272:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12610225/9272-trunk-v4.txt
  against trunk revision .
  ATTACHMENT ID: 12610225

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9827//console

This message is automatically generated.

> A parallel, unordered scanner
> -
>
> Key: HBASE-9272
> URL: https://issues.apache.org/jira/browse/HBASE-9272
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Attachments: 9272-0.94-v2.txt, 9272-0.94-v3.txt, 9272-0.94-v4.txt, 
> 9272-0.94.txt, 9272-trunk-v2.txt, 9272-trunk-v3.txt, 9272-trunk-v3.txt, 
> 9272-trunk-v4.txt, 9272-trunk.txt, ParallelClientScanner.java, 
> ParallelClientScanner.java
>
>
> The contract of ClientScanner is to return rows in sort order. That limits 
> the order in which regions can be scanned.
> I propose a simple ParallelScanner that does not have this requirement and 
> queries regions in parallel, returning whatever gets returned first.
> This is generally useful for scans that filter a lot of data on the server, 
> or in cases where the client can react very quickly to the returned data.
> I have a simple prototype (it doesn't do error handling right, and might be a 
> bit heavy on the synchronization side - it uses a BlockingQueue to hand data 
> between the client using the scanner and the threads doing the scanning; it 
> could also potentially starve some scanners long enough to time out at the 
> server).
> On the plus side, it's only about 130 lines of code. :)
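A minimal sketch of the idea, assuming a precomputed list of per-region Scans 
and a table handle (this is not the attached ParallelClientScanner):

{code}
// Each region range is scanned by its own task; results are handed to the
// consumer through a BlockingQueue in whatever order they arrive.
final BlockingQueue<Result> queue = new LinkedBlockingQueue<Result>(1024);
ExecutorService pool = Executors.newFixedThreadPool(regionScans.size());
for (final Scan regionScan : regionScans) {   // one Scan per region range
  pool.submit(new Callable<Void>() {
    @Override
    public Void call() throws Exception {
      ResultScanner rs = table.getScanner(regionScan);
      try {
        for (Result r : rs) {
          queue.put(r);                       // unordered hand-off
        }
      } finally {
        rs.close();
      }
      return null;
    }
  });
}
{code}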



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9272) A parallel, unordered scanner

2014-06-23 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040650#comment-14040650
 ] 

Jean-Marc Spaggiari commented on HBASE-9272:


Did you come up with a version you are happy with? ;)

> A parallel, unordered scanner
> -
>
> Key: HBASE-9272
> URL: https://issues.apache.org/jira/browse/HBASE-9272
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Attachments: 9272-0.94-v2.txt, 9272-0.94-v3.txt, 9272-0.94-v4.txt, 
> 9272-0.94.txt, 9272-trunk-v2.txt, 9272-trunk-v3.txt, 9272-trunk-v3.txt, 
> 9272-trunk-v4.txt, 9272-trunk.txt, ParallelClientScanner.java, 
> ParallelClientScanner.java
>
>
> The contract of ClientScanner is to return rows in sort order. That limits 
> the order in which regions can be scanned.
> I propose a simple ParallelScanner that does not have this requirement and 
> queries regions in parallel, returning whatever gets returned first.
> This is generally useful for scans that filter a lot of data on the server, 
> or in cases where the client can react very quickly to the returned data.
> I have a simple prototype (it doesn't do error handling right, and might be a 
> bit heavy on the synchronization side - it uses a BlockingQueue to hand data 
> between the client using the scanner and the threads doing the scanning; it 
> could also potentially starve some scanners long enough to time out at the 
> server).
> On the plus side, it's only about 130 lines of code. :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11387) metrics: wrong totalRequestCount

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040611#comment-14040611
 ] 

Hadoop QA commented on HBASE-11387:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651955/11387.v2.patch
  against trunk revision .
  ATTACHMENT ID: 12651955

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9826//console

This message is automatically generated.

> metrics: wrong totalRequestCount
> 
>
> Key: HBASE-11387
> URL: https://issues.apache.org/jira/browse/HBASE-11387
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11387.98.v1.patch, 11387.v1.patch, 11387.v2.patch
>
>
> We have a unit test here, but it tests for greater-than instead of equals, 
> so we didn't see that the number was double the actual value.
> We were also not testing the multi case.
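In other words (an illustrative assertion with hypothetical names, not the 
actual patch), the test wants exact equality so that double counting cannot 
slip through:

{code}
// An exact check catches a doubled counter that a greater-than check accepts.
assertEquals(expectedRequestCount, metricsSource.getTotalRequestCount());
// was effectively: assertTrue(metricsSource.getTotalRequestCount() > before);
{code}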



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11387) metrics: wrong totalRequestCount

2014-06-23 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040608#comment-14040608
 ] 

Nicolas Liochon commented on HBASE-11387:
-

v2 is what I will commit for 0.98 if there is no objection. There seems to be 
some caching happening on trunk but not on 0.98; it's unrelated to the request 
count anyway.

> metrics: wrong totalRequestCount
> 
>
> Key: HBASE-11387
> URL: https://issues.apache.org/jira/browse/HBASE-11387
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11387.98.v1.patch, 11387.v1.patch, 11387.v2.patch
>
>
> We have a unit test here, but it tests for greater-than instead of equals, 
> so we didn't see that the number was double the actual value.
> We were also not testing the multi case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11387) metrics: wrong totalRequestCount

2014-06-23 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-11387:


Attachment: 11387.v2.patch

> metrics: wrong totalRequestCount
> 
>
> Key: HBASE-11387
> URL: https://issues.apache.org/jira/browse/HBASE-11387
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, regionserver
>Affects Versions: 0.99.0, 0.98.3
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11387.98.v1.patch, 11387.v1.patch, 11387.v2.patch
>
>
> We have a unit test here, but it tests for greater-than instead of equals, 
> so we didn't see that the number was double the actual value.
> We were also not testing the multi case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11402) Scanner performs redundant datanode requests

2014-06-23 Thread Max Lapan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Lapan updated HBASE-11402:
--

Description: 
Using hbase 0.94.6 I found duplicate datanode requests of this sort:
{noformat}
2014-06-09 14:12:22,039 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/10.103.0.73:50010, dest: /10.103.0.38:57897, bytes: 1056768, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 35840, srvID: 
DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
duration: 109928797000
2014-06-09 14:12:22,080 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/10.103.0.73:50010, dest: /10.103.0.38:57910, bytes: 1056768, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 0, srvID: 
DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
duration: 3825
{noformat}

After a short investigation, I found the source of this behaviour:
* StoreScanner's constructor calls StoreFileScanner::seek, which (after 
several levels of calls) calls HFileBlock::readBlockDataInternal, which 
reads the block and pre-reads the header of the next block.
* This pre-read header is stored in a ThreadLocal variable, and the stream 
is left positioned right behind the header of the next block.
* After the constructor finishes, the scanner code starts scanning; once the 
pre-read block data is exhausted, it calls HFileReaderV2::readNextDataBlock, 
which again calls HFileBlock::readBlockDataInternal. But this call occurs 
from a different thread, so there is nothing useful in the ThreadLocal 
variable (see the sketch below).
* Due to this, the stream is asked to seek backwards, which causes the 
duplicate DN request.

As far as I understood from the trunk code, the problem has not been fixed yet.
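
To make the cross-thread miss concrete, here is a minimal sketch of the 
pattern (class and method names are hypothetical, not the actual HBase code):
{code}
public class PrefetchSketch {
  static class PrefetchedHeader {
    long offset = -1;              // block offset the cached header belongs to
    byte[] header = new byte[24];  // header bytes pre-read by the last call
  }

  // One cache entry per thread: this is the crux of the problem.
  private final ThreadLocal<PrefetchedHeader> prefetchedHeader =
      new ThreadLocal<PrefetchedHeader>() {
        @Override
        protected PrefetchedHeader initialValue() {
          return new PrefetchedHeader();
        }
      };

  byte[] readBlock(long offset) {
    PrefetchedHeader cached = prefetchedHeader.get();
    if (cached.offset == offset) {
      // Hit: this thread already read the header; the stream is positioned
      // just past it, so we can keep reading forward without seeking.
      return readBody(cached.header);
    }
    // Miss: on another thread the ThreadLocal is empty, so the stream must
    // seek backwards to re-read the header, causing the duplicate DN request.
    return seekBackAndRead(offset);
  }

  private byte[] readBody(byte[] header) { return new byte[0]; }      // stub
  private byte[] seekBackAndRead(long offset) { return new byte[0]; } // stub
}
{code}
Any cache keyed by thread rather than by stream position will miss as soon as 
the scan continues on a different thread, which is exactly what the log below 
shows.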

Log of the calls in the process described above:
{noformat}
2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileBlockIndex: 
loadDataBlockWithScanInfo: entered
2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
seekTo: readBlock, ofs = 0, size = -1
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
Before block read: path = 
hdfs://tsthdp1.p:9000/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
readBlockDataInternal. Ofs = 0, is.pos = 137257042, ondDiskSizeWithHeader = -1
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
readBlockDataInternal: prefetchHeader.ofs = -1, thread = 48
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
FSReaderV2: readAtOffset: size = 24, offset = 0, peekNext = false
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: targetPos 
= 0, pos = 137257042, blockEnd = 137257229
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: not done, 
blockEnd = -1
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: 
readWithStrategy: before seek, pos = 0, blockEnd = -1, currentNode = 
10.103.0.73:50010
2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockAt: 
blockEnd updated to 137257229
2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: blockSeekTo: 
loop, target = 0
2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockReader: 
dn = tsthdp2.p, file = 
/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b42bc40aa9e18ac8a4b,
 bl
2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: ofs 
= 0, len = 24
2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: try 
to read
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
done, len = 24
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
FSReaderV2: readAtOffset: size = 35899, offset = 24, peekNext = true
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: targetPos 
= 24, pos = 24, blockEnd = 137257229
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: check that 
we cat skip diff = 0
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: try to 
fast-forward on diff = 0, pos = 24
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: pos after 
= 24
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: ofs 
= 24, len = 35923
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: try 
to read
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
done, len = 35923
2014-06-18 14:55:36,642 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
Block data read
2014-06-18 14:55:36,642 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
After block read, ms = 25191000
2014-06-18 14:55:36,670 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
readNextDataBlock: ent

[jira] [Updated] (HBASE-11402) Scanner performs redundant datanode requests

2014-06-23 Thread Max Lapan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Lapan updated HBASE-11402:
--

Summary: Scanner performs redundant datanode requests  (was: Scanner 
perform redundant datanode requests)

> Scanner performs redundant datanode requests
> 
>
> Key: HBASE-11402
> URL: https://issues.apache.org/jira/browse/HBASE-11402
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, Scanners
>Reporter: Max Lapan
>
> Using HBase 0.94.6, I found duplicate datanode requests of this sort:
> {noformat}
> 2014-06-09 14:12:22,039 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57897, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 35840, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 109928797000
> 2014-06-09 14:12:22,080 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
> /10.103.0.73:50010, dest: /10.103.0.38:57910, bytes: 1056768, op: HDFS_READ, 
> cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 0, srvID: 
> DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
> BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
> duration: 3825
> {noformat}
> After a short investigation, I found the source of this behaviour:
> * StoreScanner's constructor calls StoreFileScanner::seek, which (after 
> several levels of calls) calls HFileBlock::readBlockDataInternal, which 
> reads the block and pre-reads the header of the next block.
> * This pre-read header is stored in a ThreadLocal variable, and the stream 
> is left positioned right behind the header of the next block.
> * After the constructor finishes, the scanner code starts scanning; once the 
> pre-read block data is exhausted, it calls HFileReaderV2::readNextDataBlock, 
> which again calls HFileBlock::readBlockDataInternal. But this call occurs 
> from a different thread, so there is nothing useful in the ThreadLocal 
> variable.
> * Due to this, the stream is asked to seek backwards, which causes the 
> duplicate DN request.
> As far as I understood from the trunk code, the problem has not been fixed yet.
> Log of the calls in the process described above:
> {noformat}
> 2014-06-18 14:55:36,616 INFO 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex: loadDataBlockWithScanInfo: 
> entered
> 2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> seekTo: readBlock, ofs = 0, size = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
> Before block read: path = 
> hdfs://tsthdp1.p:9000/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal. Ofs = 0, is.pos = 137257042, ondDiskSizeWithHeader = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> readBlockDataInternal: prefetchHeader.ofs = -1, thread = 48
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 24, offset = 0, peekNext = false
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 0, pos = 137257042, blockEnd = 137257229
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: not 
> done, blockEnd = -1
> 2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: 
> readWithStrategy: before seek, pos = 0, blockEnd = -1, currentNode = 
> 10.103.0.73:50010
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockAt: 
> blockEnd updated to 137257229
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: blockSeekTo: 
> loop, target = 0
> 2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: 
> getBlockReader: dn = tsthdp2.p, file = 
> /hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b42bc40aa9e18ac8a4b,
>  bl
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> ofs = 0, len = 24
> 2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> try to read
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
> done, len = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
> FSReaderV2: readAtOffset: size = 35899, offset = 24, peekNext = true
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: 
> targetPos = 24, pos = 24, blockEnd = 137257229
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: check 
> that we cat skip diff = 0
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: try to 
> fast-forward on diff = 0, pos = 24
> 2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: pos 
> after = 24
> 20

[jira] [Created] (HBASE-11402) Scanner perform redundant datanode requests

2014-06-23 Thread Max Lapan (JIRA)
Max Lapan created HBASE-11402:
-

 Summary: Scanner perform redundant datanode requests
 Key: HBASE-11402
 URL: https://issues.apache.org/jira/browse/HBASE-11402
 Project: HBase
  Issue Type: Bug
  Components: HFile, Scanners
Reporter: Max Lapan


Using HBase 0.94.6, I found duplicate datanode requests of this sort:
{noformat}
2014-06-09 14:12:22,039 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/10.103.0.73:50010, dest: /10.103.0.38:57897, bytes: 1056768, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 35840, srvID: 
DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
duration: 109928797000
2014-06-09 14:12:22,080 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/10.103.0.73:50010, dest: /10.103.0.38:57910, bytes: 1056768, op: HDFS_READ, 
cliID: DFSClient_NONMAPREDUCE_1702752887_26, offset: 0, srvID: 
DS-504316153-10.103.0.73-50010-1342437562377, blockid: 
BP-404551095-10.103.0.38-1376045452213:blk_3541255952831727320_613837, 
duration: 3825
{noformat}

After a short investigation, I found the source of this behaviour:
* StoreScanner's constructor calls StoreFileScanner::seek, which (after 
several levels of calls) calls HFileBlock::readBlockDataInternal, which 
reads the block and pre-reads the header of the next block.
* This pre-read header is stored in a ThreadLocal variable, and the stream 
is left positioned right behind the header of the next block.
* After the constructor finishes, the scanner code starts scanning; once the 
pre-read block data is exhausted, it calls HFileReaderV2::readNextDataBlock, 
which again calls HFileBlock::readBlockDataInternal. But this call occurs 
from a different thread, so there is nothing useful in the ThreadLocal 
variable.
* Due to this, the stream is asked to seek backwards, which causes the 
duplicate DN request.

As far as I understood from the trunk code, the problem has not been fixed yet.

Log of the calls in the process described above:
{noformat}
2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileBlockIndex: 
loadDataBlockWithScanInfo: entered
2014-06-18 14:55:36,616 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
seekTo: readBlock, ofs = 0, size = -1
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
Before block read: path = 
hdfs://tsthdp1.p:9000/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
readBlockDataInternal. Ofs = 0, is.pos = 137257042, ondDiskSizeWithHeader = -1
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
readBlockDataInternal: prefetchHeader.ofs = -1, thread = 48
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
FSReaderV2: readAtOffset: size = 24, offset = 0, peekNext = false
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: targetPos 
= 0, pos = 137257042, blockEnd = 137257229
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: seek: not done, 
blockEnd = -1
2014-06-18 14:55:36,617 INFO org.apache.hadoop.hdfs.DFSClient: 
readWithStrategy: before seek, pos = 0, blockEnd = -1, currentNode = 
10.103.0.73:50010
2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockAt: 
blockEnd updated to 137257229
2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: blockSeekTo: 
loop, target = 0
2014-06-18 14:55:36,618 INFO org.apache.hadoop.hdfs.DFSClient: getBlockReader: 
dn = tsthdp2.p, file = 
/hbase/webpagesII/ba16051997b1272f00bed5f65094dc63/p/c866b7b0eded4b42bc40aa9e18ac8a4b,
 bl
2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: ofs 
= 0, len = 24
2014-06-18 14:55:36,627 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: try 
to read
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
done, len = 24
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hbase.io.hfile.HFile: 
FSReaderV2: readAtOffset: size = 35899, offset = 24, peekNext = true
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: targetPos 
= 24, pos = 24, blockEnd = 137257229
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: check that 
we cat skip diff = 0
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: try to 
fast-forward on diff = 0, pos = 24
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: seek: pos after 
= 24
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: ofs 
= 24, len = 35923
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: try 
to read
2014-06-18 14:55:36,641 INFO org.apache.hadoop.hdfs.DFSClient: readBuffer: 
done, len = 35923
2014-06-18 14:55:36,642 INFO org.apache.hadoop.hbase.io.hfile.HFileReaderV2: 
Block data read
2014-06-18 14:55:36,642 INFO org.apache.hadoop.hbase.io.

[jira] [Commented] (HBASE-11297) Remove some synchros in the rpcServer responder

2014-06-23 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040576#comment-14040576
 ] 

Nicolas Liochon commented on HBASE-11297:
-

I've tested v3 on master. It is still around 3 to 5% faster in my test (reads 
with a single client, 1 to 16 threads), so if Andrew's tests are OK I will 
commit this.

> Remove some synchros in the rpcServer responder
> ---
>
> Key: HBASE-11297
> URL: https://issues.apache.org/jira/browse/HBASE-11297
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11297.v1.patch, 11297.v2.patch, 11297.v2.v98.patch, 
> 11297.v3.patch
>
>
> This is on top of another patch that I'm going to put into another jira.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11027) Remove kv.isDeleteXX() and related methods and use CellUtil apis.

2014-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040524#comment-14040524
 ] 

Hadoop QA commented on HBASE-11027:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651936/HBASE-11027.patch
  against trunk revision .
  ATTACHMENT ID: 12651936

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9825//console

This message is automatically generated.

> Remove kv.isDeleteXX() and related methods and use CellUtil apis.
> -
>
> Key: HBASE-11027
> URL: https://issues.apache.org/jira/browse/HBASE-11027
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.99.0
>
> Attachments: HBASE-11027.patch
>
>
> We have code like 
> {code}
> kv.isLatestTimestamp() && kv.isDeleteType()
> {code}
> We could remove them and use CellUtil.isDeleteType() so that Cells can be 
> used directly instead of converting a Cell to a KeyValue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10861) Supporting API in ByteRange

2014-06-23 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040515#comment-14040515
 ] 

ramkrishna.s.vasudevan commented on HBASE-10861:


{code}
-SimpleByteRange that = (SimpleByteRange) thatObject;
+SimpleMutableByteRange that = (SimpleMutableByteRange) thatObject;
{code}
I think this change is not needed. It should be SimpleByteRange only. The 
equals in SimpleMutableByteRange already checks against SimpleMutableByteRange.
bq. These you want to be exposed APIs?
These were already part of the ByteRange interface; I did not add them newly. 
So I think for now we can continue with them as they are?
bq. public void clean() throws Exception -> What to do within this?
For DBB-allocated byte buffers, we call a clean method using reflection. So in 
case we use an off-heap backed byte range, we may need to provide a way to 
clean it. Hence I added it. But it is now added only for the 
AbstractPositionedBR.
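
For reference, here is a minimal sketch of the reflection-based clean 
described above (the helper class and its name are assumptions for 
illustration, matching the common pre-Java-9 idiom rather than the exact 
patch code):
{code}
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

// Hypothetical helper, not the patch itself: frees the off-heap memory of a
// direct ByteBuffer via reflection (the sun.misc.Cleaner idiom of this era).
public class DirectBufferCleaner {
  public static void clean(ByteBuffer buffer) throws Exception {
    if (!buffer.isDirect()) {
      return; // heap-backed buffers are reclaimed by the GC; nothing to do
    }
    Method cleanerMethod = buffer.getClass().getMethod("cleaner");
    cleanerMethod.setAccessible(true);              // class is package-private
    Object cleaner = cleanerMethod.invoke(buffer);  // a sun.misc.Cleaner
    Method cleanMethod = cleaner.getClass().getMethod("clean");
    cleanMethod.setAccessible(true);
    cleanMethod.invoke(cleaner);                    // releases the native memory
  }
}
{code}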

> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch, HBASE-10861_2.patch, 
> HBASE-10861_3.patch, HBASE-10861_4.patch
>
>
> We would need APIs such as:
> setLimit(int limit)
> getLimit()
> asReadOnly()
> These APIs would help in implementations that keep their buffers off-heap 
> (for now, BRs backed by DBB).
> If anything more is needed, it could be added when needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11382) Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp)

2014-06-23 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11382:
---

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to the master branch. Thanks for the patch, Srikanth.

> Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put 
> wrt timestamp)
> ---
>
> Key: HBASE-11382
> URL: https://issues.apache.org/jira/browse/HBASE-11382
> Project: HBase
>  Issue Type: Bug
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-11382.patch, HBASE-11382_v2.patch
>
>
> Adding a small unit test to verify that the delete mutation honors the 
> timestamp of the delete object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10861) Supporting API in ByteRange

2014-06-23 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040498#comment-14040498
 ] 

Anoop Sam John commented on HBASE-10861:


{code}
 if (!(thatObject instanceof SimpleByteRange)) {
   return false;
 }
-SimpleByteRange that = (SimpleByteRange) thatObject;
+SimpleMutableByteRange that = (SimpleMutableByteRange) thatObject;
{code}
Is this code reachable? If we have to compare bytes between SimpleByteRange and 
SimpleMutableByteRange and decide whether they are equal, we need to add an 
instanceof check for SimpleMutableByteRange as well. Then it would be better to 
have a similar equals check in SimpleMutableByteRange too.
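
For illustration, a minimal sketch (an assumption, not the patch itself) of an 
equals in SimpleMutableByteRange where the instanceof check names the same 
type as the cast; bytes, offset and length are assumed to be the fields 
inherited from the abstract base class:
{code}
// Sketch for SimpleMutableByteRange.equals; SimpleByteRange would carry a
// mirrored check of its own. Bytes is org.apache.hadoop.hbase.util.Bytes.
@Override
public boolean equals(Object thatObject) {
  if (this == thatObject) {
    return true;
  }
  if (!(thatObject instanceof SimpleMutableByteRange)) {
    return false; // matches the cast below, so no ClassCastException
  }
  SimpleMutableByteRange that = (SimpleMutableByteRange) thatObject;
  return Bytes.equals(bytes, offset, length, that.bytes, that.offset, that.length);
}
{code}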

deepCopyToNewArray, deepCopyTo, deepCopySubRangeTo -> Do you want these to be 
exposed APIs? I think these are used within other exposed APIs. If so, consider 
removing them from the interface and making them protected in AbstractByteRange (?)

public void clean() throws Exception -> What to do within this?



> Supporting API in ByteRange
> ---
>
> Key: HBASE-10861
> URL: https://issues.apache.org/jira/browse/HBASE-10861
> Project: HBase
>  Issue Type: Improvement
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-10861.patch, HBASE-10861_2.patch, 
> HBASE-10861_3.patch, HBASE-10861_4.patch
>
>
> We would need APIs such as:
> setLimit(int limit)
> getLimit()
> asReadOnly()
> These APIs would help in implementations that keep their buffers off-heap 
> (for now, BRs backed by DBB).
> If anything more is needed, it could be added when needed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11390) PerformanceEvaluation: add an option to use a single connection

2014-06-23 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040491#comment-14040491
 ] 

Nicolas Liochon commented on HBASE-11390:
-

The test failures and the findbugs warnings are unrelated. 

> PerformanceEvaluation: add an option to use a single connection
> ---
>
> Key: HBASE-11390
> URL: https://issues.apache.org/jira/browse/HBASE-11390
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.99.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.99.0
>
> Attachments: 11390.v1.patch, 11390.v2.patch
>
>
> The PE tool uses one connection per client. This does not match use cases 
> where multiple threads share the same connection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)