[jira] [Commented] (HBASE-6980) Parallel Flushing Of Memstores

2012-10-17 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477647#comment-13477647
 ] 

ramkrishna.s.vasudevan commented on HBASE-6980:
---

bq.#1. It is not clear why we even write a META entry for flushes...
Yes.  It is not actually used, but it still forms the latest entry.  Currently, 
0.94 and trunk use a map to form the name of the replay-edits file, which should 
carry the maximum seq id of the edits.  Previously I remember that it was the 
minimum seq id that was used for naming the replay edits. 
In one of the other issues we were discussing the usefulness of the meta data 
entry after flush. We can verify once again and remove it if it is not of much 
use.

bq.we track the min seq id from the current memstore instead of the max seq id 
from the snapshot memstore
The HLog keeps track of the min seq id for the region. So you are suggesting that 
we track only the max seq id whenever an append happens to the HLog? Then on flush 
start we just clear this entry and use that max value to complete the flush. 
Thanks for the insights.  
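To make the suggestion concrete, here is a minimal illustrative sketch (not code 
from any patch; the map, method names, and -1 sentinel are assumptions, and the 
java.util.concurrent / org.apache.hadoop.hbase.util.Bytes imports are omitted) of 
tracking the max seq id per region on append and clearing it at flush start:

{code}
// Hypothetical sketch: record the highest sequence id appended per region,
// clear it when a flush starts, and use that value to complete the flush.
private final ConcurrentMap<byte[], AtomicLong> maxSeqIdPerRegion =
    new ConcurrentSkipListMap<byte[], AtomicLong>(Bytes.BYTES_COMPARATOR);

void onAppend(byte[] regionName, long seqId) {
  AtomicLong max = maxSeqIdPerRegion.get(regionName);
  if (max == null) {
    AtomicLong prev = maxSeqIdPerRegion.putIfAbsent(regionName, new AtomicLong(seqId));
    if (prev == null) return;  // we installed the first value for this region
    max = prev;
  }
  long current;
  while ((current = max.get()) < seqId && !max.compareAndSet(current, seqId)) {
    // retry until this seqId is recorded or a larger value already is
  }
}

long onFlushStart(byte[] regionName) {
  AtomicLong max = maxSeqIdPerRegion.remove(regionName);  // "clear this entry"
  return max == null ? -1L : max.get();                   // use this to complete the flush
}
{code}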
 

 Parallel Flushing Of Memstores
 --

 Key: HBASE-6980
 URL: https://issues.apache.org/jira/browse/HBASE-6980
 Project: HBase
  Issue Type: New Feature
Reporter: Kannan Muthukkaruppan
Assignee: Kannan Muthukkaruppan

 For write-dominated workloads, single-threaded memstore flushing is an 
 unnecessary bottleneck. With a single flusher thread, we are basically not 
 set up to take advantage of the aggregate throughput that multi-disk nodes 
 provide.
 * For puts with WAL enabled, the bottleneck is more likely the single WAL 
 per region server. So this particular fix may not buy as much unless we 
 unlock that bottleneck with multiple commit logs per region server. (Topic 
 for a separate JIRA-- HBASE-6981).
 * But for puts with WAL disabled (e.g., when using HBASE-5783 style fast bulk 
 imports), we should be able to support much better ingest rates with parallel 
 flushing of memstores.
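 As a rough illustration of the idea (not the patch; the class and pool size 
 below are made up), flushing could be fanned out to a small thread pool so 
 that several store files are written concurrently:

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelFlusherSketch {
  /** Stand-in for whatever describes one memstore/region flush. */
  public interface FlushRequest { void flush() throws Exception; }

  private final ExecutorService flushPool = Executors.newFixedThreadPool(4);

  public void flushAll(List<FlushRequest> requests) {
    for (final FlushRequest r : requests) {
      flushPool.submit(new Runnable() {
        public void run() {
          try {
            r.flush();            // each flush writes its own file; disks work in parallel
          } catch (Exception e) {
            e.printStackTrace();  // a real flusher would requeue or abort the server
          }
        }
      });
    }
  }
}
{code}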

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5355) Compressed RPC's for HBase

2012-10-17 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477667#comment-13477667
 ] 

Devaraj Das commented on HBASE-5355:


Just to clarify - trunk doesn't have the SecureRpcEngine stuff (it was removed 
via HBASE-5732). The patch on HBASE-6966 is meant to work with security 
switched both ON and OFF. Could we please have a review of the patch for 
HBASE-6966? [Since 0.96 is going to be a major jump, I guess it makes sense to 
have this feature for 0.96.x only; that made the trunk patch comparatively 
simpler since I didn't have to worry about backward compatibility.]

 Compressed RPC's for HBase
 --

 Key: HBASE-5355
 URL: https://issues.apache.org/jira/browse/HBASE-5355
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Affects Versions: 0.89.20100924
Reporter: Karthik Ranganathan
Assignee: Karthik Ranganathan
 Attachments: HBASE-5355-0.94.patch


 Some applications need the ability to do large batched writes and reads from a 
 remote MR cluster. These eventually get bottlenecked on the network, and the 
 results are often quite compressible.
 The aim here is to add the ability to make compressed calls to the server on 
 both the send and receive paths.
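 For illustration only (the actual patch wires compression into the HBase RPC 
 engine; this sketch just shows payload compression with standard java.util.zip):

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class RpcCompressionSketch {
  /** Compress an RPC payload before it goes on the wire. */
  static byte[] compress(byte[] payload) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    GZIPOutputStream gz = new GZIPOutputStream(bos);
    gz.write(payload);  // large batched results are often highly compressible
    gz.close();         // finish the stream before taking the bytes
    return bos.toByteArray();
  }
}
{code}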

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6651) Thread safety of HTablePool is doubtful

2012-10-17 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-6651:
-

Attachment: HBASE-6651-V2.patch

Thanks for the reviews. I have attached a revised patch.

(1) In order to pass the tests for HTablePool, I changed both HTablePool and 
its tests, but I think it keeps backward compatibility from the users' point 
of view.

I changed the behavior of HTablePool.closeTablePool(). The previous behavior was 
vague and differed between implementations, and I think it was far from the 
“shutdown” described in the javadoc. The new behavior not only closes the objects 
that are idle at that moment, but also makes a kind of reservation to close the 
remaining objects when they are returned to the pool.

I also removed the package-private method HTablePool.getCurrentPoolSize(). Its 
behavior differed between implementations and exposed implementation details, and 
it was only used in tests and hidden from users because it was package-private. 
Instead, I created a class implementing HTableInterfaceFactory that counts pooled 
objects, added it to the tests, and fixed the tests.

Fortunately HTablePool doesn’t expose details of the pooling, and the new 
behavior is more eager to close objects than the previous one, so I think it 
retains enough backward compatibility.

I think the tests will pass, but I still have no environment to run them in.

(2) I created the previous patch with git using the option 
--ignore-space-at-eol, which seems to have caused the failure in applying the 
patch. I created this patch without that option.

(3) Fixed line delimiters.

(4) Fixed javadoc comments, and changed the code to use equals() consistently for 
pooled objects.
Usually, objects we want to pool are heavy to create and not interchangeable, and 
the operator == would be preferable to equals() to identify them. But there are 
few collections based on identity rather than equality, and sticking to identity 
would require extra effort to create such collections. Unless we must guard 
against maliciously overridden equals/hashCode, that extra effort is fruitless, 
so I wanted to make do with a notice in the javadoc.

(5) Used @InterfaceStability.Evolving

(6) Once you get a pooled object with SharedMap.borrowObject() or register one 
with SharedMap.registerObject(), you have to declare the end of its use by 
calling SharedMap.returnObject() or SharedMap.invalidateObject(). On the other 
hand, you can call SharedMap.invalidateObject() and SharedMap.clear() anywhere, 
on any thread. SharedMap.returnObject() returns false if the pool is already 
full, or if invalidateObject() or clear() has been called explicitly elsewhere 
(see the usage sketch after this list). 

(7) Well, I need more time to study how to use https://reviews.apache.org
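A usage sketch of the contract described in (6); it is based only on the method 
names above, and the exact signatures, generics, and the close() fallback are 
assumptions rather than the patch's actual API:

{code}
// Assumed signatures; not necessarily the patch's real API.
void useTable(SharedMap<String, HTableInterface> pool, String key) throws Exception {
  HTableInterface table = pool.borrowObject(key);    // must be paired with return/invalidate
  try {
    // ... work with the borrowed table ...
  } finally {
    boolean pooled = pool.returnObject(key, table);  // false if pool is full or already cleared
    if (!pooled) {
      table.close();  // assumption: caller releases an object the pool did not keep
    }
  }
}
{code}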


 Thread safety of HTablePool is doubtful
 ---

 Key: HBASE-6651
 URL: https://issues.apache.org/jira/browse/HBASE-6651
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.1
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-6651.patch, HBASE-6651-V2.patch, sample.zip, 
 sample.zip, sharedmap_for_hbaseclient.zip


 There are some operations in HTablePool that access PoolMap multiple 
 times without any explicit synchronization. 
 For example, HTablePool.closeTablePool() calls PoolMap.values() and then calls 
 PoolMap.remove(). If other threads add new instances to the pool in the 
 middle of these calls, the newly added instances might be dropped. 
 (HTablePool.closeTablePool() also has another problem: calling it from 
 multiple threads causes HTable to be accessed by multiple threads.)
 Moreover, PoolMap is not thread safe for the same reason.
 For example, PoolMap.put() calls ConcurrentMap.get() and then calls 
 ConcurrentMap.put(). If another thread adds a new instance to the concurrent map 
 in the middle of these calls, the new instance might be dropped.
 The implementations of Pool have the same problems.
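 For illustration, a simplified example (not the HTablePool code itself) of the 
 kind of non-atomic get-then-put described above, and the ConcurrentMap idiom 
 that avoids dropping the losing thread's value:

{code}
ConcurrentMap<String, Queue<HTableInterface>> pools =
    new ConcurrentHashMap<String, Queue<HTableInterface>>();

Queue<HTableInterface> getPool(String tableName) {
  // Racy version: get() then put() - two threads can both see null,
  // and one of the queues (possibly already holding instances) is dropped.
  //   Queue<HTableInterface> q = pools.get(tableName);
  //   if (q == null) { q = new ConcurrentLinkedQueue<HTableInterface>(); pools.put(tableName, q); }

  // Atomic version: putIfAbsent keeps whichever queue won the race.
  Queue<HTableInterface> q = new ConcurrentLinkedQueue<HTableInterface>();
  Queue<HTableInterface> existing = pools.putIfAbsent(tableName, q);
  return existing != null ? existing : q;
}
{code}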

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6991) Escape \ in Bytes.toStringBinary() and its counterpart Bytes.toBytesBinary()

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477688#comment-13477688
 ] 

Hadoop QA commented on HBASE-6991:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12549444/HBASE-6991_trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3060//console

This message is automatically generated.

 Escape \ in Bytes.toStringBinary() and its counterpart Bytes.toBytesBinary()
 --

 Key: HBASE-6991
 URL: https://issues.apache.org/jira/browse/HBASE-6991
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-6991_trunk.patch


 Since \ is used to escape non-printable character but not treated as 
 special character in conversion, it could lead to unexpected conversion.
 For example, please consider the following code snippet.
 {code}
 public void testConversion() {
   byte[] original = {
   '\\', 'x', 'A', 'D'
   };
   String stringFromBytes = Bytes.toStringBinary(original);
   byte[] converted = Bytes.toBytesBinary(stringFromBytes);
   System.out.println("Original: " + Arrays.toString(original));
   System.out.println("Converted: " + Arrays.toString(converted));
   System.out.println("Reversible?: " + (Bytes.compareTo(original, converted) == 0));
 }
 Output:
 ---
 Original: [92, 120, 65, 68]
 Converted: [-83]
 Reversible?: false
 {code}
 The \ character needs to be treated as special and must be encoded as a 
 non-printable character (\x5C) to avoid any kind of ambiguity during 
 conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6991) Escape \ in Bytes.toStringBinary() and its counterpart Bytes.toBytesBinary()

2012-10-17 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477706#comment-13477706
 ] 

Aditya Kishore commented on HBASE-6991:
---

It should be noted that any previously encoded binary string containing \ will 
still be decoded correctly by the unchanged toBytesBinary() function. The change 
to toStringBinary() ensures that any new encoding of a byte array containing \ 
is 100% reversible, without any ambiguity.
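For illustration, a simplified sketch of the escaping rule (not the actual patch, 
which only adds the backslash case to the existing toStringBinary() logic, and 
which keeps a narrower set of printable characters):

{code}
static String toStringBinarySketch(byte[] b) {
  StringBuilder out = new StringBuilder();
  for (int i = 0; i < b.length; i++) {
    int ch = b[i] & 0xFF;
    if (ch == '\\') {
      out.append("\\x5C");                       // new: escape the escape character itself
    } else if (ch >= ' ' && ch < 0x7F) {
      out.append((char) ch);                     // printable ASCII passes through
    } else {
      out.append(String.format("\\x%02X", ch));  // non-printable -> \xNN
    }
  }
  return out.toString();
}
{code}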

 Escape \ in Bytes.toStringBinary() and its counterpart Bytes.toBytesBinary()
 --

 Key: HBASE-6991
 URL: https://issues.apache.org/jira/browse/HBASE-6991
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-6991_trunk.patch


 Since \ is used to escape non-printable character but not treated as 
 special character in conversion, it could lead to unexpected conversion.
 For example, please consider the following code snippet.
 {code}
 public void testConversion() {
   byte[] original = {
   '\\', 'x', 'A', 'D'
   };
   String stringFromBytes = Bytes.toStringBinary(original);
   byte[] converted = Bytes.toBytesBinary(stringFromBytes);
   System.out.println("Original: " + Arrays.toString(original));
   System.out.println("Converted: " + Arrays.toString(converted));
   System.out.println("Reversible?: " + (Bytes.compareTo(original, converted) == 0));
 }
 Output:
 ---
 Original: [92, 120, 65, 68]
 Converted: [-83]
 Reversible?: false
 {code}
 The \ character needs to be treated as special and must be encoded as a 
 non-printable character (\x5C) to avoid any kind of ambiguity during 
 conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6991) Escape \ in Bytes.toStringBinary() and its counterpart Bytes.toBytesBinary()

2012-10-17 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-6991:
--

Description: 
Since \ is used to escape non-printable character but not treated as special 
character in conversion, it could lead to unexpected conversion.

For example, please consider the following code snippet.

{code}
public void testConversion() {
  byte[] original = {
  '\\', 'x', 'A', 'D'
  };
  String stringFromBytes = Bytes.toStringBinary(original);
  byte[] converted = Bytes.toBytesBinary(stringFromBytes);
  System.out.println("Original: " + Arrays.toString(original));
  System.out.println("Converted: " + Arrays.toString(converted));
  System.out.println("Reversible?: " + (Bytes.compareTo(original, converted) == 0));
}

Output:
---
Original: [92, 120, 65, 68]
Converted: [-83]
Reversible?: false
{code}

The \ character needs to be treated as special and must be encoded as a 
non-printable character (\x5C) to avoid any kind of ambiguity during 
conversion.

  was:
Since \ is used to escape non-printable character but not treated as special 
character in conversion, it could lead to unexpected conversion.

For example, please consider the following code snippet.

{code}
public void testConversion() {
  byte[] original = {
  '\\', 'x', 'A', 'D'
  };
  String stringFromBytes = Bytes.toStringBinary(original);
  byte[] converted = Bytes.toBytesBinary(stringFromBytes);
  System.out.println("Original: " + Arrays.toString(original));
  System.out.println("Converted: " + Arrays.toString(converted));
  System.out.println("Reversible?: " + (Bytes.compareTo(original, converted) == 0));
}

Output:
---
Original: [92, 120, 65, 68]
Converted: [-83]
Reversible?: false
{code}

The \ character needs to be treated as special and must be encoded as a 
non-printable character (\x5C) to avoid any kind of unambiguity during 
conversion.


 Escape \ in Bytes.toStringBinary() and its counterpart Bytes.toBytesBinary()
 --

 Key: HBASE-6991
 URL: https://issues.apache.org/jira/browse/HBASE-6991
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-6991_trunk.patch


 Since \ is used to escape non-printable character but not treated as 
 special character in conversion, it could lead to unexpected conversion.
 For example, please consider the following code snippet.
 {code}
 public void testConversion() {
   byte[] original = {
   '\\', 'x', 'A', 'D'
   };
   String stringFromBytes = Bytes.toStringBinary(original);
   byte[] converted = Bytes.toBytesBinary(stringFromBytes);
   System.out.println("Original: " + Arrays.toString(original));
   System.out.println("Converted: " + Arrays.toString(converted));
   System.out.println("Reversible?: " + (Bytes.compareTo(original, converted) == 0));
 }
 Output:
 ---
 Original: [92, 120, 65, 68]
 Converted: [-83]
 Reversible?: false
 {code}
 The \ character needs to be treated as special and must be encoded as a 
 non-printable character (\x5C) to avoid any kind of ambiguity during 
 conversion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread liang xie (JIRA)
liang xie created HBASE-7000:


 Summary: Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7000:
-

Attachment: HBASE-7000.patch

There are two choices here:
1. The attached file changes HConstants.MAXIMUM_VALUE_LENGTH to some value other 
than the old Integer.MAX_VALUE.

2. Modify KeyValue.java and remove the following code:
if (vlength > HConstants.MAXIMUM_VALUE_LENGTH) { // FindBugs INT_VACUOUS_COMPARISON
  throw new IllegalArgumentException("Value length " + vlength + " > "
      + HConstants.MAXIMUM_VALUE_LENGTH);
}

 Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 --

 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7000.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread liang xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477794#comment-13477794
 ] 

liang xie commented on HBASE-7000:
--

IMHO, HConstants.MAXIMUM_VALUE_LENGTH may be changed in the future, so the check 
statement should always be there for safety.

 Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 --

 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7000.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread liang xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477795#comment-13477795
 ] 

liang xie commented on HBASE-7000:
--

BTW, the WARNING was caused by an integer comparison (vlength > Integer.MAX_VALUE) 
that always returns the same value.
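In other words (illustrative only), since HConstants.MAXIMUM_VALUE_LENGTH is 
currently Integer.MAX_VALUE, the guard can never fire:

{code}
int vlength = Integer.MAX_VALUE;                 // even the largest possible int value
boolean tooLong = vlength > Integer.MAX_VALUE;   // always false -> INT_VACUOUS_COMPARISON
{code}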

 Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 --

 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7000.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7000:
-

Status: Patch Available  (was: Open)

 Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 --

 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7000.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class

2012-10-17 Thread liang xie (JIRA)
liang xie created HBASE-7001:


 Summary: Fix the RCN Correctness Warning in MemStoreFlusher class
 Key: HBASE-7001
 URL: https://issues.apache.org/jira/browse/HBASE-7001
 Project: HBase
  Issue Type: Bug
Reporter: liang xie
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7001:
-

Description: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS
shows :

Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details)
In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher
In method 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry)
Value loaded from region
Return value of 
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry)
At MemStoreFlusher.java:[line 346]
Redundant null check at MemStoreFlusher.java:[line 363]
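The general shape of this FindBugs pattern, as an illustration only (every name 
below is made up, not the MemStoreFlusher source): the value is dereferenced 
first, so the later null check is either redundant or comes too late to prevent 
the NPE.

{code}
Item item = lookup(key);   // may return null
long size = item.size();   // ~line 346: this dereference would already throw NPE if item were null
// ... later in the same method ...
if (item == null) {        // ~line 363: redundant null check flagged by FindBugs
  return;
}
{code}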

 Fix the RCN Correctness Warning in MemStoreFlusher class
 

 Key: HBASE-7001
 URL: https://issues.apache.org/jira/browse/HBASE-7001
 Project: HBase
  Issue Type: Bug
Reporter: liang xie
Priority: Minor

 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS
 shows :
   
 Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details)
 In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher
 In method 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry)
 Value loaded from region
 Return value of 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry)
 At MemStoreFlusher.java:[line 346]
 Redundant null check at MemStoreFlusher.java:[line 363]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7001:
-

Assignee: liang xie
  Status: Patch Available  (was: Open)

 Fix the RCN Correctness Warning in MemStoreFlusher class
 

 Key: HBASE-7001
 URL: https://issues.apache.org/jira/browse/HBASE-7001
 Project: HBase
  Issue Type: Bug
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7001.patch


 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS
 shows :
   
 Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details)
 In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher
 In method 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry)
 Value loaded from region
 Return value of 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry)
 At MemStoreFlusher.java:[line 346]
 Redundant null check at MemStoreFlusher.java:[line 363]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7001:
-

Attachment: HBASE-7001.patch

 Fix the RCN Correctness Warning in MemStoreFlusher class
 

 Key: HBASE-7001
 URL: https://issues.apache.org/jira/browse/HBASE-7001
 Project: HBase
  Issue Type: Bug
Reporter: liang xie
Priority: Minor
 Attachments: HBASE-7001.patch


 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS
 shows :
   
 Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details)
 In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher
 In method 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry)
 Value loaded from region
 Return value of 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry)
 At MemStoreFlusher.java:[line 346]
 Redundant null check at MemStoreFlusher.java:[line 363]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-6001) Upgrade slf4j to 1.6.1

2012-10-17 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu reassigned HBASE-6001:
-

Assignee: rajeshbabu  (was: Jimmy Xiang)

 Upgrade slf4j to 1.6.1
 --

 Key: HBASE-6001
 URL: https://issues.apache.org/jira/browse/HBASE-6001
 Project: HBase
  Issue Type: Task
Reporter: Jimmy Xiang
Assignee: rajeshbabu
 Fix For: 0.92.2, 0.94.1, 0.96.0

 Attachments: hbase-6001.patch


 We need to upgrade slf4j to 1.6.1 since other hadoop components use 1.6.1 now.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477816#comment-13477816
 ] 

Hadoop QA commented on HBASE-7000:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549488/HBASE-7000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
  org.apache.hadoop.hbase.master.TestSplitLogManager

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3061//console

This message is automatically generated.

 Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 --

 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7000.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477845#comment-13477845
 ] 

Hadoop QA commented on HBASE-7001:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549491/HBASE-7001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3062//console

This message is automatically generated.

 Fix the RCN Correctness Warning in MemStoreFlusher class
 

 Key: HBASE-7001
 URL: https://issues.apache.org/jira/browse/HBASE-7001
 Project: HBase
  Issue Type: Bug
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7001.patch


 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS
 shows :
   
 Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details)
 In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher
 In method 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry)
 Value loaded from region
 Return value of 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry)
 At MemStoreFlusher.java:[line 346]
 Redundant null check at MemStoreFlusher.java:[line 363]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7002) Fix all 4 findbug performance warnings

2012-10-17 Thread liang xie (JIRA)
liang xie created HBASE-7002:


 Summary: Fix all 4 findbug performance warnings
 Key: HBASE-7002
 URL: https://issues.apache.org/jira/browse/HBASE-7002
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Priority: Minor
 Attachments: HBASE-7002.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7002) Fix all 4 findbug performance warnings

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7002:
-

Description: Fix the perf warning from this report : 
https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_PERFORMANCE

 Fix all 4 findbug performance warnings
 --

 Key: HBASE-7002
 URL: https://issues.apache.org/jira/browse/HBASE-7002
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Priority: Minor
 Attachments: HBASE-7002.patch


 Fix the perf warning from this report : 
 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_PERFORMANCE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7002) Fix all 4 findbug performance warnings

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7002:
-

Attachment: HBASE-7002.patch

 Fix all 4 findbug performance warnings
 --

 Key: HBASE-7002
 URL: https://issues.apache.org/jira/browse/HBASE-7002
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Priority: Minor
 Attachments: HBASE-7002.patch


 Fix the perf warning from this report : 
 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_PERFORMANCE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7002) Fix all 4 findbug performance warnings

2012-10-17 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated HBASE-7002:
-

Assignee: liang xie
  Status: Patch Available  (was: Open)

 Fix all 4 findbug performance warnings
 --

 Key: HBASE-7002
 URL: https://issues.apache.org/jira/browse/HBASE-7002
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7002.patch


 Fix the perf warning from this report : 
 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_PERFORMANCE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6001) Upgrade slf4j to 1.6.1

2012-10-17 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-6001:
--

Assignee: Jimmy Xiang  (was: rajeshbabu)

 Upgrade slf4j to 1.6.1
 --

 Key: HBASE-6001
 URL: https://issues.apache.org/jira/browse/HBASE-6001
 Project: HBase
  Issue Type: Task
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.92.2, 0.94.1, 0.96.0

 Attachments: hbase-6001.patch


 We need to upgrade slf4j to 1.6.1 since other hadoop components use 1.6.1 now.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6001) Upgrade slf4j to 1.6.1

2012-10-17 Thread rajeshbabu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477851#comment-13477851
 ] 

rajeshbabu commented on HBASE-6001:
---

Sorry, I changed the assignee unknowingly. Assigned back to Jimmy.

 Upgrade slf4j to 1.6.1
 --

 Key: HBASE-6001
 URL: https://issues.apache.org/jira/browse/HBASE-6001
 Project: HBase
  Issue Type: Task
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.92.2, 0.94.1, 0.96.0

 Attachments: hbase-6001.patch


 We need to upgrade slf4j to 1.6.1 since other hadoop components use 1.6.1 now.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7002) Fix all 4 findbug performance warnings

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477882#comment-13477882
 ] 

Hadoop QA commented on HBASE-7002:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549501/HBASE-7002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestSplitTransaction

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//console

This message is automatically generated.

 Fix all 4 findbug performance warnings
 --

 Key: HBASE-7002
 URL: https://issues.apache.org/jira/browse/HBASE-7002
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7002.patch


 Fix the perf warning from this report : 
 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_PERFORMANCE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477905#comment-13477905
 ] 

Michael Drzal commented on HBASE-6974:
--

[~lhofhansl] 
- 1024 ms to s conversion fixed.  I think I spent too much time converting 
bytes around, sorry
- I'll add in the memstore flusher metric and post a patch in a bit
- I'll look into EnvironmentEdge.currentTimeMillis
- I understand what you are saying about currentTimeMillis, but let me try to 
restate what you said so that I am sure that we are on the same page:

If I move the call to currentTimeMillis inside the while loop, that means that 
I will have to keep another variable off to the side to keep track of the 
total, plus another one to accomplish the swap.  Doing this, I would call 
currentTimeMillis once for each time through the loop, correct?  If you think 
that is a better way to go, I can do that.
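To make sure we are talking about the same two shapes, here is a rough sketch 
(the variable and helper names are made up; only 
EnvironmentEdgeManager.currentTimeMillis() is a real HBase utility):

{code}
// (a) one call before and one after the loop
long start = EnvironmentEdgeManager.currentTimeMillis();
while (blocked()) {
  waitABit();
}
blockedTimeMs += EnvironmentEdgeManager.currentTimeMillis() - start;

// (b) calling inside the loop: a running total plus a swap variable
long last = EnvironmentEdgeManager.currentTimeMillis();
while (blocked()) {
  waitABit();
  long now = EnvironmentEdgeManager.currentTimeMillis();
  blockedTimeMs += now - last;  // total updated once per iteration
  last = now;                   // the extra "swap" variable
}
{code}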



 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-6942:
--

Attachment: HBASE-6942_V6.patch

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942.patch, HBASE-6942_V2.patch, 
 HBASE-6942_V3.patch, HBASE-6942_V4.patch, HBASE-6942_V5.patch, 
 HBASE-6942_V6.patch


 We can provide an end point implementation for doing a bulk deletion of 
 rows(based on a scan) at the server side. This can reduce the time taken for 
 such an operation as right now it need to do a scan to client and issue 
 delete(s) using rowkeys.
 Query like  delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477916#comment-13477916
 ] 

Anoop Sam John commented on HBASE-6942:
---

Patch V6 takes the Scan-based approach.
Regarding returning the number of KVs deleted: I think this is meaningful only 
when the delete type is VERSION. For the other types we don't know exactly how 
many KVs were deleted.
In the case of a VERSION delete we return the number of versions deleted.
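For context, a rough sketch of the client-side scan-then-delete pattern this 
endpoint is meant to collapse into a single server-side call (the table name 
and filter are made up; conf is an existing Configuration):

{code}
void bulkDeleteClientSide(Configuration conf) throws IOException {
  HTable table = new HTable(conf, "table1");
  Scan scan = new Scan();
  scan.setFilter(new PrefixFilter(Bytes.toBytes("row")));  // stands in for the "where" clause
  List<Delete> deletes = new ArrayList<Delete>();
  ResultScanner scanner = table.getScanner(scan);
  try {
    for (Result r : scanner) {
      deletes.add(new Delete(r.getRow()));  // every matching row travels back to the client
    }
  } finally {
    scanner.close();
  }
  table.delete(deletes);  // the deletes are then shipped back to the servers, keyed by row
  table.close();
}
{code}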

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an end point implementation for doing a bulk deletion of 
 rows(based on a scan) at the server side. This can reduce the time taken for 
 such an operation as right now it need to do a scan to client and issue 
 delete(s) using rowkeys.
 Query like  delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-6942:
--

Attachment: HBASE-6942_DeleteTemplate.patch

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an end point implementation for doing a bulk deletion of 
 rows(based on a scan) at the server side. This can reduce the time taken for 
 such an operation as right now it need to do a scan to client and issue 
 delete(s) using rowkeys.
 Query like  delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477919#comment-13477919
 ] 

Anoop Sam John commented on HBASE-6942:
---

HBASE-6942_DeleteTemplate is the patch with the Delete-template approach.
One thing we cannot directly implement here is the deletion of N versions; with 
the scan-based approach, where everything is governed by the scan result, that 
is easy.

Comparing the two, I feel that as a user the scan-based approach will be easier 
to use; only the mixed kind of delete would not be possible in that case.

Requesting opinions from others.

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an end point implementation for doing a bulk deletion of 
 rows(based on a scan) at the server side. This can reduce the time taken for 
 such an operation as right now it need to do a scan to client and issue 
 delete(s) using rowkeys.
 Query like  delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Status: Open  (was: Patch Available)

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Attachment: HBASE-6974-v2.patch

Patch updated based on comments from Lars

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Status: Patch Available  (was: Open)

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477961#comment-13477961
 ] 

Michael Drzal commented on HBASE-6974:
--

I think I addressed all of your concerns except for the placement of 
currentTimeMillis.  Let me know if I understood you correctly, and if so, I can 
make the switch.

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477978#comment-13477978
 ] 

Jean-Marc Spaggiari commented on HBASE-6942:


Should:
NO_OF_VERSIOS_TO_DELETE = noOfVersiosToDelete
be
NO_OF_VERSIONS_TO_DELETE = noOfVersionsToDelete
?

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an end point implementation for doing a bulk deletion of 
 rows(based on a scan) at the server side. This can reduce the time taken for 
 such an operation as right now it need to do a scan to client and issue 
 delete(s) using rowkeys.
 Query like  delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13477987#comment-13477987
 ] 

Ted Yu commented on HBASE-6942:
---

{code}
+if (opStatus[i].getOperationStatusCode() != OperationStatusCode.SUCCESS) {
+  break;
+}
{code}
Should we continue to check status code for the remaining opStatus ?
{code}
+byte[] versionsDeleted = deleteWithLockArr[i].getFirst().getAttribute(
+    NO_OF_VERSIOS_TO_DELETE);
{code}
Typo in the constant name above.
{code}
+int noOfVersiosToDelete = 0;
{code}
Typo in variable name above.
{code}
+  return Bytes.hashCode(this.family) + Bytes.hashCode(this.qualifier);
{code}
Can we come up with a better hash code?

Looking at both approaches, I think using delete template gives us flexibility 
and cleaner code.
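
On the hash code question, one conventional option as a sketch (assuming the class keeps the family/qualifier byte[] fields quoted above; not part of the patch): combine the two hashes asymmetrically so that swapping family and qualifier does not collide.
{code}
@Override
public int hashCode() {
  // Usual 31 * h + x idiom instead of a plain sum of the two hashes.
  int h = Bytes.hashCode(this.family);
  return 31 * h + Bytes.hashCode(this.qualifier);
}
{code}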

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an endpoint implementation for doing a bulk deletion of 
 rows (based on a scan) at the server side. This can reduce the time taken for 
 such an operation, as right now it requires scanning the rows back to the client 
 and issuing delete(s) using the row keys.
 A query like: delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478010#comment-13478010
 ] 

Hadoop QA commented on HBASE-6974:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549522/HBASE-6974-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3064//console

This message is automatically generated.

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7002) Fix all 4 findbug performance warnings

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478020#comment-13478020
 ] 

Ted Yu commented on HBASE-7002:
---

Patch looks good.
Performance warnings are gone from 
https://builds.apache.org/job/PreCommit-HBASE-Build/3063//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html

 Fix all 4 findbug performance warnings
 --

 Key: HBASE-7002
 URL: https://issues.apache.org/jira/browse/HBASE-7002
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7002.patch


 Fix the perf warning from this report : 
 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_PERFORMANCE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478021#comment-13478021
 ] 

Anoop Sam John commented on HBASE-6942:
---

@Jean and @Ted
My mistake with the typos; I will correct them.

With the delete template approach, one thing that is not directly possible is the 
delete-N-versions use case. Lars seems to have such a use case, but I am not sure.
Also, with that approach the user needs to be careful constructing both the Scan and 
the Delete objects. Doesn't that feel like duplicate work?

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an endpoint implementation for doing a bulk deletion of 
 rows (based on a scan) at the server side. This can reduce the time taken for 
 such an operation, as right now it requires scanning the rows back to the client 
 and issuing delete(s) using the row keys.
 A query like: delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478028#comment-13478028
 ] 

Anoop Sam John commented on HBASE-6942:
---

bq.Should we continue to check status code for the remaining opStatus ?
I was following the code in HRS (HRegionServer); Lars also once suggested the same approach.
A SANITY_CHECK_FAILURE won't happen here, and I am also not able to see how we could 
get a FAILURE status.

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an endpoint implementation for doing a bulk deletion of 
 rows (based on a scan) at the server side. This can reduce the time taken for 
 such an operation, as right now it requires scanning the rows back to the client 
 and issuing delete(s) using the row keys.
 A query like: delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478029#comment-13478029
 ] 

Ted Yu commented on HBASE-7001:
---

Patch looks good.

 Fix the RCN Correctness Warning in MemStoreFlusher class
 

 Key: HBASE-7001
 URL: https://issues.apache.org/jira/browse/HBASE-7001
 Project: HBase
  Issue Type: Bug
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7001.patch


 https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS
 shows :
   
 Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details)
 In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher
 In method 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry)
 Value loaded from region
 Return value of 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry)
 At MemStoreFlusher.java:[line 346]
 Redundant null check at MemStoreFlusher.java:[line 363]
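
For context, this findbugs pattern flags the shape sketched below (illustrative code only, not the actual MemStoreFlusher source): the value is dereferenced first, so the later null check is either dead code or the earlier dereference is the real NPE risk.
{code}
// Hypothetical fragment showing the RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE shape.
HRegion region = fqe.region;                               // value loaded from the flush entry
LOG.debug("Flushing " + region.getRegionNameAsString());   // would NPE here if region were null
// ... more work ...
if (region == null) {                                      // redundant: cannot be null at this point
  return false;
}
{code}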

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478038#comment-13478038
 ] 

Ted Yu commented on HBASE-6942:
---

bq. was following the code in HRS
Are you referring to the following code in doBatchOp() ?
{code}
  case SANITY_CHECK_FAILURE:
    result = ResponseConverter.buildActionResult(
        new FailedSanityCheckException(codes[i].getExceptionMsg()));
    builder.setResult(i, result);
    break;
{code}
The above is within a switch statement nested inside a for loop.

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an endpoint implementation for doing a bulk deletion of 
 rows (based on a scan) at the server side. This can reduce the time taken for 
 such an operation, as right now it requires scanning the rows back to the client 
 and issuing delete(s) using the row keys.
 A query like: delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6979) recovered.edits file should not break distributed log splitting

2012-10-17 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-6979:
---

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Stack for the review.  Integrated into trunk.

 recovered.edits file should not break distributed log splitting
 ---

 Key: HBASE-6979
 URL: https://issues.apache.org/jira/browse/HBASE-6979
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: trunk-6979.patch


 Distributed log splitting fails in creating the recovered.edits folder during 
 upgrade because there is a file called recovered.edits there.
 Instead of checking only whether the path exists, we need to check that it exists 
 and is a directory.
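
A minimal sketch of that check using the standard Hadoop FileSystem API (assuming fs and regionDir are in scope; the attached patch may handle the stale file differently):
{code}
Path editsDir = new Path(regionDir, "recovered.edits");
if (fs.exists(editsDir) && !fs.getFileStatus(editsDir).isDir()) {
  // An old layout left a *file* named recovered.edits here; remove it
  // before creating the directory instead of failing the split.
  fs.delete(editsDir, true);
}
if (!fs.exists(editsDir) && !fs.mkdirs(editsDir)) {
  throw new IOException("Failed to create " + editsDir);
}
{code}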

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478060#comment-13478060
 ] 

Anoop Sam John commented on HBASE-6942:
---

This one, Ted:
HRegionServer#put(final byte[] regionName, final List<Put> puts)
{code}
OperationStatus codes[] = region.batchMutate(putsWithLocks);
for (i = 0; i < codes.length; i++) {
  if (codes[i].getOperationStatusCode() != OperationStatusCode.SUCCESS) {
    return i;
  }
}
{code}

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an endpoint implementation for doing a bulk deletion of 
 rows (based on a scan) at the server side. This can reduce the time taken for 
 such an operation, as right now it requires scanning the rows back to the client 
 and issuing delete(s) using the row keys.
 A query like: delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6651) Thread safety of HTablePool is doubtful

2012-10-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6651:
--

Status: Patch Available  (was: Open)

 Thread safety of HTablePool is doubtful
 ---

 Key: HBASE-6651
 URL: https://issues.apache.org/jira/browse/HBASE-6651
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.1
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-6651.patch, HBASE-6651-V2.patch, sample.zip, 
 sample.zip, sharedmap_for_hbaseclient.zip


 There are some operations in HTablePool that access PoolMap multiple 
 times without any explicit synchronization. 
 For example, HTablePool.closeTablePool() calls PoolMap.values() and then calls 
 PoolMap.remove(). If other threads add new instances to the pool in the 
 middle of the calls, the newly added instances might be dropped. 
 (HTablePool.closeTablePool() also has another problem: calling it from 
 multiple threads causes HTable to be accessed by multiple threads.)
 Moreover, PoolMap is not thread safe for the same reason.
 For example, PoolMap.put() calls ConcurrentMap.get() and then calls 
 ConcurrentMap.put(). If other threads add a new instance to the concurrent map 
 in the middle of the calls, the new instance might be dropped.
 The implementations of Pool have the same problems as well.
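
For illustration, the check-then-act race described above and an atomic alternative, in simplified form (not the actual PoolMap source; assumes a String table-name key):
{code}
ConcurrentMap<String, Queue<HTableInterface>> pools =
    new ConcurrentHashMap<String, Queue<HTableInterface>>();

// Racy: another thread's put() between get() and put() is silently overwritten.
Queue<HTableInterface> q = pools.get(tableName);
if (q == null) {
  q = new ConcurrentLinkedQueue<HTableInterface>();
  pools.put(tableName, q);            // may clobber a queue installed concurrently
}

// Atomic alternative: putIfAbsent() keeps whichever queue won the race.
Queue<HTableInterface> fresh = new ConcurrentLinkedQueue<HTableInterface>();
Queue<HTableInterface> existing = pools.putIfAbsent(tableName, fresh);
Queue<HTableInterface> pool = (existing != null) ? existing : fresh;
{code}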

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478068#comment-13478068
 ] 

Ted Yu commented on HBASE-6942:
---

{code}
  public int put(final byte[] regionName, final List<Put> puts)
{code}
The above method exists only in the 0.94 code base. It has no javadoc, but I interpret 
its return value to be the index of the first unsuccessful Put.

 Endpoint implementation for bulk delete rows
 

 Key: HBASE-6942
 URL: https://issues.apache.org/jira/browse/HBASE-6942
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, Performance
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, 
 HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, 
 HBASE-6942_V5.patch, HBASE-6942_V6.patch


 We can provide an endpoint implementation for doing a bulk deletion of 
 rows (based on a scan) at the server side. This can reduce the time taken for 
 such an operation, as right now it requires scanning the rows back to the client 
 and issuing delete(s) using the row keys.
 A query like: delete from table1 where...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6651) Thread safety of HTablePool is doubtful

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478071#comment-13478071
 ] 

Ted Yu commented on HBASE-6651:
---

bq. SharedMap.returnObject() returns false if the pool is already full, or 
invalidateObject() or clear() method has been called explicitly in other place. 
Can you explain why calling clear() followed by returnObject() would result in 
a return value of false? There is space in the SharedMap at that moment, right?

I will put my other comments on the reviewboard.

Hadoop QA is running your patch: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065/parameters/

 Thread safety of HTablePool is doubtful
 ---

 Key: HBASE-6651
 URL: https://issues.apache.org/jira/browse/HBASE-6651
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.1
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-6651.patch, HBASE-6651-V2.patch, sample.zip, 
 sample.zip, sharedmap_for_hbaseclient.zip


 There are some operations in HTablePool that access PoolMap multiple 
 times without any explicit synchronization. 
 For example, HTablePool.closeTablePool() calls PoolMap.values() and then calls 
 PoolMap.remove(). If other threads add new instances to the pool in the 
 middle of the calls, the newly added instances might be dropped. 
 (HTablePool.closeTablePool() also has another problem: calling it from 
 multiple threads causes HTable to be accessed by multiple threads.)
 Moreover, PoolMap is not thread safe for the same reason.
 For example, PoolMap.put() calls ConcurrentMap.get() and then calls 
 ConcurrentMap.put(). If other threads add a new instance to the concurrent map 
 in the middle of the calls, the new instance might be dropped.
 The implementations of Pool have the same problems as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6032) Port HFileBlockIndex improvement from HBASE-5987

2012-10-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478077#comment-13478077
 ] 

Lars Hofhansl commented on HBASE-6032:
--

Awesome. That was quick. Thanks Stack!
I'll test out a bit today and then commit if all is good.


 Port HFileBlockIndex improvement from HBASE-5987
 

 Key: HBASE-6032
 URL: https://issues.apache.org/jira/browse/HBASE-6032
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 6032.094.txt, 6032-ports-5987.txt, 
 6032-ports-5987-v2.txt, 6032v3.txt


 Excerpt from HBASE-5987:
 First, we propose to look ahead by one more block index entry so that the 
 HFileScanner knows the start key of the next data block. If the 
 target key for the scan (reSeekTo) is smaller than the start key of the 
 next data block, the target key is very likely in 
 the current data block (if it is not in the current data block, then the start key of 
 the next data block should be returned; +indexing on the start key has some 
 defects here+), and the scanner shall NOT query the HFileBlockIndex in this case. On 
 the contrary, if the target key is bigger, then it shall query the 
 HFileBlockIndex. This improvement helps reduce the hotness of the 
 HFileBlockIndex and avoids some unnecessary IdLock contention and index block 
 cache lookups.
 This JIRA is to port the fix to HBase trunk, etc.
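
A rough sketch of the reseek decision described in the excerpt (pseudo-Java with hypothetical helper and field names; not the ported patch):
{code}
// nextBlockStartKey is the first key of the block after the one the scanner is on.
int cmp = comparator.compare(targetKey, nextBlockStartKey);
if (cmp < 0) {
  // Target can only live in the currently loaded block: reseek within it and
  // skip the HFileBlockIndex lookup (and its IdLock / block cache traffic) entirely.
  return reseekWithinCurrentBlock(targetKey);
} else {
  // Target is at or beyond the next block: fall back to the normal index lookup.
  return seekThroughBlockIndex(targetKey);
}
{code}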

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7003) Move remaining examples into hbase-examples

2012-10-17 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-7003:
---

 Summary: Move remaining examples into hbase-examples
 Key: HBASE-7003
 URL: https://issues.apache.org/jira/browse/HBASE-7003
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin


There's still a thrift2 directory under the non-built examples; there are also some 
examples noted in the original JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7003) Move remaining examples into hbase-examples

2012-10-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7003:


Labels: noob  (was: )

 Move remaining examples into hbase-examples
 ---

 Key: HBASE-7003
 URL: https://issues.apache.org/jira/browse/HBASE-7003
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
  Labels: noob

 There's still a thrift2 directory under the non-built examples; there are also some 
 examples noted in the original JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7003) Move remaining examples into hbase-examples

2012-10-17 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7003:


Tags:   (was: noob)

 Move remaining examples into hbase-examples
 ---

 Key: HBASE-7003
 URL: https://issues.apache.org/jira/browse/HBASE-7003
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
  Labels: noob

 There's still a thrift2 directory under the non-built examples; there are also some 
 examples noted in the original JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7004) Thrift examples for Ruby/Perl/Python/PHP are outdated and partially broken

2012-10-17 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-7004:
---

 Summary: Thrift examples for Ruby/Perl/Python/PHP are outdated and 
partially broken
 Key: HBASE-7004
 URL: https://issues.apache.org/jira/browse/HBASE-7004
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Priority: Minor


A long time ago, the Java and C++ examples for Thrift were fixed to take 
command-line parameters instead of hardcoding localhost:9090; the other examples 
were never updated with that change.
All the examples run (as of HBASE-6793 at least), but most of them bail when 
they expect an operation to fail and it succeeds.

I added command-line args for the Ruby example in HBASE-6793 before deciding to 
spin off this JIRA to make all the examples work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478089#comment-13478089
 ] 

Lars Hofhansl commented on HBASE-6974:
--

What I meant was something like this:
{code}
 boolean blocked = false;
+long startTime = 0;
 while (this.memstoreSize.get() > this.blockingMemStoreSize) {
   requestFlush();
   if (!blocked) {
+startTime = EnvironmentEdgeManager.currentTimeMillis();
{code}

That way we only call currentTimeMillis when we're actually blocking.
I think in the MemstoreFlusher we should have the same logic with the blocked 
flag, otherwise we'd get inundated with log messages that we need to flush.
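
For completeness, the fuller shape of this suggestion might look like the sketch below (updateBlockedRequests and the wait call are hypothetical placeholders, not the attached patch):
{code}
boolean blocked = false;
long startTime = 0;
while (this.memstoreSize.get() > this.blockingMemStoreSize) {
  requestFlush();
  if (!blocked) {
    // Pay for currentTimeMillis only once we actually start blocking.
    startTime = EnvironmentEdgeManager.currentTimeMillis();
    blocked = true;
    LOG.info("Blocking updates on " + Bytes.toStringBinary(getRegionName())
        + ": memstore size above blocking threshold");
  }
  waitForFlushToRelieveMemstorePressure();   // hypothetical: park until a flush frees space
}
if (blocked) {
  long timeBlockedMs = EnvironmentEdgeManager.currentTimeMillis() - startTime;
  this.metrics.updateBlockedRequests(timeBlockedMs);  // hypothetical metric hook
}
{code}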


 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478093#comment-13478093
 ] 

Michael Drzal commented on HBASE-6974:
--

Sure, that works as long as you are ok with missing any time used by the 
initial call to requestFlush.

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6965:
--

Attachment: (was: HBASE-6965.patch)

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all the three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.
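
For illustration, the reflection technique described above might look like the standalone sketch below (hypothetical class name, not the attached OSMXBean patch):
{code}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

public final class OsFdProbe {
  // Avoids a compile-time dependency on com.sun.management.UnixOperatingSystemMXBean,
  // which is absent on non-Oracle/OpenJDK JVMs such as IBM J9.
  public static long openFileDescriptorCount() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    try {
      Method m = os.getClass().getMethod("getOpenFileDescriptorCount");
      m.setAccessible(true);                 // implementation class is not public
      return ((Long) m.invoke(os)).longValue();
    } catch (Exception e) {
      return -1;                             // method not available on this JVM
    }
  }
}
{code}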

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6965:
--

Attachment: HBASE-6965.patch

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all the three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6965:
--

Status: Patch Available  (was: Open)

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all the three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6965:
--

Status: Open  (was: Patch Available)

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all the three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6651) Thread safety of HTablePool is doubtful

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478198#comment-13478198
 ] 

Hadoop QA commented on HBASE-6651:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549457/HBASE-6651-V2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
83 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3065//console

This message is automatically generated.

 Thread safety of HTablePool is doubtful
 ---

 Key: HBASE-6651
 URL: https://issues.apache.org/jira/browse/HBASE-6651
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.1
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: HBASE-6651.patch, HBASE-6651-V2.patch, sample.zip, 
 sample.zip, sharedmap_for_hbaseclient.zip


 There are some operations in HTablePool that access PoolMap multiple 
 times without any explicit synchronization. 
 For example, HTablePool.closeTablePool() calls PoolMap.values() and then calls 
 PoolMap.remove(). If other threads add new instances to the pool in the 
 middle of the calls, the newly added instances might be dropped. 
 (HTablePool.closeTablePool() also has another problem: calling it from 
 multiple threads causes HTable to be accessed by multiple threads.)
 Moreover, PoolMap is not thread safe for the same reason.
 For example, PoolMap.put() calls ConcurrentMap.get() and then calls 
 ConcurrentMap.put(). If other threads add a new instance to the concurrent map 
 in the middle of the calls, the new instance might be dropped.
 The implementations of Pool have the same problems as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6980) Parallel Flushing Of Memstores

2012-10-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478212#comment-13478212
 ] 

Todd Lipcon commented on HBASE-6980:


If I remember correctly, there is a reason for the flush marker: it ensures 
that the RS hasn't been fenced on HDFS -- i.e., that it hasn't lost its 
connection to ZK and already had its log splitting started.

The reason this is important is that, otherwise, it could move on to delete old 
log segments, which would potentially break the log split process.

It may be that the locking can be more lax, though.

 Parallel Flushing Of Memstores
 --

 Key: HBASE-6980
 URL: https://issues.apache.org/jira/browse/HBASE-6980
 Project: HBase
  Issue Type: New Feature
Reporter: Kannan Muthukkaruppan
Assignee: Kannan Muthukkaruppan

 For write dominated workloads, single threaded memstore flushing is an 
 unnecessary bottleneck. With a single flusher thread, we are basically not 
 setup to take advantage of the aggregate throughput that multi-disk nodes 
 provide.
 * For puts with WAL enabled, the bottleneck is more likely the single WAL 
 per region server. So this particular fix may not buy as much unless we 
 unlock that bottleneck with multiple commit logs per region server. (Topic 
 for a separate JIRA-- HBASE-6981).
 * But for puts with WAL disabled (e.g., when using HBASE-5783 style fast bulk 
 imports), we should be able to support much better ingest rates with parallel 
 flushing of memstores.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6979) recovered.edits file should not break distributed log splitting

2012-10-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478243#comment-13478243
 ] 

Hudson commented on HBASE-6979:
---

Integrated in HBase-TRUNK #3452 (See 
[https://builds.apache.org/job/HBase-TRUNK/3452/])
HBASE-6979 recovered.edits file should not break distributed log splitting 
(Revision 1399352)

 Result = FAILURE
jxiang : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java


 recovered.edits file should not break distributed log splitting
 ---

 Key: HBASE-6979
 URL: https://issues.apache.org/jira/browse/HBASE-6979
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: trunk-6979.patch


 Distributed log splitting fails in creating the recovered.edits folder during 
 upgrade because there is a file called recovered.edits there.
 Instead of checking only whether the path exists, we need to check that it exists 
 and is a directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4435) Add Group By functionality using Coprocessors

2012-10-17 Thread Aaron Tokhy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Tokhy updated HBASE-4435:
---

Attachment: HBASE-4435-v2.patch

I have a newer version of the patch:

Improvements:

1) Added implementations of ColumnInterpreter classes so both AggregationClient 
and GroupByClient could perform aggregations on Long, Short, Integer, Double, 
Float, Character (or unsigned short), and BigDecimal types.

2) The GroupByStatsValues class is a Java generic whose type parameter is bounded 
to types that extend 'Number'.  This way the constraint is enforced 
at compile time.

3) Previously, a HashMap was returned at the end of each RPC call.  HashMap 
uses java.io.Serializable, which is relatively heavyweight.  Switched to the 
Hadoop Writable interface, so all objects passed between clients and 
regionservers now go over the wire as Writables.

4) Fixed some validateParameter bugs in the previous patch which would allow 
selections of column qualifiers not found in the Scan object to go through.

Caveats:

1) This works well if your result set fits into memory, as group-by values are 
aggregated into a HashMap on the client.  Therefore, if the cardinality of the 
grouping is too high, you may get an OOME.

2) All aggregations are calculated by the 'GroupByStatsValues' container.  
Perhaps at object construction, a 'statsvalues' can be constructed to only 
perform some of the aggregations instead of all of them at the same time.  
However this operation is Scan (IO) bound, so improvements would be minimal 
here.

3) Like all coprocessors that accept a Scan object, if the aggregation is 
performing a full table scan, this will run on all regionservers.  Each region 
level coprocessor is loaded into an IPC handler (default of 10) on the 
regionserver.  If the regionserver has more regions than IPC handlers, only 10 
group by operations will run at a time.

Depending on your table schema, region size and blockCacheHitRatio, your 
mileage may vary.  If data can be preaggregated for a group by operation, this 
patch would be handy for aggregating a single column value projection of the 
original full table.  A column oriented representation of the original table 
would work well in this case, or possibly a client/coprocessor managed 
secondary index.

The patch applies cleanly onto HBase 0.92.1 and HBase 0.94.1.
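
For illustration, a much-simplified shape of points 2) and 3) above might look like this (hypothetical class, not the attached patch): a per-group stats holder bounded to Number and shipped over RPC as a Writable.
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class StatsValues<T extends Number> implements Writable {
  private double sum;
  private double min = Double.MAX_VALUE;
  private double max = -Double.MAX_VALUE;
  private long count;

  // Fold one cell value from the stats column into the running aggregates.
  public void accumulate(T value) {
    double v = value.doubleValue();
    sum += v;
    min = Math.min(min, v);
    max = Math.max(max, v);
    count++;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeDouble(sum);
    out.writeDouble(min);
    out.writeDouble(max);
    out.writeLong(count);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    sum = in.readDouble();
    min = in.readDouble();
    max = in.readDouble();
    count = in.readLong();
  }
}
{code}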

 Add Group By functionality using Coprocessors
 -

 Key: HBASE-4435
 URL: https://issues.apache.org/jira/browse/HBASE-4435
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Reporter: Nichole Treadway
Priority: Minor
 Attachments: HBase-4435.patch, HBASE-4435-v2.patch


 Adds Group By-like functionality to HBase, using the Coprocessor 
 framework. 
 It provides the ability to group the result set on one or more columns 
 (groupBy families). It computes statistics (max, min, sum, count, sum of 
 squares, number missing) for a second column, called the stats column. 
 To use, I've provided two implementations.
 1. In the first, you specify a single group-by column and a stats field:
   statsMap = gbc.getStats(tableName, scan, groupByFamily, 
 groupByQualifier, statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The result is a map with the Group By column value (as a String) to a 
 GroupByStatsValues object. The GroupByStatsValues object has max,min,sum etc. 
 of the stats column for that group.
 2. The second implementation allows you to specify a list of group-by columns 
 and a stats field. The List of group-by columns is expected to contain lists 
 of {column family, qualifier} pairs. 
   statsMap = gbc.getStats(tableName, scan, listOfGroupByColumns, 
 statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The GroupByStatsValues code is adapted from the Solr Stats component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5525) Truncate and preserve region boundaries option

2012-10-17 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478248#comment-13478248
 ] 

Gregory Chanan commented on HBASE-5525:
---

Looks pretty good.

Maybe truncate_preservebounds instead of truncate_preserve (so it is clear what 
you are preserving)?  Or make preserve a parameter to truncate, default false?  
(the later would be my personal preference).

Also, you could avoid duplicating some code if you had truncate and 
truncate_preserve call a common function (or just have preserve be a parameter 
to truncate, as above).

 Truncate and preserve region boundaries option
 --

 Key: HBASE-5525
 URL: https://issues.apache.org/jira/browse/HBASE-5525
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Assignee: Kevin Odell
  Labels: newbie, noob
 Fix For: 0.96.0

 Attachments: HBASE-5525.patch


 A tool that would be useful for testing (and maybe in prod too) would be a 
 truncate option to keep the current region boundaries. Right now what you 
 have to do is completely kill the table and recreate it with the correct 
 regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6032) Port HFileBlockIndex improvement from HBASE-5987

2012-10-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478252#comment-13478252
 ] 

Lars Hofhansl commented on HBASE-6032:
--

I ran the relevant tests, and also used it with some other test cases I have. (The 
setup was with a local HDFS, so I didn't observe any performance benefits.)


 Port HFileBlockIndex improvement from HBASE-5987
 

 Key: HBASE-6032
 URL: https://issues.apache.org/jira/browse/HBASE-6032
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 6032.094.txt, 6032-ports-5987.txt, 
 6032-ports-5987-v2.txt, 6032v3.txt


 Excerpt from HBASE-5987:
 First, we propose to lookahead for one more block index so that the 
 HFileScanner would know the start key value of next data block. So if the 
 target key value for the scan(reSeekTo) is smaller than that start kv of 
 next data block, it means the target key value has a very high possibility in 
 the current data block (if not in current data block, then the start kv of 
 next data block should be returned. +Indexing on the start key has some 
 defects here+) and it shall NOT query the HFileBlockIndex in this case. On 
 the contrary, if the target key value is bigger, then it shall query the 
 HFileBlockIndex. This improvement shall help to reduce the hotness of 
 HFileBlockIndex and avoid some unnecessary IdLock Contention or Index Block 
 Cache lookup.
 This JIRA is to port the fix to HBase trunk, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4435) Add Group By functionality using Coprocessors

2012-10-17 Thread Aaron Tokhy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Tokhy updated HBASE-4435:
---

Labels: by coprocessors group hbase  (was: )

 Add Group By functionality using Coprocessors
 -

 Key: HBASE-4435
 URL: https://issues.apache.org/jira/browse/HBASE-4435
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Reporter: Nichole Treadway
Priority: Minor
  Labels: by, coprocessors, group, hbase
 Attachments: HBase-4435.patch, HBASE-4435-v2.patch


 Adds Group By-like functionality to HBase, using the Coprocessor 
 framework. 
 It provides the ability to group the result set on one or more columns 
 (groupBy families). It computes statistics (max, min, sum, count, sum of 
 squares, number missing) for a second column, called the stats column. 
 To use, I've provided two implementations.
 1. In the first, you specify a single group-by column and a stats field:
   statsMap = gbc.getStats(tableName, scan, groupByFamily, 
 groupByQualifier, statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The result is a map with the Group By column value (as a String) to a 
 GroupByStatsValues object. The GroupByStatsValues object has max,min,sum etc. 
 of the stats column for that group.
 2. The second implementation allows you to specify a list of group-by columns 
 and a stats field. The List of group-by columns is expected to contain lists 
 of {column family, qualifier} pairs. 
   statsMap = gbc.getStats(tableName, scan, listOfGroupByColumns, 
 statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The GroupByStatsValues code is adapted from the Solr Stats component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Attachment: HBASE-6974-v3.patch

Updated patch, moving currentTimeMillis into the while loop.

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478260#comment-13478260
 ] 

Michael Drzal commented on HBASE-6974:
--

Let me know if that works.

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Status: Open  (was: Patch Available)

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Status: Patch Available  (was: Open)

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478264#comment-13478264
 ] 

Hadoop QA commented on HBASE-6965:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549540/HBASE-6965.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 9 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3066//console

This message is automatically generated.

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.
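
 As a rough illustration of the reflection technique described above (class and 
 method names here are placeholders, not the actual OSMXBean API from the patch):

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

// Illustrative sketch only; the real OSMXBean may differ in naming and detail.
public class FdStatsSketch {
  private final OperatingSystemMXBean osBean =
      ManagementFactory.getOperatingSystemMXBean();

  // Vendor/platform checks via system properties, so no vendor-specific imports.
  public String vmVendor() {
    return System.getProperty("java.vm.vendor");   // e.g. Oracle/Sun vs IBM
  }

  public boolean isWindows() {
    return System.getProperty("os.name").toLowerCase().startsWith("windows");
  }

  // Invoke getOpenFileDescriptorCount()/getMaxFileDescriptorCount() reflectively
  // so the class still compiles on JDKs without com.sun.management classes.
  private long callLongGetter(String methodName) {
    try {
      Method m = osBean.getClass().getMethod(methodName);
      m.setAccessible(true);
      return ((Long) m.invoke(osBean)).longValue();
    } catch (Exception e) {
      return -1;   // getter not available on this JVM
    }
  }

  public long openFileDescriptorCount() {
    return callLongGetter("getOpenFileDescriptorCount");
  }

  public long maxFileDescriptorCount() {
    return callLongGetter("getMaxFileDescriptorCount");
  }
}
{code}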

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5525) Truncate and preserve region boundaries option

2012-10-17 Thread Ricky Saltzer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478283#comment-13478283
 ] 

Ricky Saltzer commented on HBASE-5525:
--

Would it make more sense to have one truncate command with an optional 
parameter that preserves the region boundaries, rather than having separate 
commands for truncating and for truncating while preserving region keys?

 Truncate and preserve region boundaries option
 --

 Key: HBASE-5525
 URL: https://issues.apache.org/jira/browse/HBASE-5525
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Assignee: Kevin Odell
  Labels: newbie, noob
 Fix For: 0.96.0

 Attachments: HBASE-5525.patch


 A tool that would be useful for testing (and maybe in prod too) would be a 
 truncate option to keep the current region boundaries. Right now what you 
 have to do is completely kill the table and recreate it with the correct 
 regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5525) Truncate and preserve region boundaries option

2012-10-17 Thread Ricky Saltzer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478289#comment-13478289
 ] 

Ricky Saltzer commented on HBASE-5525:
--

+1 Greg, something like that would be great, along with updating the docs to 
reflect it:

{noformat}
truncate 'users_table', {PRESERVE => true}
{noformat}
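
For reference, roughly what a boundary-preserving truncate has to do under the 
hood with the Java client API of this era; this is only a sketch of the idea, 
not the proposed shell implementation:

{code}
// Sketch only: capture the current region boundaries, drop the table, and
// recreate it with the same split keys.
HBaseAdmin admin = new HBaseAdmin(conf);
HTable table = new HTable(conf, "users_table");
HTableDescriptor desc = table.getTableDescriptor();
byte[][] startKeys = table.getStartKeys();     // first entry is the empty start key
byte[][] splits = Arrays.copyOfRange(startKeys, 1, startKeys.length);
table.close();

admin.disableTable("users_table");
admin.deleteTable("users_table");
admin.createTable(desc, splits);               // recreated with the same boundaries
{code}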

 Truncate and preserve region boundaries option
 --

 Key: HBASE-5525
 URL: https://issues.apache.org/jira/browse/HBASE-5525
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Assignee: Kevin Odell
  Labels: newbie, noob
 Fix For: 0.96.0

 Attachments: HBASE-5525.patch


 A tool that would be useful for testing (and maybe in prod too) would be a 
 truncate option to keep the current region boundaries. Right now what you 
 have to do is completely kill the table and recreate it with the correct 
 regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-5525) Truncate and preserve region boundaries option

2012-10-17 Thread Kevin Odell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Odell updated HBASE-5525:
---

Status: Open  (was: Patch Available)

 Truncate and preserve region boundaries option
 --

 Key: HBASE-5525
 URL: https://issues.apache.org/jira/browse/HBASE-5525
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.96.0
Reporter: Jean-Daniel Cryans
Assignee: Kevin Odell
  Labels: newbie, noob
 Fix For: 0.96.0

 Attachments: HBASE-5525.patch


 A tool that would be useful for testing (and maybe in prod too) would be a 
 truncate option to keep the current region boundaries. Right now what you 
 have to do is completely kill the table and recreate it with the correct 
 regions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478305#comment-13478305
 ] 

Ted Yu commented on HBASE-6965:
---

Latest patch looks good.

@N:
Do you want to take a look ?

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0

2012-10-17 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478309#comment-13478309
 ] 

Roman Shaposhnik commented on HBASE-6929:
-

Aren't you guys already publishing secure/unsecure bits? Adding a dedicated 
version would make it 4 then.

 Publish Hbase 0.94 artifacts build against hadoop-2.0
 -

 Key: HBASE-6929
 URL: https://issues.apache.org/jira/browse/HBASE-6929
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 0.94.2
Reporter: Enis Soztutar
 Attachments: 6929.txt, hbase-6929_v2.patch


 Downstream projects (flume, hive, pig, etc) depends on hbase, but since the 
 hbase binaries build with hadoop-2.0 are not pushed to maven, they cannot 
 depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should 
 also push hbase jars build with hadoop2.0 profile into maven, possibly with 
 version string like 0.94.2-hadoop2.0. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4435) Add Group By functionality using Coprocessors

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478311#comment-13478311
 ] 

Ted Yu commented on HBASE-4435:
---

Thanks for the patch.
Can you provide a trunk patch following the example of:
HBASE-6785 'Convert AggregateProtocol to protobuf defined coprocessor service'

Will provide comments soon.

For a patch of this size, Review Board (https://reviews.apache.org) would help 
reviewers.

 Add Group By functionality using Coprocessors
 -

 Key: HBASE-4435
 URL: https://issues.apache.org/jira/browse/HBASE-4435
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Reporter: Nichole Treadway
Priority: Minor
  Labels: by, coprocessors, group, hbase
 Attachments: HBase-4435.patch, HBASE-4435-v2.patch


 Adds in a Group By -like functionality to HBase, using the Coprocessor 
 framework. 
 It provides the ability to group the result set on one or more columns 
 (groupBy families). It computes statistics (max, min, sum, count, sum of 
 squares, number missing) for a second column, called the stats column. 
 To use, I've provided two implementations.
 1. In the first, you specify a single group-by column and a stats field:
   statsMap = gbc.getStats(tableName, scan, groupByFamily, 
 groupByQualifier, statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The result is a map with the Group By column value (as a String) to a 
 GroupByStatsValues object. The GroupByStatsValues object has max,min,sum etc. 
 of the stats column for that group.
 2. The second implementation allows you to specify a list of group-by columns 
 and a stats field. The List of group-by columns is expected to contain lists 
 of {column family, qualifier} pairs. 
   statsMap = gbc.getStats(tableName, scan, listOfGroupByColumns, 
 statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The GroupByStatsValues code is adapted from the Solr Stats component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Kumar Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kumar Ravi updated HBASE-6965:
--

Attachment: OSMXBean_HBASE-6965-0.94.patch

Submitting patch for the 0.94.x release. 
Differences from trunk patch:

1. Path names for OSMXBean.java are different.
2. No changes were required for pom.xml

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch, OSMXBean_HBASE-6965-0.94.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Jake Farrell (JIRA)
Jake Farrell created HBASE-7005:
---

 Summary: Upgrade Thrift lib to 0.9.0
 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478320#comment-13478320
 ] 

Hadoop QA commented on HBASE-6974:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549554/HBASE-6974-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3067//console

This message is automatically generated.

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Jake Farrell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jake Farrell updated HBASE-7005:


Attachment: Hbase-7005.patch

Patch to update Thrift lib from 0.8.0 to latest release 0.9.0

 Upgrade Thrift lib to 0.9.0
 ---

 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor
 Attachments: Hbase-7005.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0

2012-10-17 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478334#comment-13478334
 ] 

Enis Soztutar commented on HBASE-6929:
--

Since classifiers did not work, I could not find any way other than 
changing the version number for publishing the jars. I think we haven't 
published secure jars yet. 

 Publish Hbase 0.94 artifacts build against hadoop-2.0
 -

 Key: HBASE-6929
 URL: https://issues.apache.org/jira/browse/HBASE-6929
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 0.94.2
Reporter: Enis Soztutar
 Attachments: 6929.txt, hbase-6929_v2.patch


 Downstream projects (flume, hive, pig, etc) depends on hbase, but since the 
 hbase binaries build with hadoop-2.0 are not pushed to maven, they cannot 
 depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should 
 also push hbase jars build with hadoop2.0 profile into maven, possibly with 
 version string like 0.94.2-hadoop2.0. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478339#comment-13478339
 ] 

Todd Lipcon commented on HBASE-7005:


Makes sense. Let's do this in trunk only, since the pom dependency change can 
hurt people's compatibility in earlier versions?

 Upgrade Thrift lib to 0.9.0
 ---

 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor
 Attachments: Hbase-7005.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4435) Add Group By functionality using Coprocessors

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478351#comment-13478351
 ] 

Ted Yu commented on HBASE-4435:
---

I didn't find any test in the patch. It would be difficult for a feature to be 
accepted without new tests.
Should GroupByStatsValues be named GroupByStats (since stats imply some values)?

{code}
+ * Copyright 2012 The Apache Software Foundation
{code}
The above line is no longer needed in license header.

BigDecimalColumnInterpreter is covered in HBASE-6669. To make the workload 
reasonable for this JIRA, you can exclude it from the patch.
{code}
+public class CharacterColumnInterpreter implements ColumnInterpreter<Character, Character> {
{code}
Add annotation for audience and stability for public classes.

In GroupByClient.java, the following import can be removed:
{code}
+import com.sun.istack.logging.Logger;
{code}
{code}
+Map<Text, GroupByStatsValues<T, S>> getStats(
+  final byte[] tableName, final Scan scan, 
+  final List<byte [][]> groupByTuples, final byte[][] statsTuple, 
{code}
The @param tags for the above method don't match the actual parameters - probably 
you changed the API in a later iteration.
{code}
+class RowNumCallback implements
{code}
The above class can be made private.
I think we should find a better name for the above class - it does aggregation.
{code}
+long bt = System.currentTimeMillis();
{code}
Please use EnvironmentEdge instead.
{code}
+table.close();
{code}
Please enclose the above in a finally clause.
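
For the last two points, a minimal illustration of what is being asked for 
(assuming the usual HTable and EnvironmentEdgeManager APIs; the surrounding 
code is made up, not taken from the patch):

{code}
HTable table = new HTable(conf, tableName);
try {
  // use EnvironmentEdgeManager rather than System.currentTimeMillis()
  long bt = EnvironmentEdgeManager.currentTimeMillis();
  // ... issue the group-by coprocessor calls here ...
} finally {
  table.close();   // closed even if the coprocessor call throws
}
{code}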

 Add Group By functionality using Coprocessors
 -

 Key: HBASE-4435
 URL: https://issues.apache.org/jira/browse/HBASE-4435
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Reporter: Nichole Treadway
Priority: Minor
  Labels: by, coprocessors, group, hbase
 Attachments: HBase-4435.patch, HBASE-4435-v2.patch


 Adds in a Group By -like functionality to HBase, using the Coprocessor 
 framework. 
 It provides the ability to group the result set on one or more columns 
 (groupBy families). It computes statistics (max, min, sum, count, sum of 
 squares, number missing) for a second column, called the stats column. 
 To use, I've provided two implementations.
 1. In the first, you specify a single group-by column and a stats field:
   statsMap = gbc.getStats(tableName, scan, groupByFamily, 
 groupByQualifier, statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The result is a map with the Group By column value (as a String) to a 
 GroupByStatsValues object. The GroupByStatsValues object has max,min,sum etc. 
 of the stats column for that group.
 2. The second implementation allows you to specify a list of group-by columns 
 and a stats field. The List of group-by columns is expected to contain lists 
 of {column family, qualifier} pairs. 
   statsMap = gbc.getStats(tableName, scan, listOfGroupByColumns, 
 statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The GroupByStatsValues code is adapted from the Solr Stats component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0

2012-10-17 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478354#comment-13478354
 ] 

Roman Shaposhnik commented on HBASE-6929:
-

So, isn't the lack of secure jars as much a problem as the lack of the ones 
compiled against different versions of Hadoop? I'm just trying to understand 
the scope of different 'twists' of HBase that one would have to know/care about.

 Publish Hbase 0.94 artifacts build against hadoop-2.0
 -

 Key: HBASE-6929
 URL: https://issues.apache.org/jira/browse/HBASE-6929
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 0.94.2
Reporter: Enis Soztutar
 Attachments: 6929.txt, hbase-6929_v2.patch


 Downstream projects (flume, hive, pig, etc) depends on hbase, but since the 
 hbase binaries build with hadoop-2.0 are not pushed to maven, they cannot 
 depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should 
 also push hbase jars build with hadoop2.0 profile into maven, possibly with 
 version string like 0.94.2-hadoop2.0. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7005:
--

Status: Patch Available  (was: Open)

 Upgrade Thrift lib to 0.9.0
 ---

 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor
 Attachments: Hbase-7005.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6965) Generic MXBean Utility class to support all JDK vendors

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478369#comment-13478369
 ] 

Hadoop QA commented on HBASE-6965:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12549565/OSMXBean_HBASE-6965-0.94.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3068//console

This message is automatically generated.

 Generic MXBean Utility class to support all JDK vendors
 ---

 Key: HBASE-6965
 URL: https://issues.apache.org/jira/browse/HBASE-6965
 Project: HBase
  Issue Type: Improvement
  Components: build
Affects Versions: 0.94.1
Reporter: Kumar Ravi
Assignee: Kumar Ravi
  Labels: patch
 Fix For: 0.94.3

 Attachments: HBASE-6965.patch, OSMXBean_HBASE-6965-0.94.patch


 This issue is related to JIRA 
 https://issues.apache.org/jira/browse/HBASE-6945. This issue is opened to 
 propose the use of a newly created generic 
 org.apache.hadoop.hbase.util.OSMXBean class that can be used by other 
 classes. JIRA HBASE-6945 contains a patch for the class 
 org.apache.hadoop.hbase.ResourceChecker that uses OSMXBean. With the 
 inclusion of this new class, HBase can be built and become functional with 
 JDKs and JREs other than what is provided by Oracle.
  This class uses reflection to determine the JVM vendor (Sun, IBM) and the 
 platform (Linux or Windows), and contains other methods that return the OS 
 properties - 1. Number of Open File descriptors;  2. Maximum number of File 
 Descriptors.
  This class compiles without any problems with IBM JDK 7, OpenJDK 6 as well 
 as Oracle JDK 6. Junit tests (runDevTests category) completed without any 
 failures or errors when tested on all three JDKs. The builds and tests 
 were attempted on branch hbase-0.94 Revision 1396305.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-6948) shell create table script cannot handle split key which is expressed in raw bytes

2012-10-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-6948:
-

Assignee: Tianying Chang

 shell create table script cannot handle split key which is expressed in raw 
 bytes
 -

 Key: HBASE-6948
 URL: https://issues.apache.org/jira/browse/HBASE-6948
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.2
Reporter: Ted Yu
Assignee: Tianying Chang
 Fix For: 0.96.0

 Attachments: HBASE-6948.patch, HBASE-6948-trunk.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6948) shell create table script cannot handle split key which is expressed in raw bytes

2012-10-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-6948:
--

Fix Version/s: 0.96.0

Integrated to trunk.

Thanks for the review, Stack and Ram.

 shell create table script cannot handle split key which is expressed in raw 
 bytes
 -

 Key: HBASE-6948
 URL: https://issues.apache.org/jira/browse/HBASE-6948
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.2
Reporter: Ted Yu
Assignee: Tianying Chang
 Fix For: 0.96.0

 Attachments: HBASE-6948.patch, HBASE-6948-trunk.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7000) Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class

2012-10-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478391#comment-13478391
 ] 

Ted Yu commented on HBASE-7000:
---

patch looks good.
nit: insert space before 1:
{code}
+  public static final int MAXIMUM_VALUE_LENGTH = Integer.MAX_VALUE -1;
{code}

 Fix the INT_VACUOUS_COMPARISON WARNING in KeyValue class
 --

 Key: HBASE-7000
 URL: https://issues.apache.org/jira/browse/HBASE-7000
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: liang xie
Assignee: liang xie
Priority: Minor
 Attachments: HBASE-7000.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6793) Make hbase-examples module

2012-10-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478396#comment-13478396
 ] 

Sergey Shelukhin commented on HBASE-6793:
-

I'll squash the changes and add final patch here when done with that review.

 Make hbase-examples module
 --

 Key: HBASE-6793
 URL: https://issues.apache.org/jira/browse/HBASE-6793
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Enis Soztutar
Assignee: Sergey Shelukhin
  Labels: noob
 Attachments: HBASE-6793.patch


 There are some examples under /examples/, which are not compiled as a part of 
 the build. We can move them to an hbase-examples module.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6793) Make hbase-examples module

2012-10-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478394#comment-13478394
 ] 

Sergey Shelukhin commented on HBASE-6793:
-

Updated the review at https://reviews.apache.org/r/7626/, created the jiras

 Make hbase-examples module
 --

 Key: HBASE-6793
 URL: https://issues.apache.org/jira/browse/HBASE-6793
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Enis Soztutar
Assignee: Sergey Shelukhin
  Labels: noob
 Attachments: HBASE-6793.patch


 There are some examples under /examples/, which are not compiled as a part of 
 the build. We can move them to an hbase-examples module.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4435) Add Group By functionality using Coprocessors

2012-10-17 Thread Aaron Tokhy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478408#comment-13478408
 ] 

Aaron Tokhy commented on HBASE-4435:


Thanks for the quick review! I'll update the JIRA with a new patch based off of 
SVN trunk, though not right away, since I'll also have to clean up some of the 
code.

I may also change a few other things, such as using HashedBytes instead of Text 
to be able to perform roll-ups of types other than UTF-8 strings.

 Add Group By functionality using Coprocessors
 -

 Key: HBASE-4435
 URL: https://issues.apache.org/jira/browse/HBASE-4435
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors
Reporter: Nichole Treadway
Priority: Minor
  Labels: by, coprocessors, group, hbase
 Attachments: HBase-4435.patch, HBASE-4435-v2.patch


 Adds in a Group By -like functionality to HBase, using the Coprocessor 
 framework. 
 It provides the ability to group the result set on one or more columns 
 (groupBy families). It computes statistics (max, min, sum, count, sum of 
 squares, number missing) for a second column, called the stats column. 
 To use, I've provided two implementations.
 1. In the first, you specify a single group-by column and a stats field:
   statsMap = gbc.getStats(tableName, scan, groupByFamily, 
 groupByQualifier, statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The result is a map with the Group By column value (as a String) to a 
 GroupByStatsValues object. The GroupByStatsValues object has max,min,sum etc. 
 of the stats column for that group.
 2. The second implementation allows you to specify a list of group-by columns 
 and a stats field. The List of group-by columns is expected to contain lists 
 of {column family, qualifier} pairs. 
   statsMap = gbc.getStats(tableName, scan, listOfGroupByColumns, 
 statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The GroupByStatsValues code is adapted from the Solr Stats component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0

2012-10-17 Thread Jarek Jarcec Cecho (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478415#comment-13478415
 ] 

Jarek Jarcec Cecho commented on HBASE-6929:
---

Another possible solution would be to provide an hbase-test artifact with a 
classifier of either hadoop1 or hadoop2, depending on which version it was 
compiled against. This would probably be a step back, since Hadoop, for example, 
was using it in the past and has since moved to classifiers, but it should work 
without the need for a special version of the testing artifact compiled against 
hadoop2.

Jarcec
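
To make the two publishing options in this thread concrete, a hypothetical 
downstream pom.xml fragment (coordinates and versions are illustrative only):

{code}
<!-- Option A: a dedicated version string for the hadoop2 build -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.2-hadoop2.0</version>
</dependency>

<!-- Option B: the same version, with the hadoop2 build selected via a classifier -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.2</version>
  <classifier>hadoop2</classifier>
</dependency>
{code}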

 Publish Hbase 0.94 artifacts build against hadoop-2.0
 -

 Key: HBASE-6929
 URL: https://issues.apache.org/jira/browse/HBASE-6929
 Project: HBase
  Issue Type: Task
  Components: build
Affects Versions: 0.94.2
Reporter: Enis Soztutar
 Attachments: 6929.txt, hbase-6929_v2.patch


 Downstream projects (flume, hive, pig, etc) depends on hbase, but since the 
 hbase binaries build with hadoop-2.0 are not pushed to maven, they cannot 
 depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should 
 also push hbase jars build with hadoop2.0 profile into maven, possibly with 
 version string like 0.94.2-hadoop2.0. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Status: Open  (was: Patch Available)

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Attachment: HBASE-6974-v4.patch

fixing a test failure

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch, HBASE-6974-v4.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Michael Drzal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Drzal updated HBASE-6974:
-

Status: Patch Available  (was: Open)

 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch, HBASE-6974-v4.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478427#comment-13478427
 ] 

Hadoop QA commented on HBASE-7005:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549566/Hbase-7005.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
82 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3069//console

This message is automatically generated.

 Upgrade Thrift lib to 0.9.0
 ---

 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor
 Attachments: Hbase-7005.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6948) shell create table script cannot handle split key which is expressed in raw bytes

2012-10-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478440#comment-13478440
 ] 

Hudson commented on HBASE-6948:
---

Integrated in HBase-TRUNK #3453 (See 
[https://builds.apache.org/job/HBase-TRUNK/3453/])
HBASE-6948 shell create table script cannot handle split key which is 
expressed in raw bytes (Tianying) (Revision 1399429)

 Result = FAILURE
tedyu : 
Files : 
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb


 shell create table script cannot handle split key which is expressed in raw 
 bytes
 -

 Key: HBASE-6948
 URL: https://issues.apache.org/jira/browse/HBASE-6948
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.2
Reporter: Ted Yu
Assignee: Tianying Chang
 Fix For: 0.96.0

 Attachments: HBASE-6948.patch, HBASE-6948-trunk.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7005) Upgrade Thrift lib to 0.9.0

2012-10-17 Thread Jake Farrell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478450#comment-13478450
 ] 

Jake Farrell commented on HBASE-7005:
-

The patch uses the existing test cases, since it only upgrades an existing library. 
I ran the tests and all passed, including TestZooKeeperTableArchiveClient:

Running org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.909 sec

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] HBase . SUCCESS [1.786s]
[INFO] HBase - Common  SUCCESS [7.448s]
[INFO] HBase - Hadoop Compatibility .. SUCCESS [0.497s]
[INFO] HBase - Hadoop One Compatibility .. SUCCESS [0.988s]
[INFO] HBase - Server  SUCCESS [40:31.865s]
[INFO] HBase - Hadoop Two Compatibility .. SUCCESS [6.360s]
[INFO] HBase - Integration Tests . SUCCESS [1.534s]
[INFO] 
[INFO] BUILD SUCCESS


 Upgrade Thrift lib to 0.9.0
 ---

 Key: HBASE-7005
 URL: https://issues.apache.org/jira/browse/HBASE-7005
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Jake Farrell
Priority: Minor
 Attachments: Hbase-7005.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6974) Metric for blocked updates

2012-10-17 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478465#comment-13478465
 ] 

Lars Hofhansl commented on HBASE-6974:
--

+1 on v4 
(super-duper minor nit: the blocked flag can be pulled into the if statement in 
the memstore flusher - I'll do that on commit, no need for a new patch).

Thanks Drz!


 Metric for blocked updates
 --

 Key: HBASE-6974
 URL: https://issues.apache.org/jira/browse/HBASE-6974
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Michael Drzal
Priority: Critical
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-6974.patch, HBASE-6974-v2.patch, 
 HBASE-6974-v3.patch, HBASE-6974-v4.patch


 When the disc subsystem cannot keep up with a sustained high write load, a 
 region will eventually block updates to throttle clients.
 (HRegion.checkResources).
 It would be nice to have a metric for this, so that these occurrences can be 
 tracked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

