[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Attachment: HBASE-9488-trunkV2.patch

> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
> results.jpg
>
>
> Currently, one scan operation makes at least 3 RPC calls:
> openScanner();
> next();
> closeScanner();
> I think we could reduce this to a single RPC call for a small scan to get better 
> performance.
> Also, using pread is better than seek+read for a small scan (for details on this 
> point, see HBASE-7266).
> The patch implements such a small scan; the performance test is as follows:
> a. Environment:
> patched on the 0.94 version
> one regionserver; 
> one client with 50 concurrent threads;
> KV size: 50/100;
> 100% LRU cache hit ratio;
> random start row for each scan
> b. Results:
> See the picture attachment.
> *Usage:*
> Scan scan = new Scan(startRow, stopRow);
> scan.setSmall(true);
> ResultScanner scanner = table.getScanner(scan);
> Set the new 'small' attribute to true on the Scan; everything else stays the same.
>  
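For reference, a fuller client-side sketch of the usage above, assuming the 0.94 
API with this patch applied (the table name, row keys, and output handling are 
placeholders, not part of the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SmallScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "testTable");       // placeholder table name
    Scan scan = new Scan(Bytes.toBytes("row-0100"),     // placeholder start row
        Bytes.toBytes("row-0200"));                     // placeholder stop row
    scan.setSmall(true);  // the new attribute: one RPC + pread instead of openScanner/next/closeScanner
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        System.out.println(result);                     // consume results as usual
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}
{code}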

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Description: 
review board:

https://reviews.apache.org/r/14059/


Currently, one scan operation makes at least 3 RPC calls:
openScanner();
next();
closeScanner();

I think we could reduce this to a single RPC call for a small scan to get better 
performance.

Also, using pread is better than seek+read for a small scan (for details on this 
point, see HBASE-7266).


The patch implements such a small scan; the performance test is as follows:

a. Environment:
patched on the 0.94 version
one regionserver; 
one client with 50 concurrent threads;
KV size: 50/100;
100% LRU cache hit ratio;
random start row for each scan


b. Results:
See the picture attachment.


*Usage:*
Scan scan = new Scan(startRow, stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute to true on the Scan; everything else stays the same.
 

  was:
Now, one scan operation would call 3 RPC at least:
openScanner();
next();
closeScanner();

I think we could reduce the RPC call to one for small scan to get better 
performance

Also using pread is better than seek+read for small scan (For this point, see 
more on HBASE-7266)


Implements such a small scan as the patch, and take the performance test as 
following:

a.Environment:
patched on 0.94 version
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment


*Usage:*
Scan scan = new Scan(startRow,stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute as true for scan, others are the same
 


> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
> results.jpg
>
>
> review board:
> https://reviews.apache.org/r/14059/
> Now, one scan operation would call 3 RPC at least:
> openScanner();
> next();
> closeScanner();
> I think we could reduce the RPC call to one for small scan to get better 
> performance
> Also using pread is better than seek+read for small scan (For this point, see 
> more on HBASE-7266)
> Implements such a small scan as the patch, and take the performance test as 
> following:
> a.Environment:
> patched on 0.94 version
> one regionserver; 
> one client with 50 concurrent threads;
> KV size:50/100;
> 100% LRU cache hit ratio;
> Random start row of scan
> b.Results:
> See the picture attachment
> *Usage:*
> Scan scan = new Scan(startRow,stopRow);
> scan.setSmall(true);
> ResultScanner scanner = table.getScanner(scan);
> Set the new 'small' attribute as true for scan, others are the same
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762800#comment-13762800
 ] 

chunhui shen commented on HBASE-9488:
-

bq. we instead pass one arg 'boolean shortScan'.
In the method HStore#getScanners:
{code}
storeFilesToScan = this.storeEngine.getStoreFileManager().getFilesForScanOrGet(
    isGet, startRow, stopRow);
{code}
The 'isGet' arg is already used there, so a new arg is needed to specify whether 
to use pread.

bq. Is this caching location? Will we cache a location across changes? i.e. 
changes in location for the HRegionInfo?
Yes, it uses the existing client region-location cache mechanism.

bq. Does this have to public +public class ClientSmallScanner extends 
AbstractClientScanner {?
The existing ClientScanner is also public; this keeps ClientSmallScanner 
consistent with it.

bq. You should instead say that the amount of data should be small and inside 
the one region.
If the scan range is within one data block, it can be considered a small scan.


bq. Should the Scan check that the stoprow is inside a single region and fail if 
not?
For now, I'd rather leave that under the user's control. For example, if the scan 
crosses multiple regions but only reads two rows, a small scan is still the 
better choice.


Improved the javadoc of Scan#small in patch V2.

review board:

https://reviews.apache.org/r/14059/



> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, test results.jpg
>
>
> Now, one scan operation would call 3 RPC at least:
> openScanner();
> next();
> closeScanner();
> I think we could reduce the RPC call to one for small scan to get better 
> performance
> Also using pread is better than seek+read for small scan (For this point, see 
> more on HBASE-7266)
> Implements such a small scan as the patch, and take the performance test as 
> following:
> a.Environment:
> patched on 0.94 version
> one regionserver; 
> one client with 50 concurrent threads;
> KV size:50/100;
> 100% LRU cache hit ratio;
> Random start row of scan
> b.Results:
> See the picture attachment
> *Usage:*
> Scan scan = new Scan(startRow,stopRow);
> scan.setSmall(true);
> ResultScanner scanner = table.getScanner(scan);
> Set the new 'small' attribute as true for scan, others are the same
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762799#comment-13762799
 ] 

Hadoop QA commented on HBASE-9482:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602297/HBASE-9482.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7112//console

This message is automatically generated.

> Do not enforce secure Hadoop for secure HBase
> -
>
> Key: HBASE-9482
> URL: https://issues.apache.org/jira/browse/HBASE-9482
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.95.2, 0.94.11
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>  Labels: security
> Fix For: 0.96.0
>
> Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch
>
>
> We should recommend, not enforce, secure Hadoop underneath as a requirement 
> for running secure HBase.
> A few of our customers have HBase clusters that expose only HBase services 
> outside the physical network; no other services (including ssh) are 
> accessible from outside such a cluster.
> However, they are forced to set up secure Hadoop and incur the penalty of the 
> security overhead at the filesystem layer even when they do not need it.
> The following code tests for both secure HBase and secure Hadoop.
> {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
>   /**
>* Returns whether or not secure authentication is enabled for HBase.  Note 
> that
>* HBase security requires HDFS security to provide any guarantees, so this 
> requires that
>* both hbase.security.authentication and 
> hadoop.security.authentication
>* are set to kerberos.
>*/
>   public static boolean isHBaseSecurityEnabled(Configuration conf) {
> return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
> "kerberos".equalsIgnoreCase(
> conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
>   }
> {code}
> What is worse, if {{"hadoop.security.authentication"}} is not set to 
> {{"kerberos"}} (undocumented at http://hbase.apache.org/book/security.html), 
> all other security configuration has no effect and HBase RPCs silently fall back to 
> unsecured mode.
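As an illustration of the proposal, a relaxed check could look roughly like the 
sketch below. This is only a sketch derived from the description above, not the 
attached patch; the LOG field and the warning text are assumptions.

{code}
  /**
   * Sketch: treat HBase security as enabled based on
   * hbase.security.authentication alone, and only warn (rather than silently
   * disable security) when Hadoop security is not also set to kerberos.
   */
  public static boolean isHBaseSecurityEnabled(Configuration conf) {
    boolean hbaseSecure =
        "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY));
    boolean hadoopSecure = "kerberos".equalsIgnoreCase(
        conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
    if (hbaseSecure && !hadoopSecure) {
      LOG.warn("HBase security is enabled, but Hadoop security is not; "
          + "HDFS-level guarantees will be missing.");
    }
    return hbaseSecure;
  }
{code}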

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-09 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Fix Version/s: 0.96.0
Affects Version/s: 0.95.2
   0.94.11
   Status: Patch Available  (was: Open)

> Do not enforce secure Hadoop for secure HBase
> -
>
> Key: HBASE-9482
> URL: https://issues.apache.org/jira/browse/HBASE-9482
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.94.11, 0.95.2
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>  Labels: security
> Fix For: 0.96.0
>
> Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch
>
>
> We should recommend and not enforce secure Hadoop underneath as a requirement 
> to run secure HBase.
> Few of our customers have HBase clusters which expose only HBase services to 
> outside the physical network and no other services (including ssh) are 
> accessible from outside of such cluster.
> However they are forced to setup secure Hadoop and incur the penalty of 
> security overhead at filesystem layer even if they do not need to.
> The following code tests for both secure HBase and secure Hadoop.
> {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
>   /**
>* Returns whether or not secure authentication is enabled for HBase.  Note 
> that
>* HBase security requires HDFS security to provide any guarantees, so this 
> requires that
>* both hbase.security.authentication and 
> hadoop.security.authentication
>* are set to kerberos.
>*/
>   public static boolean isHBaseSecurityEnabled(Configuration conf) {
> return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
> "kerberos".equalsIgnoreCase(
> conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
>   }
> {code}
> What is worse that if {{"hadoop.security.authentication"}} is not set to 
> {{"kerberos"}} (undocumented at http://hbase.apache.org/book/security.html), 
> all other configuration have no impact and HBase RPCs silently switch back to 
> unsecured mode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-09 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Attachment: HBASE-9482.patch

Patch for trunk.

> Do not enforce secure Hadoop for secure HBase
> -
>
> Key: HBASE-9482
> URL: https://issues.apache.org/jira/browse/HBASE-9482
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>  Labels: security
> Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch
>
>
> We should recommend and not enforce secure Hadoop underneath as a requirement 
> to run secure HBase.
> Few of our customers have HBase clusters which expose only HBase services to 
> outside the physical network and no other services (including ssh) are 
> accessible from outside of such cluster.
> However they are forced to setup secure Hadoop and incur the penalty of 
> security overhead at filesystem layer even if they do not need to.
> The following code tests for both secure HBase and secure Hadoop.
> {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
>   /**
>* Returns whether or not secure authentication is enabled for HBase.  Note 
> that
>* HBase security requires HDFS security to provide any guarantees, so this 
> requires that
>* both hbase.security.authentication and 
> hadoop.security.authentication
>* are set to kerberos.
>*/
>   public static boolean isHBaseSecurityEnabled(Configuration conf) {
> return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
> "kerberos".equalsIgnoreCase(
> conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
>   }
> {code}
> What is worse that if {{"hadoop.security.authentication"}} is not set to 
> {{"kerberos"}} (undocumented at http://hbase.apache.org/book/security.html), 
> all other configuration have no impact and HBase RPCs silently switch back to 
> unsecured mode.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler

2013-09-09 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762774#comment-13762774
 ] 

Chao Shi commented on HBASE-8884:
-

stack, could you please explain a little bit more about "pooling of buffers across 
requests"? I don't quite understand. In fact, the very first rationale for us 
to introduce a pluggable RpcScheduler is that we want to isolate read and write 
ops, so we can simply write an RpcScheduler with two thread-pools. My case is 
pretty simple, and I'm interested to hear about your use case.
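For what it's worth, the read/write isolation idea can be sketched with two 
executors. The snippet below is only a toy illustration of that idea; it does 
not implement the RpcScheduler interface from the patch, and the dispatch 
signature is made up.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy sketch: two pools so that slow writes cannot starve reads (and vice versa).
public class TwoPoolScheduler {
  private final ExecutorService readPool = Executors.newFixedThreadPool(20);
  private final ExecutorService writePool = Executors.newFixedThreadPool(10);

  /** Route each call to the pool for its operation type. */
  public void dispatch(Runnable call, boolean isRead) {
    (isRead ? readPool : writePool).execute(call);
  }

  public void stop() {
    readPool.shutdown();
    writePool.shutdown();
  }
}
{code}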

> Pluggable RpcScheduler
> --
>
> Key: HBASE-8884
> URL: https://issues.apache.org/jira/browse/HBASE-8884
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0
>
> Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
> hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
> hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch
>
>
> Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
> isolated thread-pools based on their priority. In the current implementation, 
> all normal get/put requests use the same pool. We'd like to add some 
> per-user or per-region level isolation, so that a misbehaving user/region cannot 
> easily saturate the thread-pool and cause a DoS for others. The idea is 
> similar to the FairScheduler in MR. The current scheduling code is not standalone 
> and is mixed in with other code (Connection#processRequest). This issue is the first 
> step: extract it into an interface, so that people are free to write and test 
> their own implementations.
> This patch doesn't make it completely pluggable yet, as some parameters are 
> passed via the constructor. This is because HMaster and HRegionServer both use 
> RpcServer and they have different thread-pool size configs. Let me know if you 
> have a solution to this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762772#comment-13762772
 ] 

Hudson commented on HBASE-8930:
---

FAILURE: Integrated in HBase-0.94 #1145 (See 
[https://builds.apache.org/job/HBase-0.94/1145/])
HBASE-8930 REAPPLY with testfix (larsh: rev 1521356)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java
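
For context on the quoted issue below: the test filter it describes unconditionally 
includes each KeyValue and asks for the next column, printing the qualifier so that 
evaluations of non-requested columns become visible. A minimal sketch against the 
0.94 Filter API (not the committed test code; the Writable methods are left empty 
here because this sketch carries no state):

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the filter behavior described below: include every KV, move to the
// next column, and print the qualifier so extra evaluations show up.
public class AlwaysNextColFilter extends FilterBase {
  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    System.out.println("evaluated qualifier: " + Bytes.toStringBinary(
        kv.getBuffer(), kv.getQualifierOffset(), kv.getQualifierLength()));
    return ReturnCode.INCLUDE_AND_NEXT_COL;
  }

  // No state to serialize for this sketch (0.94 filters are Writable).
  @Override
  public void write(DataOutput out) throws IOException {}

  @Override
  public void readFields(DataInput in) throws IOException {}
}
{code}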


> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row, requesting a subset of those columns, and use a filter to print the KVs it evaluates
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints each KV's qualifier.
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P = Persisted
> R = Requested
> E = Evaluated
> X = Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SU

[jira] [Commented] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762757#comment-13762757
 ] 

stack commented on HBASE-9481:
--

[~jxiang] Do you see any danger here as the gap between what is in zk and what 
is in the in-memory state widens? Could we use a different data structure here, one 
that is tolerant of concurrent modifications (I suppose copy-on-write is no longer in 
style after Elliott's recent experience)?
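
To make the failure mode and the data-structure question concrete: iterating a 
plain HashMap's values while the map is modified throws the exception in the 
quoted stack trace, while a weakly consistent structure such as ConcurrentHashMap 
tolerates it at the cost of possibly seeing a slightly stale view. A toy sketch 
(made-up keys, not the RegionStates code):

{code}
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeSketch {
  public static void main(String[] args) {
    // Plain HashMap: structurally modifying the map while iterating its values
    // throws ConcurrentModificationException -- the failure in the stack trace.
    Map<String, String> plain = new HashMap<String, String>();
    plain.put("region-a", "OPEN");
    plain.put("region-b", "OPEN");
    try {
      for (String state : plain.values()) {
        plain.remove("region-b");                // modification during iteration
      }
    } catch (ConcurrentModificationException e) {
      System.out.println("HashMap iteration failed: " + e);
    }

    // ConcurrentHashMap: its iterators are weakly consistent, so concurrent
    // modification does not throw; the trade-off is a possibly stale view,
    // which is exactly the zk-vs-memory gap being asked about.
    Map<String, String> tolerant = new ConcurrentHashMap<String, String>();
    tolerant.put("region-a", "OPEN");
    tolerant.put("region-b", "OPEN");
    for (String state : tolerant.values()) {
      tolerant.remove("region-b");               // no exception here
    }
    System.out.println("ConcurrentHashMap survived: " + tolerant);
  }
}
{code}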

> Servershutdown handler get aborted with ConcurrentModificationException
> ---
>
> Key: HBASE-9481
> URL: https://issues.apache.org/jira/browse/HBASE-9481
> Project: HBase
>  Issue Type: Bug
>  Components: MTTR
>Affects Versions: 0.96.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9481.patch
>
>
> In integration tests, we found SSH got aborted with following stack trace:
> {code}
> 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
> processing event M_SERVER_SHUTDOWN
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
> at java.util.HashMap$ValueIterator.next(HashMap.java:822)
> at 
> org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
> at 
> org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9469) Synchronous replication

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762755#comment-13762755
 ] 

Feng Honghua commented on HBASE-9469:
-

[~lhofhansl] Yes, the better data safety/consistency of synchronous replication 
comes at the cost of higher latency. Maybe it's more acceptable to make it 
configurable per-peer/per-table; let me try to provide a patch accordingly.

> Synchronous replication
> ---
>
> Key: HBASE-9469
> URL: https://issues.apache.org/jira/browse/HBASE-9469
> Project: HBase
>  Issue Type: New Feature
>Reporter: Feng Honghua
>Priority: Minor
>
> Scenario: 
> Clusters A and B use master-master replication: the client writes to cluster A, A 
> pushes all writes to cluster B, and when cluster A goes down, the client switches 
> to writing to cluster B.
> But this write switch is unsafe because replication between A and B is 
> asynchronous: a delete issued to cluster B that aims to remove a put written 
> earlier can miss its target, because that put was written to cluster A and was not 
> successfully pushed to B before A went down. It is even worse if the delete is 
> collected (a flush followed by a major compaction) before cluster A comes back up and 
> the put is eventually pushed to B: the put will never be deleted.
> Can we provide per-table/per-peer synchronous replication which ships the 
> corresponding hlog entry of a write before acknowledging the write to the client? That 
> way we can guarantee the client that every write request that was acknowledged as 
> successful on cluster A is already present in cluster B 
> as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762753#comment-13762753
 ] 

Lars Hofhansl commented on HBASE-9490:
--

Running the small tests in a shared JVM was by design. [~nkeywal] made it so for performance. 
Let's not undo that.
If some of the tests cannot share a JVM, they should be fixed or changed to 
medium tests.
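
As a toy illustration of the order dependence described in the quoted issue 
(hypothetical classes, not the real SchemaMetrics tests): a static flag flipped 
by one small test changes what a later test in the same JVM sees.

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical stand-in for static state shared by every test in one forked JVM.
class SharedMetricsState {
  static boolean useTableNameGlobally = true;

  static String metricPrefix(String table, String cf) {
    return (useTableNameGlobally ? "tbl." + table + "." : "") + "cf." + cf + ".";
  }
}

public class OrderDependentTest {
  @Test
  public void testDisablesGlobalTableName() {
    SharedMetricsState.useTableNameGlobally = false;   // leaks into later tests
    assertEquals("cf.cf.", SharedMetricsState.metricPrefix("testtable", "cf"));
  }

  @Test
  public void testExpectsTablePrefix() {
    // Passes when run first; fails when the test above ran earlier in the same
    // JVM, because the static flag is still false.
    assertEquals("tbl.testtable.cf.cf.",
        SharedMetricsState.metricPrefix("testtable", "cf"));
  }
}
{code}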

> Provide independent execution environment for small tests
> -
>
> Key: HBASE-9490
> URL: https://issues.apache.org/jira/browse/HBASE-9490
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vasu Mariyala
>Assignee: Vasu Mariyala
> Attachments: 0.94-Independent-Test-Execution.patch, 
> 0.96-trunk-Independent-Test-Execution.patch
>
>
> Some of the state related to schema metrics is stored in static variables, and 
> since the small test cases all run in a single JVM, this causes non-deterministic 
> behavior in the output of the tests.
> An example scenario is the test case failures in HBASE-8930
> {code}
> for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
> if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
>   throw new AssertionError("Column family prefix used twice: " +
>   metricName);
> }
> {code}
> The above code throws an error when the metric name starts with "cf.cf.". It 
> would be helpful if anyone could shed some light on the reason behind checking 
> for "cf.cf."
> The scenarios in which we would have a metric name start with "cf.cf." are as 
> follows (See generateSchemaMetricsPrefix method of SchemaMetrics)
> a) The column family name should be "cf"
> AND
> b) The table name is either "" or use table name globally should be false 
> (useTableNameGlobally variable of SchemaMetrics).
> Table name is empty only in the case of ALL_SCHEMA_METRICS which has the 
> column family as "". So we could rule out the
> possibility of the table name being empty.
> Also to note, the variables "useTableNameGlobally" and 
> "tableAndFamilyToMetrics" of SchemaMetrics are static and are shared across 
> all the tests that run in a single jvm. In our case, the profile runAllTests 
> has the below configuration
> {code}
> once
> none
> 1
>   
> org.apache.hadoop.hbase.SmallTests
> {code}
> Hence all of our small tests run in a single jvm and share the above 
> variables "useTableNameGlobally" and "tableAndFamilyToMetrics".
> The reasons why the order of execution of the tests caused this failure are 
> as follows
> a) A bunch of small tests, like TestMemStore and TestSchemaConfigured, set 
> useTableNameGlobally to false. But these tests don't create tables that have 
> "cf" as the column family name.
> b) If the tests in step (a) run before the tests which create table/regions 
> with column family 'cf', metric names would start with "cf.cf."
> c) If any other tests, like the failed tests (TestScannerSelectionUsingTTL, 
> TestHFileReaderV1, TestScannerSelectionUsingKeyRange), validate schema 
> metrics, they fail because the metric names start with "cf.cf."
> On my local machine, I have tried to re-create the failure scenario by 
> changing the sure fire test configuration and creating a simple (TestSimple) 
> which just creates a region for the table 'testtable' and column family 'cf'.
> {code}
> TestSimple.java
> --
>   @Before
>   public void setUp() throws Exception {
> HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
> htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
> HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
> this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
> TEST_UTIL.getConfiguration(), htd);
> Put put = new Put(ROW_BYTES);
> for (int i = 0; i < 10; i += 2) {
>   // puts 0, 2, 4, 6 and 8
>   put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
>   Bytes.toBytes(VALUE_PREFIX + i));
> }
> this.region.put(put);
> this.region.flushcache();
>   }
>   @Test
>   public void testFilterInvocation() throws Exception {
> System.out.println("testing");
>   }
>   @After
>   public void tearDown() throws Exception {
> HLog hlog = region.getLog();
> region.close();
> hlog.closeAndDelete();
>   }
> Successful run:
> ---
>  T E S T S
> ---
> 2013-09-09 15:38:03.478 java[46562:db03] Unable to load realm mapping info 
> from SCDynamicStore
> Running org.apache.hadoop.hbase.filter.TestSimple
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec
> Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
> Tests

[jira] [Commented] (HBASE-9468) Previous active master can still serves RPC request when it is trying recovering expired zk session

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762754#comment-13762754
 ] 

Feng Honghua commented on HBASE-9468:
-

[~stack] OK, I'll provide a patch according to the comment

> Previous active master can still serves RPC request when it is trying 
> recovering expired zk session
> ---
>
> Key: HBASE-9468
> URL: https://issues.apache.org/jira/browse/HBASE-9468
> Project: HBase
>  Issue Type: Bug
>Reporter: Feng Honghua
>
> When the active master's zk session expires, it'll try to recover zk session, 
> but without turn off its RpcServer. What if a previous backup master has 
> already become the now active master, and some client tries to send request 
> to this expired master by using the cached master info? Any problem here?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-9490:
-

Attachment: 0.96-trunk-Independent-Test-Execution.patch
0.94-Independent-Test-Execution.patch

> Provide independent execution environment for small tests
> -
>
> Key: HBASE-9490
> URL: https://issues.apache.org/jira/browse/HBASE-9490
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vasu Mariyala
>Assignee: Vasu Mariyala
> Attachments: 0.94-Independent-Test-Execution.patch, 
> 0.96-trunk-Independent-Test-Execution.patch
>
>
> Some of the state related to schema metrics is stored in static variables and 
> since the small test cases are run in a single jvm, it is causing random 
> behavior in the output of the tests.
> An example scenario is the test case failures in HBASE-8930
> {code}
> for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
> if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
>   throw new AssertionError("Column family prefix used twice: " +
>   metricName);
> }
> {code}
> The above code throws an error when the metric name starts with "cf.cf.". It 
> would be helpful if any one sheds some light on the reason behind checking 
> for "cf.cf."
> The scenarios in which we would have a metric name start with "cf.cf." are as 
> follows (See generateSchemaMetricsPrefix method of SchemaMetrics)
> a) The column family name should be "cf"
> AND
> b) The table name is either "" or use table name globally should be false 
> (useTableNameGlobally variable of SchemaMetrics).
> Table name is empty only in the case of ALL_SCHEMA_METRICS which has the 
> column family as "". So we could rule out the
> possibility of the table name being empty.
> Also to note, the variables "useTableNameGlobally" and 
> "tableAndFamilyToMetrics" of SchemaMetrics are static and are shared across 
> all the tests that run in a single jvm. In our case, the profile runAllTests 
> has the below configuration
> {code}
> once
> none
> 1
>   
> org.apache.hadoop.hbase.SmallTests
> {code}
> Hence all of our small tests run in a single jvm and share the above 
> variables "useTableNameGlobally" and "tableAndFamilyToMetrics".
> The reasons why the order of execution of the tests caused this failure are 
> as follows
> a) A bunch of small tests like TestMemStore, TestSchemaConfiguredset set the 
> useTableNameGlobally to false. But these tests don't create tables that have 
> the column family name as "cf".
> b) If the tests in step (a) run before the tests which create table/regions 
> with column family 'cf', metric names would start with "cf.cf."
> c) If any of other tests, like the failed tests(TestScannerSelectionUsingTTL, 
> TestHFileReaderV1, TestScannerSelectionUsingKeyRange), validate schema 
> metrics, they would fail as the metric names start with "cf.cf."
> On my local machine, I have tried to re-create the failure scenario by 
> changing the sure fire test configuration and creating a simple (TestSimple) 
> which just creates a region for the table 'testtable' and column family 'cf'.
> {code}
> TestSimple.java
> --
>   @Before
>   public void setUp() throws Exception {
> HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
> htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
> HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
> this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
> TEST_UTIL.getConfiguration(), htd);
> Put put = new Put(ROW_BYTES);
> for (int i = 0; i < 10; i += 2) {
>   // puts 0, 2, 4, 6 and 8
>   put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
>   Bytes.toBytes(VALUE_PREFIX + i));
> }
> this.region.put(put);
> this.region.flushcache();
>   }
>   @Test
>   public void testFilterInvocation() throws Exception {
> System.out.println("testing");
>   }
>   @After
>   public void tearDown() throws Exception {
> HLog hlog = region.getLog();
> region.close();
> hlog.closeAndDelete();
>   }
> Successful run:
> ---
>  T E S T S
> ---
> 2013-09-09 15:38:03.478 java[46562:db03] Unable to load realm mapping info 
> from SCDynamicStore
> Running org.apache.hadoop.hbase.filter.TestSimple
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec
> Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec
> Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange

[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split

2013-09-09 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9249:
--

Attachment: HBASE-9249_v8.patch

Rebased the patch against the latest code.

> Add cp hook before setting PONR in split
> 
>
> Key: HBASE-9249
> URL: https://issues.apache.org/jira/browse/HBASE-9249
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.98.0
>
> Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, 
> HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, 
> HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch, 
> HBASE-9249_v8.patch
>
>
> This hook helps to perform the split on a user region and its corresponding index 
> region such that either both are split or neither is.
> With this hook, the split of the user and index regions proceeds as follows:
> user region
> ===
> 1) Create the splitting znode for the user region split
> 2) Close the parent user region
> 3) Split the user region storefiles
> 4) Instantiate the child regions of the user region
> Through the new hook we can drive the index region transitions as below:
> index region
> ===
> 5) Create the splitting znode for the index region split
> 6) Close the parent index region
> 7) Split the storefiles of the index region
> 8) Instantiate the child regions of the index region
> If anything fails in steps 5-8, roll back those steps and return null; on a null return, 
> throw an exception so that steps 1-4 are rolled back as well.
> 9) Set the PONR
> 10) Do a batch put of the offline and split entries for the user and index regions
> index region
> ===
> 11) Open the daughters of the index region and transition the znode to split. This step 
> is done through the preSplitAfterPONR hook. Opening the index regions before 
> opening the user regions helps to avoid put failures if there is a colocation 
> mismatch (this can happen if the user regions have finished opening while the index regions 
> are still opening).
> user region
> ===
> 12) Open the daughters of the user region and transition the znode to split.
> Even if the region server crashes, at the end either both the user and index regions 
> will be split or neither will be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762748#comment-13762748
 ] 

Vasu Mariyala commented on HBASE-8930:
--

No, whatever has been checked in is correct. I just re-attached the rev6 patch 
to let Hadoop QA run the precommit again, as the test failures reported in the 
earlier email are not related to this patch.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
> | 5606 |... 
> |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
> |  |...
> |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
> |  |...
> |  |  E   |   E  |  |  |  E   |   E  |  |  |  
> {color:red}E{color}   |  |  |...
> |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
> |  |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762747#comment-13762747
 ] 

Lars Hofhansl commented on HBASE-8930:
--

NM. rev6 has the "mycf" fix in it. The other patches have it too. All good.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
> | 5606 |... 
> |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
> |  |...
> |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
> |  |...
> |  |  E   |   E  |  |  |  E   |   E  |  |  |  
> {color:red}E{color}   |  |  |...
> |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
> |  |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
>

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762745#comment-13762745
 ] 

Lars Hofhansl commented on HBASE-8930:
--

How does rev6 differ from rev5? I committed all the latest patches now. Do the 0.94 and 
0.96 patches need an update, [~vasu.mariy...@gmail.com]?

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always return ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 
> | 5606 |... 
> |  |  P   |   P  |  |  |  P   |   P  |  |  |  P   |   P  
> |  |...
> |  |  R   |   R  |   R  |  |  R   |   R  |   R  |  |  |  
> |  |...
> |  |  E   |   E  |  |  |  E   |   E  |  |  |  
> {color:red}E{color}   |  |  |...
> |  |  X   |   X  |  |  |  X   |   X  |  |  |  |  
> |  |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
>   

[jira] [Commented] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762742#comment-13762742
 ] 

Vasu Mariyala commented on HBASE-9490:
--

The following are the possible solutions:

a) Change the category of the tests that mutate the static variables to the medium 
category ([~lhofhansl]'s suggestion)

b) Change the tests to run in separate JVMs (3:05.855s vs 5:09.840s on my 
local machine). Currently, with -PlocalTests, the small tests already run in 
separate JVMs, so the time increase would mostly be on the build machine.
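
Option (a) amounts to an annotation change on the offending tests. For example 
(the class shown is just one of the tests named in the description; SmallTests 
and MediumTests are the existing category markers):

{code}
import org.apache.hadoop.hbase.MediumTests;
import org.junit.experimental.categories.Category;

// Previously @Category(SmallTests.class); as a medium test it no longer shares
// the single small-test JVM, so its changes to static state cannot leak into
// other small tests.
@Category(MediumTests.class)
public class TestSchemaConfigured {
  // ... existing test methods unchanged ...
}
{code}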

> Provide independent execution environment for small tests
> -
>
> Key: HBASE-9490
> URL: https://issues.apache.org/jira/browse/HBASE-9490
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vasu Mariyala
>Assignee: Vasu Mariyala
>
> Some of the state related to schema metrics is stored in static variables and 
> since the small test cases are run in a single jvm, it is causing random 
> behavior in the output of the tests.
> An example scenario is the test case failures in HBASE-8930
> {code}
> for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
> if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
>   throw new AssertionError("Column family prefix used twice: " +
>   metricName);
> }
> {code}
> The above code throws an error when the metric name starts with "cf.cf.". It 
> would be helpful if any one sheds some light on the reason behind checking 
> for "cf.cf."
> The scenarios in which we would have a metric name start with "cf.cf." are as 
> follows (See generateSchemaMetricsPrefix method of SchemaMetrics)
> a) The column family name should be "cf"
> AND
> b) The table name is either "" or use table name globally should be false 
> (useTableNameGlobally variable of SchemaMetrics).
> Table name is empty only in the case of ALL_SCHEMA_METRICS which has the 
> column family as "". So we could rule out the
> possibility of the table name being empty.
> Also to note, the variables "useTableNameGlobally" and 
> "tableAndFamilyToMetrics" of SchemaMetrics are static and are shared across 
> all the tests that run in a single jvm. In our case, the profile runAllTests 
> has the below configuration
> {code}
> once
> none
> 1
>   
> org.apache.hadoop.hbase.SmallTests
> {code}
> Hence all of our small tests run in a single jvm and share the above 
> variables "useTableNameGlobally" and "tableAndFamilyToMetrics".
> The reasons why the order of execution of the tests caused this failure are 
> as follows
> a) A bunch of small tests like TestMemStore, TestSchemaConfiguredset set the 
> useTableNameGlobally to false. But these tests don't create tables that have 
> the column family name as "cf".
> b) If the tests in step (a) run before the tests which create table/regions 
> with column family 'cf', metric names would start with "cf.cf."
> c) If any of other tests, like the failed tests(TestScannerSelectionUsingTTL, 
> TestHFileReaderV1, TestScannerSelectionUsingKeyRange), validate schema 
> metrics, they would fail as the metric names start with "cf.cf."
> On my local machine, I have tried to re-create the failure scenario by 
> changing the sure fire test configuration and creating a simple (TestSimple) 
> which just creates a region for the table 'testtable' and column family 'cf'.
> {code}
> TestSimple.java
> --
>   @Before
>   public void setUp() throws Exception {
> HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
> htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
> HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
> this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
> TEST_UTIL.getConfiguration(), htd);
> Put put = new Put(ROW_BYTES);
> for (int i = 0; i < 10; i += 2) {
>   // puts 0, 2, 4, 6 and 8
>   put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
>   Bytes.toBytes(VALUE_PREFIX + i));
> }
> this.region.put(put);
> this.region.flushcache();
>   }
>   @Test
>   public void testFilterInvocation() throws Exception {
> System.out.println("testing");
>   }
>   @After
>   public void tearDown() throws Exception {
> HLog hlog = region.getLog();
> region.close();
> hlog.closeAndDelete();
>   }
> Successful run:
> ---
>  T E S T S
> ---
> 2013-09-09 15:38:03.478 java[46562:db03] Unable to load realm mapping info 
> from SCDynamicStore
> Running org.apache.hadoop.hbase.filter.TestSimple
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elap

[jira] [Comment Edited] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762731#comment-13762731
 ] 

Lars Hofhansl edited comment on HBASE-8930 at 9/10/13 5:21 AM:
---

Re-applied. Thanks Vasu.

Edit: I cannot spell.

  was (Author: lhofhansl):
Rapplied. Thanks Vasu.
  
> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col

[jira] [Commented] (HBASE-8496) Implement tags and the internals of how a tag should look like

2013-09-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762743#comment-13762743
 ] 

ramkrishna.s.vasudevan commented on HBASE-8496:
---

Updated the RB: posted a new RB at https://reviews.apache.org/r/13311/.
This change has tags with V3, the HFileContext changes, and also makes tags optional in V3.
All testcases pass.  Ran the PE and LoadTestTool on a single machine and on a 
cluster with 4 nodes.
Ensured that HFiles written with version 2 can be read back with version 3 by switching 
versions.  Request you to provide feedback/reviews so that we can take this 
into 0.98.

> Implement tags and the internals of how a tag should look like
> --
>
> Key: HBASE-8496
> URL: https://issues.apache.org/jira/browse/HBASE-8496
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.98.0, 0.95.2
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Attachments: Comparison.pdf, HBASE-8496_2.patch, HBASE-8496.patch, 
> Tag design.pdf, Tag design_updated.pdf, Tag_In_KV_Buffer_For_reference.patch
>
>
> The intent of this JIRA comes from HBASE-7897.
> This would help us to decide on the structure and format of how the tags 
> should look like. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9490) Provide independent execution environment for small tests

2013-09-09 Thread Vasu Mariyala (JIRA)
Vasu Mariyala created HBASE-9490:


 Summary: Provide independent execution environment for small tests
 Key: HBASE-9490
 URL: https://issues.apache.org/jira/browse/HBASE-9490
 Project: HBase
  Issue Type: Improvement
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala


Some of the state related to schema metrics is stored in static variables, and 
since the small test cases are run in a single JVM, this causes random 
behavior in the output of the tests.

An example scenario is the test case failures in HBASE-8930

{code}

for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
  throw new AssertionError("Column family prefix used twice: " +
  metricName);
}

{code}

The above code throws an error when the metric name starts with "cf.cf.". It 
would be helpful if anyone sheds some light on the reason behind checking for 
"cf.cf."

The scenarios in which we would have a metric name start with "cf.cf." are as 
follows (See generateSchemaMetricsPrefix method of SchemaMetrics)

a) The column family name should be "cf"

AND

b) The table name is either "" or use table name globally should be false 
(useTableNameGlobally variable of SchemaMetrics).
Table name is empty only in the case of ALL_SCHEMA_METRICS which has the column 
family as "". So we could rule out the
possibility of the table name being empty.
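
As a rough illustration of how the doubled prefix can arise, here is a simplified sketch (this is not the actual SchemaMetrics code, just the behavior described above):

{code}
// Simplified illustration only; the real SchemaMetrics.generateSchemaMetricsPrefix
// differs in detail.
public class PrefixSketch {
  static String generatePrefix(String tableName, String cfName,
      boolean useTableNameGlobally) {
    StringBuilder sb = new StringBuilder();
    if (useTableNameGlobally && !tableName.isEmpty()) {
      sb.append("tbl.").append(tableName).append(".");
    }
    // The prefix always carries the literal "cf." marker followed by the family
    // name, so with useTableNameGlobally == false a family literally named "cf"
    // produces "cf.cf." and trips the assertion quoted above.
    sb.append("cf.").append(cfName).append(".");
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(generatePrefix("", "cf", false)); // prints "cf.cf."
  }
}
{code}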

Also to note, the variables "useTableNameGlobally" and 
"tableAndFamilyToMetrics" of SchemaMetrics are static and are shared across all 
the tests that run in a single jvm. In our case, the profile runAllTests has 
the below configuration

{code}
once
none
1
  
org.apache.hadoop.hbase.SmallTests

{code}

Hence all of our small tests run in a single jvm and share the above variables 
"useTableNameGlobally" and "tableAndFamilyToMetrics".

The reasons why the order of execution of the tests caused this failure are as 
follows

a) A bunch of small tests like TestMemStore and TestSchemaConfigured set the 
useTableNameGlobally to false. But these tests don't create tables that have 
the column family name as "cf".

b) If the tests in step (a) run before the tests which create table/regions 
with column family 'cf', metric names would start with "cf.cf."

c) If any of other tests, like the failed tests(TestScannerSelectionUsingTTL, 
TestHFileReaderV1, TestScannerSelectionUsingKeyRange), validate schema metrics, 
they would fail as the metric names start with "cf.cf."

On my local machine, I have tried to re-create the failure scenario by changing 
the surefire test configuration and creating a simple test (TestSimple) which just 
creates a region for the table 'testtable' and column family 'cf'.

{code}
TestSimple.java
--
  @Before
  public void setUp() throws Exception {
HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
TEST_UTIL.getConfiguration(), htd);

Put put = new Put(ROW_BYTES);
for (int i = 0; i < 10; i += 2) {
  // puts 0, 2, 4, 6 and 8
  put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
  Bytes.toBytes(VALUE_PREFIX + i));
}
this.region.put(put);
this.region.flushcache();
  }

  @Test
  public void testFilterInvocation() throws Exception {
System.out.println("testing");
  }

  @After
  public void tearDown() throws Exception {
HLog hlog = region.getLog();
region.close();
hlog.closeAndDelete();
  }

Successful run:

---
 T E S T S
---
2013-09-09 15:38:03.478 java[46562:db03] Unable to load realm mapping info from 
SCDynamicStore
Running org.apache.hadoop.hbase.filter.TestSimple
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.342 sec
Running org.apache.hadoop.hbase.io.hfile.TestHFileReaderV1
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingKeyRange
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec
Running org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.618 sec
Running org.apache.hadoop.hbase.regionserver.TestMemStore
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.542 sec

Results :

Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
--

Failed run order:

--

[jira] [Comment Edited] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762731#comment-13762731
 ] 

Lars Hofhansl edited comment on HBASE-8930 at 9/10/13 5:20 AM:
---

Rapplied. Thanks Vasu.

  was (Author: lhofhansl):
Repplied. Thanks Vasu.
  
> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short)

[jira] [Commented] (HBASE-9245) Remove dead or deprecated code from hbase 0.96

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762741#comment-13762741
 ] 

stack commented on HBASE-9245:
--

bq. Should there be a sub-JIRA to brainstorm how people would build against 
both 94 and 96? Would projects like Hive, Mahout, etc. need HBase shim similar 
to Hadoop?

Yes [~sershe]. I'd be interested in this one.  Should be recommendations, list 
of issues.  Hopefully it will involve the amount of work hbase project went 
through making it so we could do hadoop1 and hadoop2.

> Remove dead or deprecated code from hbase 0.96
> --
>
> Key: HBASE-9245
> URL: https://issues.apache.org/jira/browse/HBASE-9245
> Project: HBase
>  Issue Type: Bug
>Reporter: Jonathan Hsieh
>
> This is an umbrella issue that will cover the removal or refactoring of 
> dangling dead code and cruft.  Some can make it into 0.96, some may have to 
> wait for an 0.98.  The "great culling" of code will be grouped patches that 
> are logically related.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9377) Backport HBASE- 9208 "ReplicationLogCleaner slow at large scale"

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762740#comment-13762740
 ] 

Lars Hofhansl commented on HBASE-9377:
--

Actually, all the involved classes and interfaces are marked with 
InterfaceAudience.Private, so we can just change them anyway.

Please let me know if anybody has any objections to my slightly modified patch.

> Backport HBASE- 9208 "ReplicationLogCleaner slow at large scale"
> 
>
> Key: HBASE-9377
> URL: https://issues.apache.org/jira/browse/HBASE-9377
> Project: HBase
>  Issue Type: Task
>  Components: Replication
>Reporter: stack
>Assignee: Lars Hofhansl
> Fix For: 0.94.12
>
> Attachments: 9377.txt
>
>
> For [~lhofhansl] to make a  call on.  See end where Dave Latham talks about 
> issues w/ patch in 0.94.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9271) Doc the major differences between 0.94 and 0.96; a distillation of release notes for those w/ limited attention

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762738#comment-13762738
 ] 

stack commented on HBASE-9271:
--

TODO: Steal this nice writeup of [~jmhsieh]'s on "Why Cell" as part of the 
high-level summary of major differences: 
https://issues.apache.org/jira/browse/HBASE-9245?focusedCommentId=13762140&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13762140

> Doc the major differences between 0.94 and 0.96; a distillation of release 
> notes for those w/ limited attention
> ---
>
> Key: HBASE-9271
> URL: https://issues.apache.org/jira/browse/HBASE-9271
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: stack
> Fix For: 0.96.0
>
>
> HBASE-8450 changes base configs in some ways that may be surprising.  We 
> should mention this in any release note distillation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9487) create_namespace with property value throws error

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762734#comment-13762734
 ] 

stack commented on HBASE-9487:
--

+1

> create_namespace with property value throws error
> -
>
> Key: HBASE-9487
> URL: https://issues.apache.org/jira/browse/HBASE-9487
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-9487_v1.patch
>
>
> Creating a namespace with properties fails from shell: 
> {code}
> hbase(main):002:0> create_namespace 'ns1',{'PROERTY_NAME'=>'PROPERTY_VALUE'}
> ERROR: undefined method `addProperty' for 
> #
> Here is some help for this command:
> Create namespace; pass namespace name,
> and optionally a dictionary of namespace configuration.
> Examples:
> hbase> create_namespace 'ns1'
> hbase> create_namespace 'ns1', {'PROERTY_NAME'=>'PROPERTY_VALUE'}
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762730#comment-13762730
 ] 

stack commented on HBASE-9488:
--

[~zjushch] Doesn't Get do a 'short scan' using pread?  Should we change the 
args passed so rather than 'boolean isGet, boolean usePread', we instead pass 
one arg 'boolean shortScan'? 

Is this caching location?

+  public HRegionInfo getHRegionInfo() {
+if (this.location == null) {
+  return null;
+}
+return this.location.getRegionInfo();

Will we cache a location across changes?  i.e. changes in location for the 
HRegionInfo?

Does this have to be public: +public class ClientSmallScanner extends 
AbstractClientScanner {?

You have a better explanation of what the limitations are elsewhere in your 
patch than this which you have as javadoc:

{code}
+   * This is false by default which means use seek + read. If set this to true,
+   * the server will use pread.
{code}

You should instead say that the amount of data should be small and inside the 
one region.  That we do pread should be incidental info.

Should the Scan check that the stoprow is inside a single region and fail if 
not?  Or just fall back to old behavior?  Is that what we do?

I skimmed the rest.  Looks good.
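
For illustration only (not part of the patch; table and row names are placeholders), a client-side guard that checks the range fits one region before setting 'small' could look roughly like this against the 0.94 client API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SmallScanGuard {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] startRow = Bytes.toBytes("row-000");
    byte[] stopRow  = Bytes.toBytes("row-100");

    HTable table = new HTable(conf, "testTable");
    // Locate the region holding startRow and check that stopRow does not cross its end key.
    HRegionLocation loc = table.getRegionLocation(startRow);
    byte[] endKey = loc.getRegionInfo().getEndKey();
    boolean singleRegion = Bytes.equals(endKey, HConstants.EMPTY_END_ROW)
        || Bytes.compareTo(stopRow, endKey) <= 0;

    Scan scan = new Scan(startRow, stopRow);
    scan.setSmall(singleRegion); // only take the one-RPC path when the range fits one region
    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      System.out.println(Bytes.toString(r.getRow()));
    }
    scanner.close();
    table.close();
  }
}
{code}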

> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, test results.jpg
>
>
> Now, one scan operation would call 3 RPC at least:
> openScanner();
> next();
> closeScanner();
> I think we could reduce the RPC call to one for small scan to get better 
> performance
> Also using pread is better than seek+read for small scan (For this point, see 
> more on HBASE-7266)
> Implements such a small scan as the patch, and take the performance test as 
> following:
> a.Environment:
> patched on 0.94 version
> one regionserver; 
> one client with 50 concurrent threads;
> KV size:50/100;
> 100% LRU cache hit ratio;
> Random start row of scan
> b.Results:
> See the picture attachment
> *Usage:*
> Scan scan = new Scan(startRow,stopRow);
> scan.setSmall(true);
> ResultScanner scanner = table.getScanner(scan);
> Set the new 'small' attribute as true for scan, others are the same
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9456) Meta doesn't get assigned in a master failure scenario

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762732#comment-13762732
 ] 

stack commented on HBASE-9456:
--

+1

Seems low risk.

> Meta doesn't get assigned in a master failure scenario
> --
>
> Key: HBASE-9456
> URL: https://issues.apache.org/jira/browse/HBASE-9456
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: 9456-1.txt, 9456-2.txt
>
>
> The flow:
> 1. Cluster is up, meta is assigned to some server
> 2. Master is killed
> 3. Master is brought up, it is initializing. It learns about the Meta server 
> (in assignMeta).
> 4. Server holding meta is killed
> 5. Meta never gets reassigned since the SSH wasn't enabled

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-8930:
-

   Resolution: Fixed
Fix Version/s: (was: 0.94.13)
   0.94.12
   Status: Resolved  (was: Patch Available)

Repplied. Thanks Vasu.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Byte

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Status: Patch Available  (was: Open)

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Status: Open  (was: Patch Available)

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev6.patch

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: (was: HBASE-8930-rev6.patch)

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting only a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |... 
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flus

[jira] [Commented] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762725#comment-13762725
 ] 

Hadoop QA commented on HBASE-9485:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602269/9485-v1.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.rest.client.TestRemoteTable.testGet(TestRemoteTable.java:125)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7110//console

This message is automatically generated.

> TableOutputCommitter should implement recovery if we don't want jobs to start 
> from 0 on RM restart
> --
>
> Key: HBASE-9485
> URL: https://issues.apache.org/jira/browse/HBASE-9485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9485-v1.txt
>
>
> HBase extends OutputCommitter in a way that turns recovery off, meaning all completed 
> maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
> implements recovery, so we should look at that to see what is potentially 
> needed for recovery.
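
For reference, recovery support in a MapReduce OutputCommitter is opt-in; a committer that tolerates RM restarts looks roughly like the sketch below (illustrative only, not the actual TableOutputCommitter change; it only relies on the Hadoop mapreduce OutputCommitter API):

{code}
import java.io.IOException;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Illustrative sketch only. Since TableOutputFormat writes straight to HBase,
// there is nothing to move on commit; the committer mainly has to declare that
// recovery is supported so completed maps survive an RM restart.
public class RecoverableTableOutputCommitter extends OutputCommitter {
  @Override public void setupJob(JobContext context) throws IOException { }
  @Override public void setupTask(TaskAttemptContext context) throws IOException { }
  @Override public boolean needsTaskCommit(TaskAttemptContext context) { return false; }
  @Override public void commitTask(TaskAttemptContext context) throws IOException { }
  @Override public void abortTask(TaskAttemptContext context) throws IOException { }

  // The two methods that matter for recovery:
  @Override public boolean isRecoverySupported() { return true; }
  @Override public void recoverTask(TaskAttemptContext context) throws IOException {
    // Nothing to recover -- output already went directly to the table.
  }
}
{code}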

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762724#comment-13762724
 ] 

Lars Hofhansl commented on HBASE-9485:
--

Good catch. Want this in 0.94 for sure.

> TableOutputCommitter should implement recovery if we don't want jobs to start 
> from 0 on RM restart
> --
>
> Key: HBASE-9485
> URL: https://issues.apache.org/jira/browse/HBASE-9485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9485-v1.txt
>
>
> HBase extends OutputCommitter in a way that turns recovery off, meaning all completed 
> maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
> implements recovery, so we should look at that to see what is potentially 
> needed for recovery.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9469) Synchronous replication

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762722#comment-13762722
 ] 

Lars Hofhansl commented on HBASE-9469:
--

I doubt we'd get reasonable performance from that. It could be an optional 
feature. Patches are welcome :)


> Synchronous replication
> ---
>
> Key: HBASE-9469
> URL: https://issues.apache.org/jira/browse/HBASE-9469
> Project: HBase
>  Issue Type: New Feature
>Reporter: Feng Honghua
>Priority: Minor
>
> Scenario: 
> A/B clusters with master-master replication: the client writes to the A cluster and A 
> pushes all writes to the B cluster, and when the A cluster is down, the client switches 
> to writing to the B cluster.
> But the client's write switch is unsafe because the replication between A/B is 
> asynchronous: a delete sent to the B cluster which aims to delete a put written 
> earlier can fail because that put was written to the A cluster and wasn't 
> successfully pushed to B before A went down. It can be worse: if this delete is 
> collected (a flush and then a major compaction occurs) before the A cluster is back up and 
> that put is eventually pushed to B, the put won't ever be deleted.
> Can we provide per-table/per-peer synchronous replication which ships the 
> corresponding hlog entry of a write before responding with write success to the client? With 
> this we can guarantee the client that all write requests for which it got a 
> success response when it wrote to the A cluster are already in the B 
> cluster as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9468) Previous active master can still serves RPC request when it is trying recovering expired zk session

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762721#comment-13762721
 ] 

stack commented on HBASE-9468:
--

Fail-fast and die-clean makes sense to me.  IIRC, the logic was added for the 
case of one Master only -- the most common config. on clusters.  At a minimum 
we should have a switch for this behavior.

> Previous active master can still serves RPC request when it is trying 
> recovering expired zk session
> ---
>
> Key: HBASE-9468
> URL: https://issues.apache.org/jira/browse/HBASE-9468
> Project: HBase
>  Issue Type: Bug
>Reporter: Feng Honghua
>
> When the active master's zk session expires, it'll try to recover the zk session, 
> but without turning off its RpcServer. What if a previous backup master has 
> already become the now-active master, and some client tries to send a request 
> to this expired master by using the cached master info? Any problem here?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler

2013-09-09 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762717#comment-13762717
 ] 

stack commented on HBASE-8884:
--

As is, it is hard to follow what is going on.

+ Do we have to have a thread local up in the parent server class as the means 
of keeping context across the calls; can this not be in the handler?  A context that it 
gives each Call?
+ Does CallRunner have to be a Runnable?  You see how elsewhere in hbase we use the 
notion of Callable?  It seems to overlap much of what you introduce here with the 
CallRunner.  Would be good to have it all align.

Agree more refactor would help.  Thanks Chao Shi.  I want to be able to add 
pooling of buffers across requests.  At the moment it is difficult figuring 
how/where to insert.  Thanks.
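
For readers following along, the seam being discussed is roughly of this shape (a hypothetical sketch; the actual RpcScheduler/CallRunner signatures in the patch may differ):

{code}
// Hypothetical sketch of a pluggable scheduling seam, for illustration only.
public interface RpcScheduler {
  void start();
  void stop();
  // Hand a queued call to the scheduler; it decides which pool/queue runs it.
  void dispatch(CallRunner task) throws InterruptedException;
}

// A unit of work wrapping one RPC call, so the scheduler does not need to know
// about connections, buffers, or monitoring internals.
interface CallRunner {
  void run();
}
{code}

The point of the shape above is that per-user or per-region isolation policies would live entirely behind dispatch(), outside RpcServer.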

> Pluggable RpcScheduler
> --
>
> Key: HBASE-8884
> URL: https://issues.apache.org/jira/browse/HBASE-8884
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0
>
> Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
> hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
> hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch
>
>
> Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
> isolated thread-pools based on their priority. In the current implementation, 
> all normal get/put requests are using the same pool. We'd like to add some 
> per-user or per-region level isolation, so that a misbehaved user/region will 
> not saturate the thread-pool and cause DoS to others easily. The idea is 
> similar to FairScheduler in MR. The current scheduling code is not standalone 
> and is mixed with others (Connection#processRequest). The issue is the first 
> step to extract it to an interface, so that people are free to write and test 
> their own implementations.
> This patch doesn't make it completely pluggable yet, as some parameters are 
> pass from constructor. This is because HMaster and HRegionServer both use 
> RpcServer and they have different thread-pool size config. Let me know if you 
> have a solution to this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2013-09-09 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762719#comment-13762719
 ] 

Lars Hofhansl commented on HBASE-5954:
--

Nope. Software RAID.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0
>
> Attachments: 5954-trunk-hdfs-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
> 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
> 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9489) Add cp hooks in online merge before and after setting PONR

2013-09-09 Thread rajeshbabu (JIRA)
rajeshbabu created HBASE-9489:
-

 Summary: Add cp hooks in online merge before and after setting PONR
 Key: HBASE-9489
 URL: https://issues.apache.org/jira/browse/HBASE-9489
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
Assignee: rajeshbabu


As we need to merge the index region along with the user region, we need hooks 
before and after setting the PONR in the region merge transition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler

2013-09-09 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762706#comment-13762706
 ] 

Chao Shi commented on HBASE-8884:
-

Hi stack,

Thanks for your review comments.

bq. Shouldn't MONITORED_RPC be in Handler rather than kept as a thread local in 
RpcServer? Handler could give it to the CallRunner rather than have it jump 
hoops to get at its stashed instance?

The reason is to separate the logic of a RpcScheduler (implementation-specific) 
from what RpcServer provides (common logic shared by all RpcScheduler 
implementations). Putting it into Handler is OK, but because Handler has its own 
thread, it may not be convenient to combine with, for example, a 
ThreadPoolExecutor. I'm open to better suggestions if you have any.

I think we can continue to do some refactoring work to get it clean and easy to 
understand.
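
For readers following the thread, a rough sketch of the seam being discussed: a 
scheduler interface the server hands calls to, with a default that behaves 
roughly like today's single shared pool. Names and signatures are illustrative 
only, not the committed API:

{code}
// Illustrative only; the real RpcScheduler API may differ.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

interface RpcScheduler {
  void start();
  void stop();
  void dispatch(Runnable callTask);   // hand off one RPC for execution
}

// Default: one shared pool, i.e. roughly today's behavior. A fair scheduler
// could instead pick a pool per user or per region inside dispatch().
class SimpleRpcScheduler implements RpcScheduler {
  private final ExecutorService pool;

  SimpleRpcScheduler(int handlerCount) {
    this.pool = Executors.newFixedThreadPool(handlerCount);
  }
  public void start() {}
  public void stop() { pool.shutdown(); }
  public void dispatch(Runnable callTask) { pool.execute(callTask); }
}
{code}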

> Pluggable RpcScheduler
> --
>
> Key: HBASE-8884
> URL: https://issues.apache.org/jira/browse/HBASE-8884
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC
>Reporter: Chao Shi
>Assignee: Chao Shi
> Fix For: 0.98.0
>
> Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
> hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
> hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch
>
>
> Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
> isolated thread-pools based on their priority. In the current implementation, 
> all normal get/put requests use the same pool. We'd like to add some 
> per-user or per-region level isolation, so that a misbehaving user/region will 
> not saturate the thread-pool and cause DoS to others easily. The idea is 
> similar to FairScheduler in MR. The current scheduling code is not standalone 
> and is mixed with other code (Connection#processRequest). This issue is the 
> first step: extract it to an interface, so that people are free to write and 
> test their own implementations.
> This patch doesn't make it completely pluggable yet, as some parameters are 
> passed from the constructor. This is because HMaster and HRegionServer both use 
> RpcServer and they have different thread-pool size configs. Let me know if you 
> have a solution to this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9476) Yet more master log cleanup

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762696#comment-13762696
 ] 

Hudson commented on HBASE-9476:
---

FAILURE: Integrated in HBase-TRUNK #4483 (See 
[https://builds.apache.org/job/HBase-TRUNK/4483/])
HBASE-9476 Yet more master log cleanup (stack: rev 1521315)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java


> Yet more master log cleanup
> ---
>
> Key: HBASE-9476
> URL: https://issues.apache.org/jira/browse/HBASE-9476
> Project: HBase
>  Issue Type: Bug
>  Components: Usability
>Reporter: stack
>Assignee: stack
> Fix For: 0.98.0, 0.96.0
>
> Attachments: edits.txt
>
>
> Even more cleanup, tightening, of log output (was staring at some over the 
> last day..)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9484) Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762692#comment-13762692
 ] 

Hadoop QA commented on HBASE-9484:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12602267/0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 43 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7109//console

This message is automatically generated.

> Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96
> --
>
> Key: HBASE-9484
> URL: https://issues.apache.org/jira/browse/HBASE-9484
> Project: HBase
>  Issue Type: Test
>  Components: mapreduce, test
>Reporter: Nick Dimiduk
> Attachments: 
> 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9456) Meta doesn't get assigned in a master failure scenario

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762690#comment-13762690
 ] 

Hadoop QA commented on HBASE-9456:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602223/9456-2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7108//console

This message is automatically generated.

> Meta doesn't get assigned in a master failure scenario
> --
>
> Key: HBASE-9456
> URL: https://issues.apache.org/jira/browse/HBASE-9456
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: 9456-1.txt, 9456-2.txt
>
>
> The flow:
> 1. Cluster is up, meta is assigned to some server
> 2. Master is killed
> 3. Master is brought up, it is initializing. It learns about the Meta server 
> (in assignMeta).
> 4. Server holding meta is killed
> 5. Meta never gets reassigned since the SSH wasn't enabled

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9469) Synchronous replication

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762686#comment-13762686
 ] 

Feng Honghua commented on HBASE-9469:
-

[~jdcryans] and [~lhofhansl]: Any plans for synchronous replication? This would 
be a really nice feature for applications requiring strict data 
safety/consistency across clusters.

> Synchronous replication
> ---
>
> Key: HBASE-9469
> URL: https://issues.apache.org/jira/browse/HBASE-9469
> Project: HBase
>  Issue Type: New Feature
>Reporter: Feng Honghua
>Priority: Minor
>
> Scenario: 
> A/B clusters with master-master replication, client writes to A cluster and A 
> pushes all writes to B cluster, and when A cluster is down, client switches 
> writing to B cluster.
> But the client's write switch is unsafe because the replication between A/B is 
> asynchronous: a delete to B cluster which aims to delete a put written 
> earlier can fail because that put was written to A cluster and wasn't 
> successfully pushed to B before A went down. It can be worse: if this delete 
> is collected (a flush and then a major compact occur) before A cluster is back 
> up and that put is eventually pushed to B, the put will never be deleted.
> Can we provide per-table/per-peer synchronous replication which ships the 
> corresponding hlog entry of a write before responding success to the client? 
> This way we can guarantee to the client that all write requests for which he 
> got a success response from A cluster are already in B 
> cluster as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9468) Previous active master can still serves RPC request when it is trying recovering expired zk session

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762684#comment-13762684
 ] 

Feng Honghua commented on HBASE-9468:
-

[~enis] I agree with you :-). Not sure whether there is any further concern 
about the recovery logic for the expired master.

> Previous active master can still serves RPC request when it is trying 
> recovering expired zk session
> ---
>
> Key: HBASE-9468
> URL: https://issues.apache.org/jira/browse/HBASE-9468
> Project: HBase
>  Issue Type: Bug
>Reporter: Feng Honghua
>
> When the active master's zk session expires, it'll try to recover zk session, 
> but without turning off its RpcServer. What if a previous backup master has 
> already become the now active master, and some client tries to send request 
> to this expired master by using the cached master info? Any problem here?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9468) Previous active master can still serves RPC request when it is trying recovering expired zk session

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762683#comment-13762683
 ] 

Feng Honghua commented on HBASE-9468:
-

Sounds like just failing fast for the expired master is a quick and safe fix 
for this issue. Any opinions?

> Previous active master can still serves RPC request when it is trying 
> recovering expired zk session
> ---
>
> Key: HBASE-9468
> URL: https://issues.apache.org/jira/browse/HBASE-9468
> Project: HBase
>  Issue Type: Bug
>Reporter: Feng Honghua
>
> When the active master's zk session expires, it'll try to recover zk session, 
> but without turning off its RpcServer. What if a previous backup master has 
> already become the now active master, and some client tries to send request 
> to this expired master by using the cached master info? Any problem here?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter

2013-09-09 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762681#comment-13762681
 ] 

ramkrishna.s.vasudevan commented on HBASE-9359:
---

So you mean this issue?
{code}
Am getting this now
[ERROR] 
/home/ram/ycsb/YCSB/hbase/src/main/java/com/yahoo/ycsb/db/HBaseClient.java:[181,26]
 incompatible types
[ERROR] found   : org.apache.hadoop.hbase.Cell
[ERROR] required: org.apache.hadoop.hbase.KeyValue
[ERROR] 
/home/ram/ycsb/YCSB/hbase/src/main/java/com/yahoo/ycsb/db/HBaseClient.java:[255,41]
 incompatible types
[ERROR] found   : org.apache.hadoop.hbase.Cell
[ERROR] required: org.apache.hadoop.hbase.KeyValue

{code}
This is what I am getting while trying to compile ycsb with the latest code.
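
For anyone hitting the same compile error, a hedged sketch of how client code 
typically changes after the KeyValue-to-Cell conversion; the accessor name 
(rawCells() vs. raw()) depends on the trunk revision and the rename discussed 
in HBASE-9477, and the method/variable names below are made up for illustration:

{code}
// Before (0.94 era): KeyValue[] kvs = result.raw();
// After (trunk), iterating Cells instead of KeyValues:
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Result;

public class CellMigrationExample {
  static void readValues(Result result) {
    for (Cell cell : result.rawCells()) {
      byte[] qualifier = CellUtil.cloneQualifier(cell);
      byte[] value = CellUtil.cloneValue(cell);
      // ... use qualifier/value exactly as the old KeyValue code did ...
    }
  }
}
{code}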

> Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, 
> ColumnInterpreter
> --
>
> Key: HBASE-9359
> URL: https://issues.apache.org/jira/browse/HBASE-9359
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, 
> hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, 
> hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch
>
>
> This path is the second half of eliminating KeyValue from the client 
> interfaces.  This percolated through quite a bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762680#comment-13762680
 ] 

Feng Honghua commented on HBASE-9467:
-

[~nkeywal] Can we provide a percentage config that says how big a sub-set of 
the handler threads any single region's requests can use? For any region we can 
hash its region name to a deterministic start index into the handler thread 
array, and the percentage config together with the handler thread count 
determines the size of the sub-array of handler threads that serves this 
region's requests. This way, at worst, any region can only saturate its own 
sub-set of handler threads without impacting all the handler threads, which 
somewhat mitigates the symptom.
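
A small sketch of the hashing idea above; the config plumbing and the wiring 
into the RPC layer are left out, and the names are made up:

{code}
// Illustrative only: map a region to a contiguous sub-range of the handler
// threads so a single write-heavy region can saturate at most that sub-range.
import java.util.Arrays;
import java.util.Random;

public class HandlerSubsetPicker {
  private static final Random RANDOM = new Random();

  static int pickHandler(byte[] regionName, int totalHandlers, double fraction) {
    int subsetSize = Math.max(1, (int) (totalHandlers * fraction));
    int start = (Arrays.hashCode(regionName) & 0x7fffffff) % totalHandlers;
    int offset = RANDOM.nextInt(subsetSize);   // spread within the region's sub-range
    return (start + offset) % totalHandlers;
  }
}
{code}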

> write can be totally blocked temporarily by a write-heavy region
> 
>
> Key: HBASE-9467
> URL: https://issues.apache.org/jira/browse/HBASE-9467
> Project: HBase
>  Issue Type: Improvement
>Reporter: Feng Honghua
>Priority: Minor
>
> Write to a region can be blocked temporarily if the memstore of that region 
> reaches the threshold(hbase.hregion.memstore.block.multiplier * 
> hbase.hregion.flush.size) until the memstore of that region is flushed.
> For a write-heavy region, if its write requests saturate all the handler 
> threads of that RS when write blocking for that region occurs, requests of 
> other regions/tables to that RS also can't be served due to no available 
> handler threads...until the pending writes of that write-heavy region are 
> served after the flush is done. Hence during this time period, from the RS 
> perspective it can't serve any request from any table/region just due to a 
> single write-heavy region.
> This sounds not very reasonable, right? Maybe write requests from a region 
> can only be served by a sub-set of the handler threads, and then write 
> blocking of any single region can't lead to the scenario mentioned above?
> Comment?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Description: 
Now, one scan operation would call 3 RPC at least:
openScanner();
next();
closeScanner();

I think we could reduce the RPC call to one for small scan to get better 
performance

Also using pread is better than seek+read for small scan (For this point, see 
more on HBASE-7266)


Implements such a small scan as the patch, and take the performance test as 
following:

a.Environment:
patched on 0.94 version
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment


*Usage:*
Scan scan = new Scan(startRow,stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute as true for scan, others are the same
 

  was:
Now, one scan operation would call 3 RPC at least:
openScanner();
next();
closeScanner();

I think we could reduce the RPC call to one for small scan to get better 
performance

Also using pread is better than seek+read for small scan (For this point, see 
more on HBASE-7266)


Implements such a small scan as the patch, and take the performance test as 
following:

a.Environment:
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment


*Usage:*
Scan scan = new Scan(startRow,stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute as true for scan, others are the same
 


> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, test results.jpg
>
>
> Now, one scan operation would call 3 RPC at least:
> openScanner();
> next();
> closeScanner();
> I think we could reduce the RPC call to one for small scan to get better 
> performance
> Also using pread is better than seek+read for small scan (For this point, see 
> more on HBASE-7266)
> Implements such a small scan as the patch, and take the performance test as 
> following:
> a.Environment:
> patched on 0.94 version
> one regionserver; 
> one client with 50 concurrent threads;
> KV size:50/100;
> 100% LRU cache hit ratio;
> Random start row of scan
> b.Results:
> See the picture attachment
> *Usage:*
> Scan scan = new Scan(startRow,stopRow);
> scan.setSmall(true);
> ResultScanner scanner = table.getScanner(scan);
> Set the new 'small' attribute as true for scan, others are the same
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Description: 
Now, one scan operation would call 3 RPC at least:
openScanner();
next();
closeScanner();

I think we could reduce the RPC call to one for small scan to get better 
performance

Also using pread is better than seek+read for small scan (For this point, see 
more on HBASE-7266)


Implements such a small scan as the patch, and take the performance test as 
following:

a.Environment:
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment


*Usage:*
Scan scan = new Scan(startRow,stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute as true for scan, others are the same
 

  was:
Now, one scan operation would call 3 RPC at least:
openScanner();
next();
closeScanner();

I think we could reduce the RPC call to one for small scan to get better 
performance

Also using pread is better than seek+read for small scan (For this point, see 
more on HBASE-7266)


Implements such a small scan as the patch, and take the performance test as 
following:

a.Environment:
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment
 


> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, test results.jpg
>
>
> Now, one scan operation would call 3 RPC at least:
> openScanner();
> next();
> closeScanner();
> I think we could reduce the RPC call to one for small scan to get better 
> performance
> Also using pread is better than seek+read for small scan (For this point, see 
> more on HBASE-7266)
> Implements such a small scan as the patch, and take the performance test as 
> following:
> a.Environment:
> one regionserver; 
> one client with 50 concurrent threads;
> KV size:50/100;
> 100% LRU cache hit ratio;
> Random start row of scan
> b.Results:
> See the picture attachment
> *Usage:*
> Scan scan = new Scan(startRow,stopRow);
> scan.setSmall(true);
> ResultScanner scanner = table.getScanner(scan);
> Set the new 'small' attribute as true for scan, others are the same
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9466) Read-only mode

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762675#comment-13762675
 ] 

Feng Honghua commented on HBASE-9466:
-

[~jdcryans] Yes, I'm proposing a 'read-only mode' (it can be per-table or 
per-cluster) rather than 'disabling the table'; the latter is pretty heavyweight 
in that it needs to offline all regions of the given table.
If we just want to temporarily disable 'update' to a table or the whole cluster 
and later want to enable 'update' again, disabling a table or all tables of the 
cluster seems a rather heavy choice.

Thanks for pointing me to the read-only interface of HTableDescriptor on trunk, 
but I don't see any code using it; how is this RO expected to work?
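
For comparison, a hedged sketch of flipping the existing table-level read-only 
flag through the admin API without offlining regions; whether the write path 
actually enforces the flag is exactly the open question above, and the Admin 
signatures used here may vary by version ('mytable' is just an example name):

{code}
// Sketch: toggle the table-level read-only flag instead of disabling the table.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadOnlyToggle {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    byte[] table = Bytes.toBytes("mytable");              // example table name
    HTableDescriptor htd = admin.getTableDescriptor(table);
    htd.setReadOnly(true);                                 // regions stay online
    admin.modifyTable(table, htd);
    admin.close();
  }
}
{code}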

> Read-only mode
> --
>
> Key: HBASE-9466
> URL: https://issues.apache.org/jira/browse/HBASE-9466
> Project: HBase
>  Issue Type: New Feature
>Reporter: Feng Honghua
>Priority: Minor
>
> Can we provide a read-only mode for a table? Writes to the table in read-only 
> mode will be rejected, but read-only mode is different from disable in that:
> 1. it doesn't offline the regions of the table(hence much more lightweight 
> than disable)
> 2. it can serve read requests
> Comments?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Attachment: test results.jpg
HBASE-9488-trunk.patch

> Improve performance for small scan
> --
>
> Key: HBASE-9488
> URL: https://issues.apache.org/jira/browse/HBASE-9488
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, Performance, Scanners
>Reporter: chunhui shen
>Assignee: chunhui shen
> Attachments: HBASE-9488-trunk.patch, test results.jpg
>
>
> Now, one scan operation would call 3 RPC at least:
> openScanner();
> next();
> closeScanner();
> I think we could reduce the RPC call to one for small scan to get better 
> performance
> Also using pread is better than seek+read for small scan (For this point, see 
> more on HBASE-7266)
> Implements such a small scan as the patch, and take the performance test as 
> following:
> a.Environment:
> one regionserver; 
> one client with 50 concurrent threads;
> KV size:50/100;
> 100% LRU cache hit ratio;
> Random start row of scan
> b.Results:
> See the picture attachment
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9488) Improve performance for small scan

2013-09-09 Thread chunhui shen (JIRA)
chunhui shen created HBASE-9488:
---

 Summary: Improve performance for small scan
 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, test results.jpg

Now, one scan operation would call 3 RPC at least:
openScanner();
next();
closeScanner();

I think we could reduce the RPC call to one for small scan to get better 
performance

Also using pread is better than seek+read for small scan (For this point, see 
more on HBASE-7266)


Implements such a small scan as the patch, and take the performance test as 
following:

a.Environment:
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment
 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9487) create_namespace with property value throws error

2013-09-09 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-9487:
-

Attachment: hbase-9487_v1.patch

One liner. 
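
For reference, a minimal sketch of the equivalent call through the Java client 
API, assuming the NamespaceDescriptor builder's addConfiguration (method names 
may differ slightly by version):

{code}
// Java-side equivalent of: create_namespace 'ns1', {'PROERTY_NAME'=>'PROPERTY_VALUE'}
// Sketch only; builder method names assumed, not taken from the attached patch.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateNamespaceExample {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    NamespaceDescriptor ns = NamespaceDescriptor.create("ns1")
        .addConfiguration("PROERTY_NAME", "PROPERTY_VALUE")
        .build();
    admin.createNamespace(ns);
    admin.close();
  }
}
{code}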

> create_namespace with property value throws error
> -
>
> Key: HBASE-9487
> URL: https://issues.apache.org/jira/browse/HBASE-9487
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.96.0
>
> Attachments: hbase-9487_v1.patch
>
>
> Creating a namespace with properties fails from shell: 
> {code}
> hbase(main):002:0> create_namespace 'ns1',{'PROERTY_NAME'=>'PROPERTY_VALUE'}
> ERROR: undefined method `addProperty' for 
> #
> Here is some help for this command:
> Create namespace; pass namespace name,
> and optionally a dictionary of namespace configuration.
> Examples:
> hbase> create_namespace 'ns1'
> hbase> create_namespace 'ns1', {'PROERTY_NAME'=>'PROPERTY_VALUE'}
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9487) create_namespace with property value throws error

2013-09-09 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-9487:


 Summary: create_namespace with property value throws error
 Key: HBASE-9487
 URL: https://issues.apache.org/jira/browse/HBASE-9487
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.96.0
 Attachments: hbase-9487_v1.patch

Creating a namespace with properties fails from shell: 
{code}
hbase(main):002:0> create_namespace 'ns1',{'PROERTY_NAME'=>'PROPERTY_VALUE'}
ERROR: undefined method `addProperty' for 
#

Here is some help for this command:
Create namespace; pass namespace name,
and optionally a dictionary of namespace configuration.
Examples:

hbase> create_namespace 'ns1'
hbase> create_namespace 'ns1', {'PROERTY_NAME'=>'PROPERTY_VALUE'}
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9445) Snapshots should create column family dirs for empty regions

2013-09-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762668#comment-13762668
 ] 

Enis Soztutar commented on HBASE-9445:
--

[~mbertozzi] I am going to commit this if you are ok with it. wdyt? 

> Snapshots should create column family dirs for empty regions
> 
>
> Key: HBASE-9445
> URL: https://issues.apache.org/jira/browse/HBASE-9445
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9445_v1.patch, hbase-9445_v2.patch
>
>
> Currently, taking a snapshot will not create the family directory under a 
> region if the family does not have any files in it. 
> Subsequent verification fails because of this. There is some logic in the 
> SnapshotTestingUtils.confirmSnapshotValid() to deal with empty family 
> directories, but I think we should create the family directories regardless 
> of whether there are any hfiles referencing them. 
> {code}
> 2013-09-05 11:07:21,566 DEBUG [Thread-208] util.FSUtils(1687): |-data/
> 2013-09-05 11:07:21,567 DEBUG [Thread-208] util.FSUtils(1687): |default/
> 2013-09-05 11:07:21,568 DEBUG [Thread-208] util.FSUtils(1687): |---test/
> 2013-09-05 11:07:21,569 DEBUG [Thread-208] util.FSUtils(1687): 
> |--.tabledesc/
> 2013-09-05 11:07:21,570 DEBUG [Thread-208] util.FSUtils(1690): 
> |-.tableinfo.01
> 2013-09-05 11:07:21,570 DEBUG [Thread-208] util.FSUtils(1687): 
> |--.tmp/
> 2013-09-05 11:07:21,571 DEBUG [Thread-208] util.FSUtils(1687): 
> |--accd6e55887057888de758df44dacda7/
> 2013-09-05 11:07:21,572 DEBUG [Thread-208] util.FSUtils(1690): 
> |-.regioninfo
> 2013-09-05 11:07:21,572 DEBUG [Thread-208] util.FSUtils(1687): 
> |-fam/
> 2013-09-05 11:07:21,555 DEBUG [Thread-208] util.FSUtils(1687): 
> |-.hbase-snapshot/
> 2013-09-05 11:07:21,556 DEBUG [Thread-208] util.FSUtils(1687): |.tmp/
> 2013-09-05 11:07:21,557 DEBUG [Thread-208] util.FSUtils(1687): 
> |offlineTableSnapshot/
> 2013-09-05 11:07:21,558 DEBUG [Thread-208] util.FSUtils(1690): 
> |---.snapshotinfo
> 2013-09-05 11:07:21,558 DEBUG [Thread-208] util.FSUtils(1687): 
> |---.tabledesc/
> 2013-09-05 11:07:21,558 DEBUG [Thread-208] util.FSUtils(1690): 
> |--.tableinfo.01
> 2013-09-05 11:07:21,559 DEBUG [Thread-208] util.FSUtils(1687): |---.tmp/
> 2013-09-05 11:07:21,559 DEBUG [Thread-208] util.FSUtils(1687): 
> |---accd6e55887057888de758df44dacda7/
> 2013-09-05 11:07:21,560 DEBUG [Thread-208] util.FSUtils(1690): 
> |--.regioninfo
> {code}
> I think this is important for 0.96.0. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9465) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762665#comment-13762665
 ] 

Feng Honghua commented on HBASE-9465:
-

[~lhofhansl]

For the RS failure scenario, can we delay the assignment of recovered regions 
until all the remaining hlog files of the failed RS are pushed to peer clusters 
(the hlog split can run in parallel with the hlog push though)? This way we can 
maintain the (global) serial push of a region's hlog entries even in the face 
of RS failure.

But for region-move it's harder to maintain a global serial push, since it's 
harder to determine that all the hlog entries of a given region have been 
pushed to peer clusters when the containing RS is healthy and continuously 
receiving write requests.

> HLog entries are not pushed to peer clusters serially when region-move or RS 
> failure in master cluster
> --
>
> Key: HBASE-9465
> URL: https://issues.apache.org/jira/browse/HBASE-9465
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Feng Honghua
>
> When region-move or RS failure occurs in master cluster, the hlog entries 
> that are not pushed before region-move or RS-failure will be pushed by 
> the original RS (for region move) or another RS which takes over the remaining 
> hlog of the dead RS (for RS failure), and the new entries for the same 
> region(s) will be pushed by the RS which now serves the region(s), but they 
> push the hlog entries of the same region concurrently without coordination.
> This treatment can possibly lead to data inconsistency between master and 
> peer clusters:
> 1. there are put and then delete written to master cluster
> 2. due to region-move / RS-failure, they are pushed by different 
> replication-source threads to peer cluster
> 3. if delete is pushed to peer cluster before put, and flush and 
> major-compact occurs in peer cluster before put is pushed to peer cluster, 
> the delete is collected and the put remains in peer cluster
> In this scenario, the put remains in peer cluster, but in master cluster the 
> put is masked by the delete, hence data inconsistency between master and peer 
> clusters

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9465) HLog entries are not pushed to peer clusters serially when region-move or RS failure in master cluster

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762658#comment-13762658
 ] 

Feng Honghua commented on HBASE-9465:
-

[~jdcryans] "I have a draft for a new piece of documentation that we could add 
to the ref guide that I should probably contribute" -- where can I read this 
documentation? thanks.

> HLog entries are not pushed to peer clusters serially when region-move or RS 
> failure in master cluster
> --
>
> Key: HBASE-9465
> URL: https://issues.apache.org/jira/browse/HBASE-9465
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Reporter: Feng Honghua
>
> When region-move or RS failure occurs in master cluster, the hlog entries 
> that are not pushed before region-move or RS-failure will be pushed by 
> the original RS (for region move) or another RS which takes over the remaining 
> hlog of the dead RS (for RS failure), and the new entries for the same 
> region(s) will be pushed by the RS which now serves the region(s), but they 
> push the hlog entries of the same region concurrently without coordination.
> This treatment can possibly lead to data inconsistency between master and 
> peer clusters:
> 1. there are put and then delete written to master cluster
> 2. due to region-move / RS-failure, they are pushed by different 
> replication-source threads to peer cluster
> 3. if delete is pushed to peer cluster before put, and flush and 
> major-compact occurs in peer cluster before put is pushed to peer cluster, 
> the delete is collected and the put remains in peer cluster
> In this scenario, the put remains in peer cluster, but in master cluster the 
> put is masked by the delete, hence data inconsistency between master and peer 
> clusters

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Attachment: 9485-v1.txt

Patch v1 makes TableOutputCommitter extend FileOutputCommitter
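
For context, a hedged sketch of the recovery contract on Hadoop's 
OutputCommitter; the attached patch takes a different route (extending 
FileOutputCommitter), this just shows the two hooks a recovery-capable 
committer has to answer:

{code}
// Sketch of the recovery contract on org.apache.hadoop.mapreduce.OutputCommitter.
// A committer that supports recovery lets the AM keep completed maps on restart.
import java.io.IOException;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class RecoverableNoopCommitter extends OutputCommitter {
  @Override public void setupJob(JobContext ctx) {}
  @Override public void setupTask(TaskAttemptContext ctx) {}
  @Override public boolean needsTaskCommit(TaskAttemptContext ctx) { return false; }
  @Override public void commitTask(TaskAttemptContext ctx) {}
  @Override public void abortTask(TaskAttemptContext ctx) {}

  // The recovery hooks: since writes go straight to HBase there is nothing to
  // move on recovery, so declaring support and doing nothing might be enough.
  @Override public boolean isRecoverySupported() { return true; }
  @Override public void recoverTask(TaskAttemptContext ctx) throws IOException {}
}
{code}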

> TableOutputCommitter should implement recovery if we don't want jobs to start 
> from 0 on RM restart
> --
>
> Key: HBASE-9485
> URL: https://issues.apache.org/jira/browse/HBASE-9485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9485-v1.txt
>
>
> HBase extends OutputCommitter which turns recovery off. Meaning all completed 
> maps are lost on RM restart and job starts from scratch. FileOutputCommitter 
> implements recovery so we should look at that to see what is potentially 
> needed for recovery.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Status: Patch Available  (was: Open)

> TableOutputCommitter should implement recovery if we don't want jobs to start 
> from 0 on RM restart
> --
>
> Key: HBASE-9485
> URL: https://issues.apache.org/jira/browse/HBASE-9485
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9485-v1.txt
>
>
> HBase extends OutputCommitter which turns recovery off. Meaning all completed 
> maps are lost on RM restart and job starts from scratch. FileOutputCommitter 
> implements recovery so we should look at that to see what is potentially 
> needed for recovery.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9486) NPE in HTable.close() with AsyncProcess

2013-09-09 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-9486:


 Summary: NPE in HTable.close() with AsyncProcess
 Key: HBASE-9486
 URL: https://issues.apache.org/jira/browse/HBASE-9486
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: Enis Soztutar
 Fix For: 0.96.0


When running 
{code}
hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey 
slowDeterministic
{code}
One task failed with the following stack trace:
{code}
2013-09-10 01:56:03,115 WARN [htable-pool1-t134] 
org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 3 ops on 
server02,60020,1378776046122 NOT 
resubmitting.region=IntegrationTestBigLinkedList,\xA6\x10\x9C\x85,1378776439065.766ab62aa30fa94c9014f09738698922.,
 hostname=server02,60020,1378776046122, seqNum=16146143
2013-09-10 01:56:03,115 WARN [htable-pool1-t119] 
org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 6 ops on 
server02,60020,1378775896233 NOT 
resubmitting.region=IntegrationTestBigLinkedList,\x9D\x95>\xDB\xCB\xD5\xE2\xAD\x7F\xCB\x1D\xBCN~\xF2U,1378774537592.b2534e273feecba91db43496efa1cd12.,
 hostname=server02,60020,1378775896233, seqNum=14890994
2013-09-10 01:56:03,655 WARN [htable-pool1-t119] 
org.apache.hadoop.hbase.client.AsyncProcess: Attempt #35/35 failed for 9 ops on 
server01,60020,1378775896233 NOT 
resubmitting.region=IntegrationTestBigLinkedList,\xB8\x0B.\x8C\x12Px\x88>\x10\xA4\x07\x9FJ\x97\xD0,1378775167749.7c0f1c17bc5f02e41e02939187304976.,
 hostname=server01,60020,1378775896233, seqNum=15863492
2013-09-10 01:56:03,818 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.NullPointerException
at 
org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:289)
at 
org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:234)
at 
org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:894)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1275)
at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1313)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.cleanup(IntegrationTestBigLinkedList.java:352)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:148)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
{code}

Seems worth investigating. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-09 Thread Ted Yu (JIRA)
Ted Yu created HBASE-9485:
-

 Summary: TableOutputCommitter should implement recovery if we 
don't want jobs to start from 0 on RM restart
 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


HBase extends OutputCommitter which turns recovery off. Meaning all completed 
maps are lost on RM restart and job starts from scratch. FileOutputCommitter 
implements recovery so we should look at that to see what is potentially needed 
for recovery.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9484) Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96

2013-09-09 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9484:


Status: Patch Available  (was: Open)

> Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96
> --
>
> Key: HBASE-9484
> URL: https://issues.apache.org/jira/browse/HBASE-9484
> Project: HBase
>  Issue Type: Test
>  Components: mapreduce, test
>Reporter: Nick Dimiduk
> Attachments: 
> 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9484) Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96

2013-09-09 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9484:


Attachment: 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch

patch for 0.96. Includes addendum.

> Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96
> --
>
> Key: HBASE-9484
> URL: https://issues.apache.org/jira/browse/HBASE-9484
> Project: HBase
>  Issue Type: Test
>  Components: mapreduce, test
>Reporter: Nick Dimiduk
> Attachments: 
> 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5954) Allow proper fsync support for HBase

2013-09-09 Thread haosdent (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762652#comment-13762652
 ] 

haosdent commented on HBASE-5954:
-

[~lhofhansl] Haha, I have tested hsync() on RAID10 before. An hsync() call 
would take about 4ms. Because the data is written to the RAID card cache, it is 
very fast.

> Allow proper fsync support for HBase
> 
>
> Key: HBASE-5954
> URL: https://issues.apache.org/jira/browse/HBASE-5954
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 0.98.0
>
> Attachments: 5954-trunk-hdfs-trunk.txt, 5954-trunk-hdfs-trunk-v2.txt, 
> 5954-trunk-hdfs-trunk-v3.txt, 5954-trunk-hdfs-trunk-v4.txt, 
> 5954-trunk-hdfs-trunk-v5.txt, 5954-trunk-hdfs-trunk-v6.txt, hbase-hdfs-744.txt
>
>
> At least get recommendation into 0.96 doc and some numbers running w/ this 
> hdfs feature enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9484) Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.96

2013-09-09 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-9484:
---

 Summary: Backport 8534 "Fix coverage for 
org.apache.hadoop.hbase.mapreduce" to 0.96
 Key: HBASE-9484
 URL: https://issues.apache.org/jira/browse/HBASE-9484
 Project: HBase
  Issue Type: Test
  Components: mapreduce, test
Reporter: Nick Dimiduk




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9459) Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.94

2013-09-09 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9459:


Summary: Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" 
to 0.94  (was: Backport 8534 "Fix coverage for 
org.apache.hadoop.hbase.mapreduce")

> Backport 8534 "Fix coverage for org.apache.hadoop.hbase.mapreduce" to 0.94
> --
>
> Key: HBASE-9459
> URL: https://issues.apache.org/jira/browse/HBASE-9459
> Project: HBase
>  Issue Type: Test
>  Components: mapreduce, test
>Reporter: Nick Dimiduk
>Assignee: Aleksey Gorshkov
> Fix For: 0.94.13
>
>
> Do you want this test update backported? See HBASE-8534 for a 0.94 patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8534) Fix coverage for org.apache.hadoop.hbase.mapreduce

2013-09-09 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762651#comment-13762651
 ] 

Nick Dimiduk commented on HBASE-8534:
-

Created HBASE-9484 for 0.96 backport.

> Fix coverage for org.apache.hadoop.hbase.mapreduce
> --
>
> Key: HBASE-8534
> URL: https://issues.apache.org/jira/browse/HBASE-8534
> Project: HBase
>  Issue Type: Test
>  Components: mapreduce, test
>Affects Versions: 0.94.8, 0.95.2
>Reporter: Aleksey Gorshkov
>Assignee: Aleksey Gorshkov
> Fix For: 0.98.0
>
> Attachments: 0001-HBASE-8534-hadoop2-addendum.patch, 
> 8534-trunk-h.patch, HBASE-8534-0.94-d.patch, HBASE-8534-0.94-e.patch, 
> HBASE-8534-0.94-f.patch, HBASE-8534-0.94-g.patch, HBASE-8534-0.94.patch, 
> HBASE-8534-trunk-a.patch, HBASE-8534-trunk-b.patch, HBASE-8534-trunk-c.patch, 
> HBASE-8534-trunk-d.patch, HBASE-8534-trunk-e.patch, HBASE-8534-trunk-f.patch, 
> HBASE-8534-trunk-g.patch, HBASE-8534-trunk.patch
>
>
> fix coverage org.apache.hadoop.hbase.mapreduce
> patch HBASE-8534-0.94.patch for branch-0.94
> patch HBASE-8534-trunk.patch for branch-0.95 and trunk

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9245) Remove dead or deprecated code from hbase 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762647#comment-13762647
 ] 

Sergey Shelukhin commented on HBASE-9245:
-

Should there be a sub-JIRA to brainstorm how people would build against both 94 
and 96? Would projects like Hive, Mahout, etc. need an HBase shim similar to 
Hadoop's?

> Remove dead or deprecated code from hbase 0.96
> --
>
> Key: HBASE-9245
> URL: https://issues.apache.org/jira/browse/HBASE-9245
> Project: HBase
>  Issue Type: Bug
>Reporter: Jonathan Hsieh
>
> This is an umbrella issue that will cover the removal or refactoring of 
> dangling dead code and cruft.  Some can make it into 0.96, some may have to 
> wait for an 0.98.  The "great culling" of code will be grouped patches that 
> are logically related.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762644#comment-13762644
 ] 

Sergey Shelukhin commented on HBASE-9477:
-

We do, but other people don't enforce them. They are not aimed at us but at 
other components, and we cannot force them to look or to use our tools :)

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> Keyvalue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> version to something like #listCells and #rawCells and revert #raw and #list 
> to their old signature to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9464) master failure during region-move can result in the region moved to a different RS rather than the destination one user specified

2013-09-09 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762642#comment-13762642
 ] 

Feng Honghua commented on HBASE-9464:
-

From the perspective of the user who issues the region-move request, the region 
is moved to a different RS from the one he specified, even though the 
destination RS he specified is healthy.

The root cause is that the RegionPlan containing the destination RS info is 
kept only in the master's memory without persistence, so the new active master 
doesn't know this info when it takes over the active master role.

> master failure during region-move can result in the region moved to a 
> different RS rather than the destination one user specified
> -
>
> Key: HBASE-9464
> URL: https://issues.apache.org/jira/browse/HBASE-9464
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Feng Honghua
>Priority: Minor
>
> 1. user issues region-move by specifying a destination RS
> 2. master finishes offlining the region
> 3. master fails before assigning it to the specified destination RS
> 4. new master assigns the region to a random RS since it doesn't have 
> destination RS info

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762643#comment-13762643
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

bq. They are non-enforceable in a sense that they don't prevent you from using 
something, and aren't even discoverable when you are adding an import in 
Eclipse or vim... 

That's why we review code before committing -- we enforce this.  Also, since it 
is a java annotation we can write tools to enforce this.  See the patch in 
HBASE-8277 for a good start.
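
As a rough illustration of the kind of tool that annotation enables (not the HBASE-8277 
patch itself; the lookup below is deliberately done by simple name, and it assumes the 
audience annotations have runtime retention):

{code}
import java.lang.annotation.Annotation;

// Toy audit: for each class name passed on the command line, report whether it
// carries a "Public" audience marker. A real checker (like the one started in
// HBASE-8277) would scan class files / bytecode rather than loaded classes.
public class AudienceCheck {
  static boolean isMarkedPublic(Class<?> clazz) {
    for (Annotation a : clazz.getAnnotations()) {
      if (a.annotationType().getSimpleName().equals("Public")) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) throws Exception {
    for (String className : args) {
      Class<?> c = Class.forName(className);
      System.out.println(className + " -> public audience: " + isMarkedPublic(c));
    }
  }
}
{code}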



> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762640#comment-13762640
 ] 

Hadoop QA commented on HBASE-8930:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602247/HBASE-8930-rev6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.replication.TestReplicationChangingPeerRegionservers
  org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7107//console

This message is automatically generated.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row with a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColu

[jira] [Commented] (HBASE-9468) Previous active master can still serves RPC request when it is trying recovering expired zk session

2013-09-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762634#comment-13762634
 ] 

Enis Soztutar commented on HBASE-9468:
--

I think we should remove the recovery logic in the master for a zk session 
expiration. Failing the master and letting the backup masters take over, and/or 
having the admin or a supervisor restart it, is simpler and more bulletproof. The 
masterRecovery parameter in HMaster#finishInitialization() complicates things. 

> Previous active master can still serves RPC request when it is trying 
> recovering expired zk session
> ---
>
> Key: HBASE-9468
> URL: https://issues.apache.org/jira/browse/HBASE-9468
> Project: HBase
>  Issue Type: Bug
>Reporter: Feng Honghua
>
> When the active master's zk session expires, it'll try to recover the zk session 
> without turning off its RpcServer. What if a previous backup master has 
> already become the active master, and some client tries to send requests 
> to this expired master using the cached master info? Any problem here?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762638#comment-13762638
 ] 

Sergey Shelukhin commented on HBASE-9477:
-

They are non-enforceable in the sense that they don't prevent you from using 
something, and aren't even discoverable when you are adding an import in 
Eclipse or vim... Then, problems only arise when things are broken by a new HBase 
version after having worked for a while, at which point saying "Well, we told 
you so! [with annotations]" is not very productive. I'd say making non-public 
things non-public is a better option. That way, the only way to get at them is 
to use tricks explicitly, and I'm ok with breaking that :)
Anyhow, it's out of the scope of this jira.

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762627#comment-13762627
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

bq. Btw, I don't think annotations are that helpful, people don't really read 
them, if it's public it's public seems to be the prevalent mode of thinking 

That is exactly why we need to cull these and enforce this going forward. 
There are many applications and platforms being built on top of HBase these days; 
fixing names/interfaces/interface visibility today, though painful, will suck 
less than waiting to do it next time around.

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762624#comment-13762624
 ] 

Sergey Shelukhin commented on HBASE-9477:
-

bq.  broken parts with for apis that didn't make it to the Cell interface. 
Yeah, that sucks...

Btw, I don't think annotations are that helpful; people don't really read them. 
"If it's public, it's public" seems to be the prevalent mode of thinking :)

raw/list breaks more stuff for Hive, in places where it actually reads KVs and 
writes KVs out in the HBase handler.
I figure we are OK for 0.96 at least, because the backward-compat methods will cast 
to KV, so there'd be no perf penalty.
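
A minimal sketch of the casting behaviour being described, assuming a 0.96-style Cell[] 
plus a helper like KeyValueUtil.ensureKeyValue (this is an illustration, not the attached 
hbase-9477.patch):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;

// Illustrative shim: expose the old KeyValue[] shape on top of Cell[] storage.
// When the cells really are KeyValues (the common case in 0.96), ensureKeyValue
// is effectively a cast, so no copy is made and there is no perf penalty.
final class ResultCompatSketch {
  static KeyValue[] rawShim(Cell[] cells) {
    if (cells == null) {
      return null;
    }
    KeyValue[] kvs = new KeyValue[cells.length];
    for (int i = 0; i < cells.length; i++) {
      kvs[i] = KeyValueUtil.ensureKeyValue(cells[i]);
    }
    return kvs;
  }
}
{code}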

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762625#comment-13762625
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

Also, Cell's getValueArray, getFamilyArray, getRowArray, and getQualifierArray 
would be better named if they were called get*Base or get*Pointer.

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762623#comment-13762623
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

I'm actually thinking about removing listCells() and renaming raw() to 
getCells().  I think this is better for naming (getters clearly just get a 
reference), and the listCells method is a convenience method that would have to 
be supported forevermore.

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762621#comment-13762621
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

This is the time to make these kinds of calls.  

bq. Rename Cell interface to KeyValue with current Cell interface + "old" 
KeyValue methods that are not part of Cell, the latter being deprecated, and 
being implemented by KV.
That way people's code will work without rebuilding vs both 94 and 96, and they 
won't need shims.

This is possibly true with a recompile, but there will still be broken parts 
for apis that didn't make it to the Cell interface.  A recompile is 
necessary because we are converting a class to an interface (there was a 
similar problem with one of the hadoop2 job classes).

I do think that changing Cell to fit in where KeyValue used to be would be confusing 
and opaque.  Something similar was done when hbase went onto hadoop2/mr2, and at 
least by having new names it is easier to tell where we are and where we were.  

Prior to 0.95/0.96 we didn't have clear markers on what was 
InterfaceAudience.Public and InterfaceStability.Evolving. Before we release 
0.96 I'd like to go through common and client with a fine-tooth comb and 
@deprecate / privatize more to make future upgrades simpler.

bq. We should see whether this will cover the changes required for flume, ycsb 
and hive.

I've looked at Flume, Hive, and Impala, and will look at YCSB (which one are you 
guys using? link?). 
* Flume is Put-centric, and seems unaffected by these changes since they only 
really affect Get Results.  
* [~brocknoland] in Hive-land has a patch that already takes into account the 
current hbase changes.
* Impala needs to be updated to use the more public api (it goes under the 
covers one layer too deep).

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9456) Meta doesn't get assigned in a master failure scenario

2013-09-09 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9456:
---

Status: Patch Available  (was: Open)

Thanks for reviewing. Passing the patch through hadoopqa.

> Meta doesn't get assigned in a master failure scenario
> --
>
> Key: HBASE-9456
> URL: https://issues.apache.org/jira/browse/HBASE-9456
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: 9456-1.txt, 9456-2.txt
>
>
> The flow:
> 1. Cluster is up, meta is assigned to some server
> 2. Master is killed
> 3. Master is brought up, it is initializing. It learns about the Meta server 
> (in assignMeta).
> 4. Server holding meta is killed
> 5. Meta never gets reassigned since the SSH wasn't enabled

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9456) Meta doesn't get assigned in a master failure scenario

2013-09-09 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762614#comment-13762614
 ] 

Jimmy Xiang commented on HBASE-9456:


+1.  I can clean up the related part a little in HBASE-9457.

> Meta doesn't get assigned in a master failure scenario
> --
>
> Key: HBASE-9456
> URL: https://issues.apache.org/jira/browse/HBASE-9456
> Project: HBase
>  Issue Type: Bug
>Reporter: Devaraj Das
>Priority: Critical
> Fix For: 0.96.0
>
> Attachments: 9456-1.txt, 9456-2.txt
>
>
> The flow:
> 1. Cluster is up, meta is assigned to some server
> 2. Master is killed
> 3. Master is brought up, it is initializing. It learns about the Meta server 
> (in assignMeta).
> 4. Server holding meta is killed
> 5. Meta never gets reassigned since the SSH wasn't enabled

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762607#comment-13762607
 ] 

Sergey Shelukhin commented on HBASE-9477:
-

[~stack] what do you think about the latest comment

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762605#comment-13762605
 ] 

Sergey Shelukhin commented on HBASE-9477:
-

I have an interesting idea, but it's probably unfeasible at this point.
Rename the old KeyValue to something else in 95/96.
Rename the Cell interface to KeyValue, with the current Cell interface + the "old" 
KeyValue methods that are not part of Cell, the latter being deprecated and 
implemented by KV.
That way people's code will work without rebuilding against both 94 and 96, and 
they won't need shims.
When we add new implementations of ex-Cell-now-KeyValue in 0.98-1.0-..., we'd 
need to either make horribly inefficient implementations, or remove the 
methods, or make them throw, so people would need to get rid of them or drop 
0.94 support.





> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762606#comment-13762606
 ] 

Enis Soztutar commented on HBASE-9477:
--

bq. The deprecated methods would only be applied to 0.96 (not 0.98)
If 0.98 will come just after 0.96, I propose we also add this patch to 
0.98 and remove the methods in the release after 0.98. 
I think this is great. We should see whether this will cover the changes 
required for flume, ycsb and hive.

> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-09 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762598#comment-13762598
 ] 

Sergey Shelukhin commented on HBASE-9477:
-

Patch looks good. list() doesn't have to do two steps (convert to an array and then 
wrap it as a list), but I guess it should be OK.
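
For reference, a sketch of the one-step shape being suggested, wrapping the existing 
array instead of converting and then copying (illustrative only, not the patch's 
actual list() body):

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;

// Illustrative only: Arrays.asList wraps the backing array in place,
// so no second per-element conversion step is needed.
final class ListShimSketch {
  static List<KeyValue> asList(KeyValue[] kvs) {
    return kvs == null ? null : Arrays.asList(kvs);
  }
}
{code}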


> Add deprecation compat shim for Result#raw and Result#list for 0.96
> ---
>
> Key: HBASE-9477
> URL: https://issues.apache.org/jira/browse/HBASE-9477
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9477.patch
>
>
> Discussion in HBASE-9359 brought up that applications commonly use the 
> KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
> versions to something like #listCells and #rawCells and revert #raw and #list 
> to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762580#comment-13762580
 ] 

Hadoop QA commented on HBASE-9481:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602237/hbase-9481.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7106//console

This message is automatically generated.

> Servershutdown handler get aborted with ConcurrentModificationException
> ---
>
> Key: HBASE-9481
> URL: https://issues.apache.org/jira/browse/HBASE-9481
> Project: HBase
>  Issue Type: Bug
>  Components: MTTR
>Affects Versions: 0.96.0
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9481.patch
>
>
> In integration tests, we found SSH got aborted with following stack trace:
> {code}
> 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
> processing event M_SERVER_SHUTDOWN
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
> at java.util.HashMap$ValueIterator.next(HashMap.java:822)
> at 
> org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
> at 
> org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
> at 
> org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> {code}
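
For context, a small standalone reproduction of the failure mode in the stack trace 
above (plain JDK code, not HBase): mutating a HashMap while iterating its values throws 
ConcurrentModificationException, and iterating over a snapshot of the keys is one common fix.

{code}
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
  public static void main(String[] args) {
    Map<String, String> regions = new HashMap<String, String>();
    regions.put("r1", "serverA");
    regions.put("r2", "serverA");
    regions.put("r3", "serverB");

    try {
      // Fails: structural modification while a values() iterator is live.
      for (String server : regions.values()) {
        if (server.equals("serverA")) {
          regions.remove("r1");
        }
      }
    } catch (ConcurrentModificationException e) {
      System.out.println("CME as expected: " + e);
    }

    // Safe: iterate over a snapshot of the keys, mutate the real map freely.
    for (String key : new ArrayList<String>(regions.keySet())) {
      if ("serverA".equals(regions.get(key))) {
        regions.remove(key);
      }
    }
    System.out.println("remaining: " + regions);
  }
}
{code}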

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762574#comment-13762574
 ] 

Hudson commented on HBASE-8930:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-8930 REVERT due to test issues (larsh: rev 1521217)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java


> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Fix For: 0.98.0, 0.94.13, 0.96.1
>
> Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
> 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
> 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
> 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
> 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
> HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
> HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row with a subset of those columns - use a filter to print the KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new Qualif

[jira] [Commented] (HBASE-9453) make dev-support/generate-hadoopX-poms.sh have exec perms.

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762558#comment-13762558
 ] 

Hudson commented on HBASE-9453:
---

SUCCESS: Integrated in HBase-TRUNK #4482 (See 
[https://builds.apache.org/job/HBase-TRUNK/4482/])
HBASE-9453 make dev-support/generate-hadoopX-poms.sh have exec perms (jmhsieh: 
rev 1521285)
* /hbase/trunk/dev-support/generate-hadoopX-poms.sh


> make dev-support/generate-hadoopX-poms.sh have exec perms.
> --
>
> Key: HBASE-9453
> URL: https://issues.apache.org/jira/browse/HBASE-9453
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9453.patch
>
>
> currently: 
> {code}
> jon@swoop:~/proj/hbase-trunk$ ls -la dev-support/generate-hadoopX-poms.sh 
> -rw-r--r-- 1 jon jon 5216 2013-09-06 10:45 
> dev-support/generate-hadoopX-poms.sh
> {code}
> after patch:
> {code}
> jon@swoop:~/proj/hbase-trunk$ ls -la dev-support/generate-hadoopX-poms.sh 
> -rwxr-xr-x 1 jon jon 5216 2013-08-07 18:05 
> dev-support/generate-hadoopX-poms.sh
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9449) document how to use shell enhancements from HBASE-5548

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762562#comment-13762562
 ] 

Hudson commented on HBASE-9449:
---

SUCCESS: Integrated in HBase-TRUNK #4482 (See 
[https://builds.apache.org/job/HBase-TRUNK/4482/])
HBASE-9449 document how to use shell enhancements from HBASE-5548 (jmhsieh: rev 
1521288)
* /hbase/trunk/src/main/docbkx/shell.xml


> document how to use shell enhancements from HBASE-5548
> --
>
> Key: HBASE-9449
> URL: https://issues.apache.org/jira/browse/HBASE-9449
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9449.patch, shell.html
>
>
> HBASE-5548 introduced new behavior for shell commands like 'list' to make 
> them act more ruby-like.  There is no documentation for this in the refguide. 
> We should 
> 1) have an example in the shell section
> 2) document that the new '=> #xxx.x..' line is expected
> We can probably lift a bunch of docs from HBASE-5548.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5548) Add ability to get a table in the shell

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762560#comment-13762560
 ] 

Hudson commented on HBASE-5548:
---

SUCCESS: Integrated in HBase-TRUNK #4482 (See 
[https://builds.apache.org/job/HBase-TRUNK/4482/])
HBASE-9449 document how to use shell enhancements from HBASE-5548 (jmhsieh: rev 
1521288)
* /hbase/trunk/src/main/docbkx/shell.xml


> Add ability to get a table in the shell
> ---
>
> Key: HBASE-5548
> URL: https://issues.apache.org/jira/browse/HBASE-5548
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.95.0
>
> Attachments: ruby_HBASE-5528-v0.patch, 
> ruby_HBASE-5548-addendum.patch, ruby_HBASE-5548-v1.patch, 
> ruby_HBASE-5548-v2.patch, ruby_HBASE-5548-v3.patch, ruby_HBASE-5548-v5.patch
>
>
> Currently, all the commands that operate on a table in the shell first have 
> to take the table name as input. 
> There are two main considerations:
> * It is annoying to have to write the table name every time, when you should 
> just be able to get a reference to a table
> * the current implementation is very wasteful - it creates a new HTable for 
> each call (but reuses the connection since it uses the same configuration)
> We should be able to get a handle to a single HTable and then operate on that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9453) make dev-support/generate-hadoopX-poms.sh have exec perms.

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762568#comment-13762568
 ] 

Hudson commented on HBASE-9453:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9453 make dev-support/generate-hadoopX-poms.sh have exec perms (jmhsieh: 
rev 1521285)
* /hbase/trunk/dev-support/generate-hadoopX-poms.sh


> make dev-support/generate-hadoopX-poms.sh have exec perms.
> --
>
> Key: HBASE-9453
> URL: https://issues.apache.org/jira/browse/HBASE-9453
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Minor
> Fix For: 0.98.0, 0.96.1
>
> Attachments: hbase-9453.patch
>
>
> currently: 
> {code}
> jon@swoop:~/proj/hbase-trunk$ ls -la dev-support/generate-hadoopX-poms.sh 
> -rw-r--r-- 1 jon jon 5216 2013-09-06 10:45 
> dev-support/generate-hadoopX-poms.sh
> {code}
> after patch:
> {code}
> jon@swoop:~/proj/hbase-trunk$ ls -la dev-support/generate-hadoopX-poms.sh 
> -rwxr-xr-x 1 jon jon 5216 2013-08-07 18:05 
> dev-support/generate-hadoopX-poms.sh
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9449) document how to use shell enhancements from HBASE-5548

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762573#comment-13762573
 ] 

Hudson commented on HBASE-9449:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9449 document how to use shell enhancements from HBASE-5548 (jmhsieh: rev 
1521288)
* /hbase/trunk/src/main/docbkx/shell.xml


> document how to use shell enhancements from HBASE-5548
> --
>
> Key: HBASE-9449
> URL: https://issues.apache.org/jira/browse/HBASE-9449
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9449.patch, shell.html
>
>
> HBASE-5548 introduced new behavior for shell commands like 'list' to make 
> them act more ruby-like.  There is no documentation for this in the refguide. 
> We should 
> 1) have an example in the shell section
> 2) document that the new '=> #xxx.x..' line is expected
> We can probably lift a bunch of docs from HBASE-5548.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-5548) Add ability to get a table in the shell

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762571#comment-13762571
 ] 

Hudson commented on HBASE-5548:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9449 document how to use shell enhancements from HBASE-5548 (jmhsieh: rev 
1521288)
* /hbase/trunk/src/main/docbkx/shell.xml


> Add ability to get a table in the shell
> ---
>
> Key: HBASE-5548
> URL: https://issues.apache.org/jira/browse/HBASE-5548
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 0.95.0
>
> Attachments: ruby_HBASE-5528-v0.patch, 
> ruby_HBASE-5548-addendum.patch, ruby_HBASE-5548-v1.patch, 
> ruby_HBASE-5548-v2.patch, ruby_HBASE-5548-v3.patch, ruby_HBASE-5548-v5.patch
>
>
> Currently, all the commands that operate on a table in the shell first have 
> to take the table name as input. 
> There are two main considerations:
> * It is annoying to have to write the table name every time, when you should 
> just be able to get a reference to a table
> * the current implementation is very wasteful - it creates a new HTable for 
> each call (but reuses the connection since it uses the same configuration)
> We should be able to get a handle to a single HTable and then operate on that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9436) hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762566#comment-13762566
 ] 

Hudson commented on HBASE-9436:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9436 hbase.regionserver.handler.count default (nkeywal: rev 1521166)
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* /hbase/trunk/hbase-server/src/test/resources/hbase-site.xml
* /hbase/trunk/src/main/docbkx/configuration.xml


> hbase.regionserver.handler.count default: 5, 10, 25, 30? pick one
> -
>
> Key: HBASE-9436
> URL: https://issues.apache.org/jira/browse/HBASE-9436
> Project: HBase
>  Issue Type: Bug
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9436.v1.patch, 9436.v2.patch
>
>
> Below is what we have today.
> I vote for 10.
> configuration.xml
> The default of 10 is rather low
> common/hbase-site
> hbase.regionserver.handler.count
> 30
> server/hbase-site
> 5
> Count of RPC Server instances spun up on RegionServers
> Same property is used by the HMaster for count of master handlers.
> Default is 10. <===
> HMaster.java
> int numHandlers = conf.getInt("hbase.master.handler.count",
>   conf.getInt("hbase.regionserver.handler.count", 25));
> HRegionServer.java
> hbase.regionserver.handler.count: 
> conf.getInt("hbase.regionserver.handler.count", 10),

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9478) Make Cell @interfaceAudience.public and evolving.

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762561#comment-13762561
 ] 

Hudson commented on HBASE-9478:
---

SUCCESS: Integrated in HBase-TRUNK #4482 (See 
[https://builds.apache.org/job/HBase-TRUNK/4482/])
HBASE-9478 Make Cell @InterfaceAudience.Public and Evolving (jmhsieh: rev 
1521305)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Cell.java


> Make Cell @interfaceAudience.public and evolving.
> -
>
> Key: HBASE-9478
> URL: https://issues.apache.org/jira/browse/HBASE-9478
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9478.patch
>
>
> From discussion in HBASE-9359, KeyValue was made @InterfaceAudience.Private.  
> Cell was not made @InterfaceAudience.Public.  Fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9301) Default hbase.dynamic.jars.dir to hbase.rootdir/jars

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762567#comment-13762567
 ] 

Hudson commented on HBASE-9301:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9301 Default hbase.dynamic.jars.dir to hbase.rootdir/jars (Vasu Mariyala) 
(larsh: rev 1521227)
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java


> Default hbase.dynamic.jars.dir to hbase.rootdir/jars
> 
>
> Key: HBASE-9301
> URL: https://issues.apache.org/jira/browse/HBASE-9301
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2, 0.94.12, 0.96.0
>Reporter: James Taylor
>Assignee: Vasu Mariyala
> Fix For: 0.98.0, 0.94.12, 0.96.1
>
> Attachments: 0.94-HBASE-9301.patch, 0.94-HBASE-9301-rev1.patch, 
> HBASE-9301.patch, HBASE-9301-rev1.patch, HBASE-9301-rev2.patch
>
>
> A reasonable default for hbase.dynamic.jars.dir would be hbase.rootdir/jars 
> so that folks aren't forced to edit their hbase-site.xml to take advantage 
> of the new, cool feature to load coprocessor/custom filter jars out of HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9475) Fix pom warnings found by new m2eclipse

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762569#comment-13762569
 ] 

Hudson commented on HBASE-9475:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9475 Fix pom warnings found by new m2eclipse (stack: rev 1521255)
* /hbase/trunk/hbase-common/pom.xml
* /hbase/trunk/hbase-server/pom.xml


> Fix pom warnings found by new m2eclipse
> ---
>
> Key: HBASE-9475
> URL: https://issues.apache.org/jira/browse/HBASE-9475
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: stack
>Assignee: stack
> Fix For: 0.98.0, 0.96.0
>
> Attachments: pom.txt
>
>
> Remove explicit versions in subpoms... could mess us up going forward.  Found 
> by new m2eclipse plugin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9478) Make Cell @interfaceAudience.public and evolving.

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762572#comment-13762572
 ] 

Hudson commented on HBASE-9478:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #719 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/719/])
HBASE-9478 Make Cell @InterfaceAudience.Public and Evolving (jmhsieh: rev 
1521305)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/Cell.java


> Make Cell @interfaceAudience.public and evolving.
> -
>
> Key: HBASE-9478
> URL: https://issues.apache.org/jira/browse/HBASE-9478
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Fix For: 0.98.0, 0.96.0
>
> Attachments: hbase-9478.patch
>
>
> From discussion in HBASE-9359, KeyValue was made @InterfaceAudience.Private.  
> Cell was not made @InterfaceAudience.Public.  Fix this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

