[jira] [Commented] (HBASE-10774) Restore TestMultiTableInputFormat

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947590#comment-13947590
 ] 

Hudson commented on HBASE-10774:


SUCCESS: Integrated in HBase-TRUNK #5041 (See 
[https://builds.apache.org/job/HBase-TRUNK/5041/])
HBASE-10774 Restore TestMultiTableInputFormat (Liu Shaohui) (liangxie: rev 
1581661)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java


 Restore TestMultiTableInputFormat
 -

 Key: HBASE-10774
 URL: https://issues.apache.org/jira/browse/HBASE-10774
 Project: HBase
  Issue Type: Test
Affects Versions: 0.99.0
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10774-trunk-v2.diff, HBASE-10774-trunk-v2.patch, 
 HBASE-10774-v1.diff


 TestMultiTableInputFormat was removed in HBASE-9009 because the test made the 
 CI fail. But in HBASE-10692 we need to add a new test, 
 TestSecureMultiTableInputFormat, which depends on it, so we try to restore it 
 in this issue.
 I reran the test several times and it passed.
 {code}
 Running org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat
 Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 314.163 sec
 {code}
 [~stack]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10836) Filters failing to compare negative numbers (int,float,double or long)

2014-03-26 Thread Chaitanya Kumar (JIRA)
Chaitanya Kumar created HBASE-10836:
---

 Summary: Filters failing to compare negative numbers 
(int,float,double or long)
 Key: HBASE-10836
 URL: https://issues.apache.org/jira/browse/HBASE-10836
 Project: HBase
  Issue Type: Brainstorming
  Components: Filters
Affects Versions: 0.94.2
 Environment: Pseudo mode
Reporter: Chaitanya Kumar


I have come across an issue while using filters to get a result.

For eg.
I have created a table and its specifications are as follows :

table name -- test
column family -- cf
row keys -- rowKey1 - rowKey10 (10 different row keys)
column qualifier -- integerData

For different rowkeys, the qualifier 'integerData' contains either positive or 
negative integer values (data loaded randomly).

Now, while I am trying to retrieve the data from the table based on a filter 
condition, it fails to give the desired result.

For eg. say,
My table contains following data :
[-50,-40,-30,-20,-10,10,20,30,40,50]

I want to get only those values which are greater than or equal to 40.
Following is the code for the filter set on scan :


{code}
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("integerData"));

int i = 40;
Filter filter = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
    new BinaryComparator(Bytes.toBytes(i)));

scan.setFilter(filter);
{code}

The result should be : 40 and 50
BUT, the actual result is : -50, -40, -30, -20, -10, 40, 50

I have read a few posts which addressed this issue, and a few people provided 
solutions such as:

1) write a custom comparator, as BinaryComparator is not meant for number 
comparison
OR
2) retrieve all the values as integers and then compare them

BUT, I want to know if there is any other way to achieve this, because this 
seems to be a very basic need (i.e. comparing numbers) and I feel HBase should 
have something straightforward to deal with it. The comparison fails only when 
negative numbers are involved.

I am not able to find the right way to do it.

My HBase version is 0.94.2 and I am running it in pseudo mode.

Can anyone help me on this ? 
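The behaviour above follows from how BinaryComparator works: it compares raw bytes lexicographically as unsigned values, while Bytes.toBytes(int) emits big-endian two's-complement bytes, so negative numbers (sign bit set in the first byte) sort above all positives. Below is a minimal self-contained sketch of the problem and of one common workaround, flipping the sign bit before storing so that unsigned byte order matches signed int order. This is an illustration only, not an HBase API; the helper names are hypothetical.

```java
public class SignedOrderDemo {
    // Big-endian two's-complement encoding, as Bytes.toBytes(int) produces.
    static byte[] toBytes(int v) {
        return new byte[] {
            (byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v
        };
    }

    // Lexicographic unsigned byte comparison, as BinaryComparator performs.
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) {
                return d;
            }
        }
        return 0;
    }

    // Workaround (hypothetical helper): flip the sign bit on write, and on the
    // filter's comparand, so unsigned byte order matches signed integer order.
    static byte[] toSortableBytes(int v) {
        byte[] b = toBytes(v);
        b[0] ^= 0x80;
        return b;
    }
}
```

With plain toBytes(), -50 compares above 40 as unsigned bytes, which is why the negatives pass a GREATER_OR_EQUAL filter; with the sign-bit flip applied to both stored values and the comparand, the byte order matches numeric order.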



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-7759) Fix test failure for HADOOP-9252

2014-03-26 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HBASE-7759.


Resolution: Not A Problem

It seems that this is not a problem.

 Fix test failure for HADOOP-9252
 

 Key: HBASE-7759
 URL: https://issues.apache.org/jira/browse/HBASE-7759
 Project: HBase
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: 7759.files


 HADOOP-9252 slightly changes the format of some StringUtils outputs.  It may 
 cause test failures.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10837) Filters failing to compare negative numbers (int,float,double or long)

2014-03-26 Thread Chandraprakash Sahu (JIRA)
Chandraprakash Sahu created HBASE-10837:
---

 Summary: Filters failing to compare negative numbers 
(int,float,double or long) 
 Key: HBASE-10837
 URL: https://issues.apache.org/jira/browse/HBASE-10837
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.2
 Environment: Pseudo Mode
Reporter: Chandraprakash Sahu
Priority: Blocker


 I have come across an issue while using filters to get a result.

For eg.
I have created a table and its specifications are as follows :

table name -- test
column family -- cf
row keys -- rowKey1 - rowKey10 (10 different row keys)
column qualifier -- integerData

For different rowkeys, the qualifier 'integerData' contains either positive or 
negative integer values (data loaded randomly).

Now, while I am trying to retrieve the data from the table based on a filter 
condition, it fails to give the desired result.

For eg. say,
My table contains following data :
[-50,-40,-30,-20,-10,10,20,30,40,50]

I want to get only those values which are greater than or equal to 40.
Following is the code for the filter set on scan :


{code}
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("integerData"));

int i = 40;
Filter filter = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
    new BinaryComparator(Bytes.toBytes(i)));

scan.setFilter(filter);
{code}

The result should be : 40 and 50
BUT, the actual result is : -50, -40, -30, -20, -10, 40, 50

I have read a few posts which addressed this issue, and a few people provided 
solutions such as:

1) write a custom comparator, as BinaryComparator is not meant for number 
comparison
OR
2) retrieve all the values as integers and then compare them

BUT, I want to know if there is any other way to achieve this, because this 
seems to be a very basic need (i.e. comparing numbers) and I feel HBase should 
have something straightforward to deal with it. The comparison fails only when 
negative numbers are involved.

I am not able to find the right way to do it.

My HBase version is 0.94.2 and I am running it in pseudo mode.

Can anyone help me on this ? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10692) The Multi TableMap job don't support the security HBase cluster

2014-03-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947685#comment-13947685
 ] 

Liu Shaohui commented on HBASE-10692:
-

[~apurtell]
Thanks for your advice. 
But I found some conflicts in the code. 
RpcServer line #1455
{code}
  if (isSecurityEnabled && authMethod == AuthMethod.SIMPLE) {
    AccessControlException ae = new AccessControlException(
        "Authentication is required");
    setupResponse(authFailedResponse, authFailedCall, ae, ae.getMessage());
    responder.doRespond(authFailedCall);
    throw ae;
  }
{code}
It seems that authMethod cannot be SIMPLE when security is enabled.


 The Multi TableMap job don't support the security HBase cluster
 ---

 Key: HBASE-10692
 URL: https://issues.apache.org/jira/browse/HBASE-10692
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-10692-0.94-v1.diff, HBASE-10692-trunk-v1.diff, 
 HBASE-10692-trunk-v2.diff


 HBASE-3996 adds support for multiple tables and scanners as input to the 
 mapper in map/reduce jobs, but it doesn't support secure HBase clusters.
 [~erank] [~bbaugher]
 Ps: HBASE-3996 only supports multiple tables from the same HBase cluster. 
 Should we support multiple tables from different clusters?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10692) The Multi TableMap job don't support the security HBase cluster

2014-03-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947689#comment-13947689
 ] 

Liu Shaohui commented on HBASE-10692:
-

[~apurtell]
Please point out if I am wrong; I am not very familiar with the security 
component.

 The Multi TableMap job don't support the security HBase cluster
 ---

 Key: HBASE-10692
 URL: https://issues.apache.org/jira/browse/HBASE-10692
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-10692-0.94-v1.diff, HBASE-10692-trunk-v1.diff, 
 HBASE-10692-trunk-v2.diff


 HBASE-3996 adds support for multiple tables and scanners as input to the 
 mapper in map/reduce jobs, but it doesn't support secure HBase clusters.
 [~erank] [~bbaugher]
 Ps: HBASE-3996 only supports multiple tables from the same HBase cluster. 
 Should we support multiple tables from different clusters?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947760#comment-13947760
 ] 

Anoop Sam John commented on HBASE-10771:


Also, a getBytes() call on an offheap impl should throw an Exception rather 
than copying the bytes onheap and returning them. So having an API like 
hasBytes() would be useful, IMHO.
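As an illustration of the kind of primitive accessors under discussion, here is a minimal sketch of putInt/getInt over a byte[]-backed range. The class and method names here are hypothetical; the actual API in the attached patches may differ.

```java
// Minimal byte[]-backed range with big-endian int accessors. A sketch only,
// not the real org.apache.hadoop.hbase.util.ByteRange API.
public class SimpleRange {
    private final byte[] bytes;
    private final int offset;

    public SimpleRange(byte[] bytes, int offset) {
        this.bytes = bytes;
        this.offset = offset;
    }

    // Writes val big-endian at index within the range; returns the position
    // just past the written bytes, so writes can be chained.
    public int putInt(int index, int val) {
        for (int i = 3; i >= 0; i--) {
            bytes[offset + index + i] = (byte) val;
            val >>>= 8;
        }
        return index + 4;
    }

    // Reads a big-endian int starting at index within the range.
    public int getInt(int index) {
        int v = 0;
        for (int i = 0; i < 4; i++) {
            v = (v << 8) | (bytes[offset + index + i] & 0xFF);
        }
        return v;
    }
}
```

Because such a range carries its own offset, callers writing CellBlocks can chain puts without tracking absolute positions in the backing array.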

 Primitive type put/get APIs in ByteRange 
 -

 Key: HBASE-10771
 URL: https://issues.apache.org/jira/browse/HBASE-10771
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10771.patch, HBASE-10771_V2.patch


 While doing HBASE-10713 I came across the need to write (and also read) 
 int/long values from a ByteRange.  CellBlocks are backed by ByteRange, so we 
 can add such APIs.
 Also, as per HBASE-10750 we return a ByteRange from MSLAB, and the discussion 
 under HBASE-10191 suggests we can have BR-backed HFileBlocks etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947762#comment-13947762
 ] 

ramkrishna.s.vasudevan commented on HBASE-10831:


I tried this way
{code}
./hbase --config /usr/lib/hbase/conf/ 
org.apache.hadoop.hbase.IntegrationTestsDriver -r 
.*IntegrationTestIngestWithACL.*
{code}
This takes the hardcoded values in IntegrationTestIngestWithACL for the users 
and the superuser.
{code}
./hbase --config /usr/lib/hbase/conf/ 
org.apache.hadoop.hbase.IntegrationTestIngestWithACL -superuser owner -userlist 
user1,user2
{code}
Running this way takes params for the users and the superuser. Let me check 
the mvn verify way of running.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.2


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
  FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec   FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10771) Primitive type put/get APIs in ByteRange

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947764#comment-13947764
 ] 

ramkrishna.s.vasudevan commented on HBASE-10771:


Yes, we need it in my opinion. In the patch that I am using, I created 
isDirect(), which says whether it is onheap or offheap. hasArray() is also 
fine. Decision making in some of the flows may be based on this.

 Primitive type put/get APIs in ByteRange 
 -

 Key: HBASE-10771
 URL: https://issues.apache.org/jira/browse/HBASE-10771
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10771.patch, HBASE-10771_V2.patch


 While doing HBASE-10713 I came across the need to write (and also read) 
 int/long values from a ByteRange.  CellBlocks are backed by ByteRange, so we 
 can add such APIs.
 Also, as per HBASE-10750 we return a ByteRange from MSLAB, and the discussion 
 under HBASE-10191 suggests we can have BR-backed HFileBlocks etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10835) DBE encode path improvements

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947771#comment-13947771
 ] 

ramkrishna.s.vasudevan commented on HBASE-10835:


As said in HBASE-10801, I was planning to change the interface of the DBEs and 
address the usage of Cells. Are you going to change the code without Cells? 
The intention of HBASE-10801 was mainly to change the interface and then do 
any improvements on top of that (and while doing that, I thought the related 
code would also change, maybe ending up as what this JIRA suggests). This also 
includes the read path.

 DBE encode path improvements
 

 Key: HBASE-10835
 URL: https://issues.apache.org/jira/browse/HBASE-10835
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0


 Currently we first write KVs (Cells) into a buffer, which is then passed to 
 the DBE encoder. The encoder reads the KVs one by one from the buffer, 
 encodes them, and creates a new buffer.
 There is no need for this model now. Previously we had the option of no 
 encoding on disk and encoding only in the cache; at that time the buffer read 
 from an HFile block was passed in and encoded.
 So encoding cell by cell can be done now. Making this change will require a 
 no-op DBE impl which just writes a cell as it is, without any encoding.
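A pass-through encoder of the kind described, writing each cell's bytes straight through with no transformation, might look roughly like the sketch below. This is illustrative only: real encoders implement the DataBlockEncoder interface, which this simplifies away, and the length-prefixed layout here is an assumption.

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of a "NoOp" cell writer: key and value bytes are written as-is,
// length-prefixed, with no encoding applied. (Illustrative; the real
// DataBlockEncoder interface and on-disk layout differ.)
public class NoOpCellWriter {
    // Returns the total number of bytes written for this cell.
    public int write(byte[] key, byte[] value, DataOutputStream out)
            throws IOException {
        out.writeInt(key.length);
        out.writeInt(value.length);
        out.write(key);
        out.write(value);
        return 2 * Integer.BYTES + key.length + value.length;
    }
}
```

The point of such an impl is that the cell-by-cell write path can treat "no encoding" uniformly with the real encoders, instead of special-casing the unencoded path.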



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HBASE-10827) Making HBase use multiple ethernet cards will improve the performance

2014-03-26 Thread zhaojianbo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaojianbo reassigned HBASE-10827:
--

Assignee: zhaojianbo

 Making HBase use multiple ethernet cards will improve the performance
 -

 Key: HBASE-10827
 URL: https://issues.apache.org/jira/browse/HBASE-10827
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.99.0
Reporter: zhaojianbo
Assignee: zhaojianbo
 Attachments: HBASE-10827-0.98-branch.patch


 In our online cluster there are usually multiple ethernet cards in one 
 machine: one for the outer network, one for the inner network. But the 
 current version of HBase cannot use all of them, which wastes the network 
 bandwidth of one ethernet card. If we make HBase use multiple ethernet cards 
 concurrently, the performance of HBase will be improved.
 So I did the work, and tested a simple scenario:
 8 clients scan the same region data from different machines with two 
 ethernet cards (the regionserver machine also has two ethernet cards).
 The environment is:
 * I started an HBase cluster with a master, a regionserver, and a zookeeper 
 on one machine.
 * An HDFS cluster with a namenode, a datanode, and a secondary namenode was 
 also started on the same machine.
 * 8 clients run on different machines.
 * all data local
 * 22GB data size
 I measured the performance before and after the optimization.
 The results are:
 ||client||time before optimization||time after optimization||
 | 8 | 1665.07s | 1242.45s |
 The patch is uploaded. What I did is the following:
 # create a new RPC, getAllServerAddress, which obtains all the addresses of a 
 regionserver
 # the client calls the RPC to obtain the addresses, chooses one of them 
 randomly, validates it, and uses it as the regionLocation address
 # add a cache, serverAddressMap, to avoid redundant RPCs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947793#comment-13947793
 ] 

ramkrishna.s.vasudevan commented on HBASE-10772:


I would like to bring up a discussion here; we can open a relevant JIRA for it.
HBASE-10771 adds some getXXX and putXXX type APIs; put() will add to a PBR 
itself.
While working on BRs inside the BlockCache: the current offheap BlockCache 
tries to slice the given DBB and uses a DBB buffer pool. So if we need such 
use cases with BRs instead of DBBs, and those BRs should be offheap, how do we 
do that? Should we add some APIs like DBB's slice() and duplicate() to BRs and 
try reusing the allocated BRs? Currently we don't have any offheap-based BR. 
So if we try to create a BR that is backed by a DBB, then to ensure the 
current behaviour of BR pools etc. still works, we may need such things in BR 
as well.
What do you think?

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-10838:
--

 Summary: AcessController covering permission check checking only 
one Cell's permission
 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2


{code}
get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
{code}
Setting this returns only one cell per family, irrespective of the number of 
versions/qualifiers in the family. Instead we should have used the limit 
option on the next() call.





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-10839:
---

 Summary: NullPointerException in construction of RegionServer in 
Security Cluster
 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Priority: Critical


The initialization of the secure RPC server depends on the regionserver's 
servername and ZooKeeper watcher. But after HBASE-10569, these are null when 
the secure RPC services are created.

[~jxiang]

{code}
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.init(RSRpcServices.java:706)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.init(MasterRpcServices.java:190)
at 
org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:431)
at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:234)
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10838:
---

Attachment: HBASE-10838.patch

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell per family, irrespective of the number of 
 versions/qualifiers in the family. Instead we should have used the limit 
 option on the next() call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10838:
---

Component/s: security

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell per family, irrespective of the number of 
 versions/qualifiers in the family. Instead we should have used the limit 
 option on the next() call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10838:
---

Status: Patch Available  (was: Open)

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell per family, irrespective of the number of 
 versions/qualifiers in the family. Instead we should have used the limit 
 option on the next() call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10118) Major compact keeps deletes with future timestamps

2014-03-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947828#comment-13947828
 ] 

Liu Shaohui commented on HBASE-10118:
-

After a look at the failed build: the javadoc and findbugs warnings and the 
failed test are unrelated to this patch.

 Major compact keeps deletes with future timestamps
 --

 Key: HBASE-10118
 URL: https://issues.apache.org/jira/browse/HBASE-10118
 Project: HBase
  Issue Type: Bug
  Components: Compaction, Deletes, regionserver
Reporter: Max Lapan
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10118-trunk-v1.diff, HBASE-10118-trunk-v2.diff


 Hello!
 During migration from HBase 0.90.6 to 0.94.6 we found changed behaviour in 
 how major compaction handles delete markers with timestamps in the future. 
 Before HBASE-4721, major compaction purged deletes regardless of their 
 timestamp. Newer versions keep them in the HFile until their timestamp is 
 reached.
 I guess this happened due to the new check in ScanQueryMatcher: 
 {{(EnvironmentEdgeManager.currentTimeMillis() - timestamp) <= 
 timeToPurgeDeletes}}.
 This can be worked around by specifying a large negative value in the 
 {{hbase.hstore.time.to.purge.deletes}} option, but, unfortunately, negative 
 values are pulled up to zero by Math.max in HStore.java.
 Maybe we are trying to do something weird by specifying a delete timestamp in 
 the future, but HBASE-4721 definitely breaks old behaviour we rely on.
 Steps to reproduce this:
 {code}
 put 'test', 'delmeRow', 'delme:something', 'hello'
 flush 'test'
 delete 'test', 'delmeRow', 'delme:something', 1394161431061
 flush 'test'
 major_compact 'test'
 {code}
 Before major_compact we have two hfiles with the following:
 {code}
 first:
 K: delmeRow/delme:something/1384161431061/Put/vlen=5/ts=0
 second:
 K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
 {code}
 After major compact we get the following:
 {code}
 K: delmeRow/delme:something/1394161431061/DeleteColumn/vlen=0/ts=0
 {code}
 In our installation, we resolved this by removing the Math.max and setting 
 hbase.hstore.time.to.purge.deletes to Integer.MIN_VALUE, which purges the 
 delete markers, and it looks like a solution. But maybe there is a better 
 approach.
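The interaction described above can be sketched in a few lines. This is a simplification of the logic in HStore and ScanQueryMatcher, for illustration only; method names here are hypothetical.

```java
public class PurgeWindowDemo {
    // HStore clamps the configured hbase.hstore.time.to.purge.deletes to be
    // non-negative, so a negative "always purge" setting silently becomes 0.
    static long effectiveTimeToPurgeDeletes(long configured) {
        return Math.max(0L, configured);
    }

    // Simplified form of the check: a delete marker is kept while
    // (now - ts) <= timeToPurgeDeletes. With a future-dated marker,
    // now - ts is negative, so the marker is always kept.
    static boolean deleteMarkerKept(long now, long ts, long timeToPurgeDeletes) {
        return (now - ts) <= timeToPurgeDeletes;
    }
}
```

This shows why the clamp matters: with the config forced to 0, a future-dated delete marker survives every major compaction until wall-clock time catches up with its timestamp.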



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7781) Update security unit tests to use a KDC if available

2014-03-26 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947831#comment-13947831
 ] 

Liu Shaohui commented on HBASE-7781:


Any progress on this issue? 

 Update security unit tests to use a KDC if available
 

 Key: HBASE-7781
 URL: https://issues.apache.org/jira/browse/HBASE-7781
 Project: HBase
  Issue Type: Test
  Components: security, test
Reporter: Gary Helmling
Assignee: ramkrishna.s.vasudevan
Priority: Blocker

 We currently have large holes in the test coverage of HBase with security 
 enabled.  Two recent examples of bugs which really should have been caught 
 with testing are HBASE-7771 and HBASE-7772.  The long standing problem with 
 testing with security enabled has been the requirement for supporting 
 kerberos infrastructure.
 We need to close this gap and provide some automated testing with security 
 enabled, if necessary standing up and provisioning a temporary KDC as an 
 option for running integration tests, see HADOOP-8078 and HADOOP-9004 where a 
 similar approach was taken.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947882#comment-13947882
 ] 

Hadoop QA commented on HBASE-10838:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636903/HBASE-10838.patch
  against trunk revision .
  ATTACHMENT ID: 12636903

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.access.TestAccessController

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestTableMapReduceBase.testMultiRegionTable(TestTableMapReduceBase.java:96)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9096//console

This message is automatically generated.

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.
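The effect can be reproduced outside HBase: a covering permission check must authorize every stored cell, but a fetch limited to one result per family hands the check only the first cell. A minimal Java sketch of that mismatch (all class and method names are hypothetical, not HBase API):

```java
import java.util.Arrays;
import java.util.List;

public class CoveringCheckSketch {
    // Simulates fetching existing cells with a per-family result limit,
    // as get.setMaxResultsPerColumnFamily(1) would.
    static List<String> fetchCells(List<String> stored, int maxPerFamily) {
        return stored.subList(0, Math.min(maxPerFamily, stored.size()));
    }

    // A covering check must authorize EVERY existing cell; limiting the
    // fetch to one cell silently skips the rest.
    static int cellsChecked(List<String> stored, int maxPerFamily) {
        return fetchCells(stored, maxPerFamily).size();
    }

    public static void main(String[] args) {
        List<String> cells = Arrays.asList("v3", "v2", "v1"); // 3 versions of one qualifier
        System.out.println(cellsChecked(cells, 1));                 // 1: only the first cell is seen
        System.out.println(cellsChecked(cells, Integer.MAX_VALUE)); // 3: every cell is seen
    }
}
```

This is the motivation for bounding the fetch via the limit option on next() instead, so no cell is hidden from the check.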



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10838:
---

Attachment: HBASE-10838_V2.patch

Oops! I forgot to add the 2 lines in setUp() in the V1 patch.

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.





[jira] [Commented] (HBASE-10827) Making HBase use multiple ethernet cards will improve the performance

2014-03-26 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13947945#comment-13947945
 ] 

Jonathan Hsieh commented on HBASE-10827:


Yes.  Why not use that mechanism instead?

 Making HBase use multiple ethernet cards will improve the performance
 -

 Key: HBASE-10827
 URL: https://issues.apache.org/jira/browse/HBASE-10827
 Project: HBase
  Issue Type: New Feature
Affects Versions: 0.99.0
Reporter: zhaojianbo
Assignee: zhaojianbo
 Attachments: HBASE-10827-0.98-branch.patch


 In our online cluster there are usually multiple ethernet cards in one 
 machine: one for the outer network, one for the inner network. But the current 
 version of HBase cannot use all of them, which wastes the bandwidth of one 
 ethernet card. If we make HBase use multiple ethernet cards concurrently, the 
 performance of HBase will be improved.
 So I did the work and tested a simple scenario:
 8 clients scan the same region data from different machines, each with two 
 ethernet cards (the machine of the regionserver also has two ethernet cards).
 The environment is:
 * I start an HBase cluster with a master, a regionserver, and a zookeeper on 
 one machine.
 * An HDFS cluster with a namenode, a datanode, and a secondary namenode is 
 also started on the same machine.
 * The 8 clients run on different machines.
 * all data is local
 * 22GB data size
 I measured the performance before and after the optimization.
 The results are:
 ||clients||time before optimization||time after optimization||
 | 8 | 1665.07s | 1242.45s |
 The patch is uploaded. What I did is the following:
 # create a new RPC, getAllServerAddress, which obtains all the addresses of a 
 regionserver
 # the client calls the RPC to obtain the addresses, chooses one of them 
 randomly, validates it, and uses it as the regionLocation address
 # add a cache, serverAddressMap, to avoid redundant RPCs
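The three steps can be sketched as follows (getAllServerAddress and serverAddressMap are the names from the description; everything else is illustrative, not the attached patch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

public class MultiNicLocator {
    // Step 3: cache server name -> all of its addresses, avoiding redundant RPCs.
    private final Map<String, List<String>> serverAddressMap = new ConcurrentHashMap<>();
    private final Random random = new Random();

    // Step 1: stand-in for the getAllServerAddress RPC; a real regionserver
    // would report the address of each of its ethernet cards.
    List<String> getAllServerAddress(String server) {
        return Arrays.asList(server + ":inner", server + ":outer");
    }

    // Step 2: consult the cache, then pick one address at random so traffic
    // spreads across the cards.
    String locate(String server) {
        List<String> addrs = serverAddressMap.computeIfAbsent(server, this::getAllServerAddress);
        return addrs.get(random.nextInt(addrs.size()));
    }

    public static void main(String[] args) {
        MultiNicLocator locator = new MultiNicLocator();
        System.out.println(locator.locate("rs1")); // rs1:inner or rs1:outer, chosen at random
    }
}
```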





[jira] [Updated] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10840:
---

Attachment: HBASE-10840.patch

Just reverting the 2-line change in 
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction

The private variable conf is not used; just keeping it as it was before.
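For context, the FindBugs pattern in question fires on a field that is written but never read; a minimal reproduction (hypothetical class, unrelated to the balancer code):

```java
public class UnreadFieldExample {
    // FindBugs flags a field like this -- written in the constructor but never
    // read anywhere -- with the URF_UNREAD_* family of bug patterns.
    protected String conf;

    UnreadFieldExample(String conf) {
        this.conf = conf; // assigned here...
    }

    int work() {
        return 42; // ...but the field's value is never consulted
    }

    public static void main(String[] args) {
        System.out.println(new UnreadFieldExample("x").work()); // 42
    }
}
```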

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch








[jira] [Updated] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10840:
---

Description: Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch


 Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf





[jira] [Updated] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10840:
---

Status: Patch Available  (was: Open)

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch


 Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf





[jira] [Created] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-10840:
--

 Summary: Fix findbug warn induced by HBASE-10569
 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor








[jira] [Commented] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13947966#comment-13947966
 ] 

Ted Yu commented on HBASE-10840:


+1

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch


 Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf





[jira] [Assigned] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-10839:
-

Assignee: Jimmy Xiang

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical

 The initialization of the secure rpc server depends on the regionserver's 
 servername and zooKeeper watcher. But after HBASE-10569, they are null when 
 the secure rpc services are created.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.init(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.init(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:234)
 {code}
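The failure mode reduces to consuming a field before startup has assigned it; a self-contained Java sketch (names hypothetical, not the HBase classes in the trace):

```java
public class InitOrderSketch {
    // Stand-in for RpcServer.createSecretManager(), which dereferences the
    // server name; passing null reproduces the NPE in the stack trace.
    static String describe(String serverName) {
        return "secret-manager-for-" + serverName.toLowerCase();
    }

    public static void main(String[] args) {
        String serverName = null; // not yet assigned when rpc services are built
        try {
            describe(serverName);
            System.out.println("constructed");
        } catch (NullPointerException e) {
            System.out.println("NPE: server name not yet initialized");
        }
        serverName = "rs1,60020,1"; // assigned only later in startup
        System.out.println(describe(serverName)); // works once initialized
    }
}
```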





[jira] [Commented] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13947994#comment-13947994
 ] 

Jimmy Xiang commented on HBASE-10839:
-

[~lshmouse], I haven't tested the secure deployment.  I will fix this and make 
sure HBASE-10569 is good with secure deployment.

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical

 The initialization of the secure rpc server depends on the regionserver's 
 servername and zooKeeper watcher. But after HBASE-10569, they are null when 
 the secure rpc services are created.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.init(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.init(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:234)
 {code}





[jira] [Commented] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948016#comment-13948016
 ] 

Hadoop QA commented on HBASE-10838:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636922/HBASE-10838_V2.patch
  against trunk revision .
  ATTACHMENT ID: 12636922

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.master.TestRegionPlacement

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9097//console

This message is automatically generated.

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.





[jira] [Commented] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948029#comment-13948029
 ] 

Hadoop QA commented on HBASE-10840:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636929/HBASE-10840.patch
  against trunk revision .
  ATTACHMENT ID: 12636929

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testHBase3583(TestRegionObserverInterface.java:244)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9098//console

This message is automatically generated.

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch


 Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf





[jira] [Updated] (HBASE-10825) Add copy-from option to ExportSnapshot

2014-03-26 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-10825:


   Resolution: Fixed
Fix Version/s: 0.98.2
   0.99.0
   0.96.2
   Status: Resolved  (was: Patch Available)

 Add copy-from option to ExportSnapshot
 --

 Key: HBASE-10825
 URL: https://issues.apache.org/jira/browse/HBASE-10825
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.96.2, 0.99.0, 0.98.2

 Attachments: HBASE-10825-v0.patch


 To simplify the import of a snapshot, add a -copy-from option.
 That basically does what the manual -Dhbase.rootdir=hdfs://srv/hbase/ 
 -Dfs.defaultFS=hdfs://srv/ settings do.





[jira] [Commented] (HBASE-10838) AcessController covering permission check checking only one Cell's permission

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948065#comment-13948065
 ] 

Andrew Purtell commented on HBASE-10838:


bq.  Instead we should have used the limit option to next() call

+1

Will commit this shortly and spin RC2

 AcessController covering permission check checking only one Cell's permission
 -

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.





[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948069#comment-13948069
 ] 

Andrew Purtell commented on HBASE-10772:


bq. Currently we don't have any offheap based BR. So if we try to create a BR 
that is backed by a DBB, and to ensure that the current behaviour of BR pools 
etc. still works, we may need such things in BR also

I suggest trying a BR backed by a DBB and adding only the minimum of what is 
needed. 

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.





[jira] [Updated] (HBASE-10838) Insufficient AccessController covering permission check

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10838:
---

Summary: Insufficient AccessController covering permission check  (was: 
AccessController covering permission check checking only one Cell's permission)

 Insufficient AccessController covering permission check
 ---

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.





[jira] [Updated] (HBASE-10838) AccessController covering permission check checking only one Cell's permission

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10838:
---

Summary: AccessController covering permission check checking only one 
Cell's permission  (was: AcessController covering permission check checking 
only one Cell's permission)

 AccessController covering permission check checking only one Cell's permission
 --

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.





[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948098#comment-13948098
 ] 

Matt Corgan commented on HBASE-10772:
-

{quote}I suggest trying a BR backed by a DBB and adding only the minimum of 
what is needed.{quote}I was going to say the same.

Maybe the putShort/putInt/putLong methods should pass directly through to the 
ones on ByteBuffer, so they should use the same byte format?
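Matching ByteBuffer's byte format means writing big-endian, ByteBuffer's default order. A sketch of the invariant a pass-through putInt would preserve (the standalone method is illustrative; only ByteBuffer is real API):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class PutIntFormat {
    // Big-endian putInt: the same byte layout ByteBuffer.putInt produces
    // with its default byte order.
    static void putInt(byte[] dst, int offset, int v) {
        dst[offset]     = (byte) (v >>> 24);
        dst[offset + 1] = (byte) (v >>> 16);
        dst[offset + 2] = (byte) (v >>> 8);
        dst[offset + 3] = (byte) v;
    }

    public static void main(String[] args) {
        byte[] manual = new byte[4];
        putInt(manual, 0, 0x12345678);
        byte[] viaBuffer = ByteBuffer.allocate(4).putInt(0x12345678).array();
        System.out.println(Arrays.equals(manual, viaBuffer)); // true: identical byte format
    }
}
```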

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.





[jira] [Updated] (HBASE-10818) Add integration test for bulkload with replicas

2014-03-26 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10818:
-

Attachment: HBASE-10818.02.patch

Attaching new patch that addresses all but the final review comment.

bq. The above maybe a typo: new InterruptedException - new 
InterruptedIOException

These are removed; instead, the methods throw InterruptedException directly.

bq. you can chain the above calls since Scan object is returned.

And yet {{setCacheBlocks}} and {{setBatch}} don't. I should file a separate 
ticket to fix that API bug.

bq. Was a 'break' missing from the else block ?

My understanding is the loop condition will break execution immediately 
thereafter. I'll add a break just the same; it should make the intent more 
obvious to a reader.

bq. You should set the sleepTime in SlowMeCopro to something like 10 seconds. 
It defaults to 0.

This is tricky. There isn't a way to modify the state of that coproc on a real 
cluster. Maybe an endpoint could be added?

 Add integration test for bulkload with replicas
 ---

 Key: HBASE-10818
 URL: https://issues.apache.org/jira/browse/HBASE-10818
 Project: HBase
  Issue Type: Sub-task
Affects Versions: hbase-10070
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Attachments: HBASE-10818.00.patch, HBASE-10818.01.patch, 
 HBASE-10818.02.patch, IntegrationTestBulkLoad_replicas.log


 Should verify bulkload is not affected by region replicas.





[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948099#comment-13948099
 ] 

ramkrishna.s.vasudevan commented on HBASE-10772:


Yes, that is what I have tried, and things work up to the HFileBlock. From 
the HFileBlock layer onwards it will be a big change. Will continue doing that.

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.





[jira] [Commented] (HBASE-10838) Insufficient AccessController covering permission check

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948105#comment-13948105
 ] 

ramkrishna.s.vasudevan commented on HBASE-10838:


Actually delete tries to check all versions, but this still fails; I think it 
is because the store limit is applied after every call to 
ScanQueryMatcher.match(). Nice find. +1.

 Insufficient AccessController covering permission check
 ---

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead we should have used the limit option 
 to the next() call.





[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948102#comment-13948102
 ] 

Andrew Purtell commented on HBASE-10772:


bq. Maybe the putShort/putInt/putLong methods should pass directly through to 
the ones on ByteBuffer, so they should use the same byte format?

+1 on same byte format

Let's make sure they are always inlineable. 

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.





[jira] [Created] (HBASE-10841) Scan setters should consistently return this

2014-03-26 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-10841:


 Summary: Scan setters should consistently return this
 Key: HBASE-10841
 URL: https://issues.apache.org/jira/browse/HBASE-10841
 Project: HBase
  Issue Type: Improvement
  Components: Client, Usability
Affects Versions: 0.99.0
Reporter: Nick Dimiduk
Priority: Minor


While addressing review comments on HBASE-10818, I noticed that our {{Scan}} 
class is inconsistent in its setter methods. Some of them return {{this}}, 
others don't. They should be consistent. I suggest making them all return 
{{this}}, to support chained invocation.
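The chained style being proposed looks like this in miniature (a generic sketch, not the actual Scan class):

```java
public class FluentScan {
    private boolean cacheBlocks = true;
    private int batch = -1;

    // Each setter returns this so invocations can be chained.
    FluentScan setCacheBlocks(boolean cacheBlocks) { this.cacheBlocks = cacheBlocks; return this; }
    FluentScan setBatch(int batch)                 { this.batch = batch;             return this; }

    boolean getCacheBlocks() { return cacheBlocks; }
    int getBatch()           { return batch; }

    public static void main(String[] args) {
        // One expression configures the whole object.
        FluentScan scan = new FluentScan().setCacheBlocks(false).setBatch(100);
        System.out.println(scan.getCacheBlocks() + " " + scan.getBatch()); // false 100
    }
}
```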





[jira] [Updated] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10840:
---

Attachment: HBASE-10840_V2.patch

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch, HBASE-10840_V2.patch


 Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf





[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948113#comment-13948113
 ] 

ramkrishna.s.vasudevan commented on HBASE-10772:


bq. Maybe the putShort/putInt/putLong methods
Yes, +1.
But I think we may still need something similar to slice() and 
duplicate(). As said, I am trying to reduce the things needed in BR. I think 
Anoop also needed a few things like this in his work, so I will 
consolidate and come up with a patch by tomorrow.

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948114#comment-13948114
 ] 

stack commented on HBASE-10018:
---

+1 on 1 for your stated reasons.

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
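The reverse-scan option can be illustrated with a toy model (illustrative names, not the actual HBase meta-lookup code): locating the region that contains a row is a floor lookup over the sorted region start keys, which a single reverse scan on meta starting at the row answers in one call, instead of one exact-row call plus a prefetch call. A TreeMap stands in for the sorted meta table here:

```java
import java.util.TreeMap;

public class MetaLookupSketch {
    // Locating the region containing `row` is a floor lookup on start keys:
    // the entry with the greatest start key <= row. A reverse scan on meta
    // starting at `row` returns exactly this entry as its first result.
    public static String locate(TreeMap<String, String> meta, String row) {
        return meta.floorEntry(row).getValue();
    }

    public static void main(String[] args) {
        TreeMap<String, String> meta = new TreeMap<>();
        meta.put("", "region-1");     // region starting at the empty key
        meta.put("ggg", "region-2");
        meta.put("ppp", "region-3");
        System.out.println(locate(meta, "mmm")); // prints region-2
    }
}
```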
  





[jira] [Commented] (HBASE-10841) Scan setters should consistently return this

2014-03-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948117#comment-13948117
 ] 

Anoop Sam John commented on HBASE-10841:


We reverted a similar fix some time back due to BC (binary compatibility).
So you suggest we can correct it in trunk, breaking BC?

 Scan setters should consistently return this
 

 Key: HBASE-10841
 URL: https://issues.apache.org/jira/browse/HBASE-10841
 Project: HBase
  Issue Type: Improvement
  Components: Client, Usability
Affects Versions: 0.99.0
Reporter: Nick Dimiduk
Priority: Minor

 While addressing review comments on HBASE-10818, I noticed that our {{Scan}} 
 class is inconsistent in its setter methods. Some of them return {{this}}, 
 others don't. They should be consistent. I suggest making them all return 
 {{this}}, to support chained invocation.
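The chained invocation being suggested can be sketched with a toy stand-in class (hypothetical names, not the real HBase {{Scan}} API): when every setter returns {{this}}, configuration collapses into one expression instead of several statements.

```java
// Toy sketch of the fluent-setter pattern; not the actual HBase Scan class.
public class FluentScanSketch {
    static class Scan {
        private int caching;
        private int maxVersions = 1;
        private boolean cacheBlocks = true;

        // Each setter returns `this`, enabling chained invocation.
        Scan setCaching(int caching) { this.caching = caching; return this; }
        Scan setMaxVersions(int maxVersions) { this.maxVersions = maxVersions; return this; }
        Scan setCacheBlocks(boolean cacheBlocks) { this.cacheBlocks = cacheBlocks; return this; }

        @Override
        public String toString() {
            return "caching=" + caching + " maxVersions=" + maxVersions
                + " cacheBlocks=" + cacheBlocks;
        }
    }

    public static void main(String[] args) {
        // One expression instead of three separate statements.
        Scan scan = new Scan().setCaching(1000).setMaxVersions(3).setCacheBlocks(false);
        System.out.println(scan);
    }
}
```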





[jira] [Commented] (HBASE-10018) Change the location prefetch

2014-03-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948123#comment-13948123
 ] 

stack commented on HBASE-10018:
---

Patch looks great (on skim).  You don't deprecate getClosestRowOrBefore.  Is 
it needed?  If not, deprecate it on commit.

 Change the location prefetch
 

 Key: HBASE-10018
 URL: https://issues.apache.org/jira/browse/HBASE-10018
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.99.0

 Attachments: 10018.v1.patch, 10018.v2.patch, 10018.v4.patch, 
 10018v3.patch


 Issues with prefetching are:
 - we do two calls to meta: one for the exact row, one for the prefetch 
 - it's done in a lock
 - we take the next 10 regions. Why 10, why the 10 next?
 - is it useful if the table has 100K regions?
 Options are:
 - just remove it
 - replace it with a reverse scan: this would save a call
  





[jira] [Updated] (HBASE-10838) Insufficient AccessController covering permission check

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10838:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and 0.98. 

 Insufficient AccessController covering permission check
 ---

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family.  Instead we should have used the limit 
 option on the next() call.





[jira] [Commented] (HBASE-10841) Scan setters should consistently return this

2014-03-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948154#comment-13948154
 ] 

Nick Dimiduk commented on HBASE-10841:
--

That's unfortunate. We are free to break binary compatibility across major and 
minor releases. Patch releases must maintain binary compatibility. I would 
target 0.99.0 for this patch.

 Scan setters should consistently return this
 

 Key: HBASE-10841
 URL: https://issues.apache.org/jira/browse/HBASE-10841
 Project: HBase
  Issue Type: Improvement
  Components: Client, Usability
Affects Versions: 0.99.0
Reporter: Nick Dimiduk
Priority: Minor

 While addressing review comments on HBASE-10818, I noticed that our {{Scan}} 
 class is inconsistent in its setter methods. Some of them return {{this}}, 
 others don't. They should be consistent. I suggest making them all return 
 {{this}}, to support chained invocation.





[jira] [Updated] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10531:
---

Status: Patch Available  (was: Open)

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
 HBASE-10531_12.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, 
 HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, 
 HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qual, type and ts, and 
 the caller forms this by using kv.getBuffer, which is actually deprecated.  
 So see how this can be achieved considering kv.getBuffer is removed.





[jira] [Commented] (HBASE-10676) Removing ThreadLocal of PrefetchedHeader in HFileBlock.FSReaderV2 make higher performance of scan

2014-03-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948164#comment-13948164
 ] 

stack commented on HBASE-10676:
---

Patch lgtm.  [~lhofhansl] What you think boss?

 Removing ThreadLocal of PrefetchedHeader in HFileBlock.FSReaderV2 make higher 
 performance of scan
 

 Key: HBASE-10676
 URL: https://issues.apache.org/jira/browse/HBASE-10676
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.99.0
Reporter: zhaojianbo
Assignee: zhaojianbo
 Attachments: HBASE-10676-0.98-branch-AtomicReferenceV2.patch, 
 HBASE-10676-0.98-branchV2.patch


 PrefetchedHeader variable in HFileBlock.FSReaderV2 is used for avoiding 
 backward seek operation as the comment said:
 {quote}
 we will not incur a backward seek operation if we have already read this 
 block's header as part of the previous read's look-ahead. And we also want to 
 skip reading the header again if it has already been read.
 {quote}
 But that is not the case. In the 0.98 code, prefetchedHeader is 
 threadlocal for one storefile reader, and in the RegionScanner 
 lifecycle, different rpc handlers will serve scan requests of the same 
 scanner. Even though one handler of a previous scan call prefetched the next 
 block header, the other handlers of the current scan call will still trigger 
 a backward seek operation. The process is like this:
 # rs handler1 serves the scan call, reads block1 and prefetches the header of 
 block2
 # rs handler2 serves the same scanner's next scan call, because rs handler2 
 doesn't know the header of block2 already prefetched by rs handler1, triggers 
 a backward seek and reads block2, and prefetches the header of block3.
 It is not a sequential read, so I think the threadlocal is useless and 
 should be abandoned. I did the work and evaluated the performance of one, 
 two, and four clients scanning the same region with one 
 storefile.  The test environment is
 # A hdfs cluster with a namenode, a secondary namenode , a datanode in a 
 machine
 # A hbase cluster with a zk, a master, a regionserver in the same machine
 # clients are also in the same machine.
 So all the data is local. The storefile is about 22.7GB from our online data, 
 18995949 kvs. Caching is set to 1000, and setCacheBlocks(false) is used.
 With the improvement, the client total scan time decreases 21% for the one 
 client case and 11% for the two clients case, but the four clients case is 
 almost the same. The detailed test data is the following:
 ||case||client||time(ms)||
 | original | 1 | 306222 |
 | new | 1 | 241313 |
 | original | 2 | 416390 |
 | new | 2 | 369064 |
 | original | 4 | 555986 |
 | new | 4 | 562152 |
 With some modifications (see the comments below), the newest results are 
 ||case||client||time(ms)||case||client||time(ms)||case||client||time(ms)||
 |original|1|306222|new with synchronized|1|239510|new with 
 AtomicReference|1|241243|
 |original|2|416390|new with synchronized|2|365367|new with 
 AtomicReference|2|368952|
 |original|4|555986|new with synchronized|4|540642|new with 
 AtomicReference|4|545715|
 |original|8|854029|new with synchronized|8|852137|new with 
 AtomicReference|8|850401|
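The "new with AtomicReference" variant measured above can be sketched roughly as follows: a single slot shared by all RPC handler threads of one file reader replaces the per-thread ThreadLocal, so a header prefetched by handler1 is visible to handler2. All class and method names here are illustrative, not the actual HFileBlock.FSReaderV2 code:

```java
import java.util.concurrent.atomic.AtomicReference;

// A prefetched look-ahead header: the offset of the block it belongs to
// plus the raw header bytes that were read ahead.
class PrefetchedHeader {
    final long offset;
    final byte[] header;

    PrefetchedHeader(long offset, byte[] header) {
        this.offset = offset;
        this.header = header;
    }
}

class FileReaderSketch {
    // One shared slot per reader instead of one slot per handler thread.
    private final AtomicReference<PrefetchedHeader> prefetched = new AtomicReference<>();

    // Called after reading a block: remember the look-ahead header of the
    // next block, visible to whichever handler serves the next scan call.
    void rememberHeader(long nextBlockOffset, byte[] headerBytes) {
        prefetched.set(new PrefetchedHeader(nextBlockOffset, headerBytes));
    }

    // Called before reading a block: atomically consume the slot and reuse
    // the header if it matches, avoiding a backward seek.
    byte[] takeHeaderIfMatches(long blockOffset) {
        PrefetchedHeader h = prefetched.getAndSet(null);
        if (h != null && h.offset == blockOffset) {
            return h.header;   // no backward seek needed
        }
        return null;           // fall back to reading the header from disk
    }
}
```

getAndSet(null) makes the slot single-consumer, so two handlers racing on the same offset cannot both claim the prefetched bytes.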





[jira] [Updated] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10531:
---

Status: Open  (was: Patch Available)

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
 HBASE-10531_2.patch, HBASE-10531_3.patch, HBASE-10531_4.patch, 
 HBASE-10531_5.patch, HBASE-10531_6.patch, HBASE-10531_7.patch, 
 HBASE-10531_8.patch, HBASE-10531_9.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qual, type and ts, and 
 the caller forms this by using kv.getBuffer, which is actually deprecated.  
 So see how this can be achieved considering kv.getBuffer is removed.





[jira] [Updated] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-26 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10531:
---

Attachment: HBASE-10531_12.patch

Latest patch.  All test cases pass with this.  The bug in the 
SamePrefixComparator in the previous patch has been corrected.

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
 HBASE-10531_12.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, 
 HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, 
 HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qual, type and ts, and 
 the caller forms this by using kv.getBuffer, which is actually deprecated.  
 So see how this can be achieved considering kv.getBuffer is removed.





[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948172#comment-13948172
 ] 

stack commented on HBASE-10830:
---

Looks like this issue is in 0.98.0; if I go back to 0.96.0, I can't run the 
command line pasted.

 Integration test MR jobs attempt to load htrace jars from the wrong location
 

 Key: HBASE-10830
 URL: https://issues.apache.org/jira/browse/HBASE-10830
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.2


 The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
 htrace JAR from the local Maven cache but get confused and use a HDFS URI.
 {noformat}
 Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec  
 FAILURE!
 testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
   Time elapsed: 0.488 sec   ERROR!
 java.io.FileNotFoundException: File does not exist: 
 hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
 at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
 at 
 org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
 {noformat}





[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948177#comment-13948177
 ] 

Andrew Purtell commented on HBASE-10830:


bq. I can't run the command-line pasted.

You mean {{mvn -DskipTests clean install && cd hbase-it && mvn verify}}? This is 
what the online manual says to do to run hbase-it.

 Integration test MR jobs attempt to load htrace jars from the wrong location
 

 Key: HBASE-10830
 URL: https://issues.apache.org/jira/browse/HBASE-10830
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.2


 The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
 htrace JAR from the local Maven cache but get confused and use a HDFS URI.
 {noformat}
 Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec  
 FAILURE!
 testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
   Time elapsed: 0.488 sec   ERROR!
 java.io.FileNotFoundException: File does not exist: 
 hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
 at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
 at 
 org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
 {noformat}





[jira] [Commented] (HBASE-10841) Scan setters should consistently return this

2014-03-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948178#comment-13948178
 ] 

Nick Dimiduk commented on HBASE-10841:
--

It looks like this task should cascade to all implementations of 
{{Attributes}}, which includes at least: {{OperationWithAttributes}}, 
{{Query}}, {{Scan}}, {{InternalScan}} and {{Get}}. Does that sound familiar, 
[~anoop.hbase]?

 Scan setters should consistently return this
 

 Key: HBASE-10841
 URL: https://issues.apache.org/jira/browse/HBASE-10841
 Project: HBase
  Issue Type: Improvement
  Components: Client, Usability
Affects Versions: 0.99.0
Reporter: Nick Dimiduk
Priority: Minor

 While addressing review comments on HBASE-10818, I noticed that our {{Scan}} 
 class is inconsistent in its setter methods. Some of them return {{this}}, 
 others don't. They should be consistent. I suggest making them all return 
 {{this}}, to support chained invocation.





[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948181#comment-13948181
 ] 

Andrew Purtell commented on HBASE-10830:


bq. Looks like this issue is in 0.98.0

Could easily be so, IIRC I only used IntegrationTestsDriver against a cluster 
last time. 

 Integration test MR jobs attempt to load htrace jars from the wrong location
 

 Key: HBASE-10830
 URL: https://issues.apache.org/jira/browse/HBASE-10830
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.2


 The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
 htrace JAR from the local Maven cache but get confused and use a HDFS URI.
 {noformat}
 Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec  
 FAILURE!
 testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
   Time elapsed: 0.488 sec   ERROR!
 java.io.FileNotFoundException: File does not exist: 
 hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
 at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
 at 
 org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
 {noformat}





[jira] [Updated] (HBASE-10829) Flush is skipped after log replay if the last recovered edits file is skipped

2014-03-26 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10829:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to 0.99, 0.98 and 0.96. Thanks for reviews. 

 Flush is skipped after log replay if the last recovered edits file is skipped
 -

 Key: HBASE-10829
 URL: https://issues.apache.org/jira/browse/HBASE-10829
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.99.0, 0.98.2, 0.96.3

 Attachments: hbase-10829_v1.patch, hbase-10829_v2.patch, 
 hbase-10829_v3.patch


 We caught this in an extended test run where IntegrationTestBigLinkedList 
 failed with some missing keys. 
 The problem is that HRegion.replayRecoveredEdits() would return -1 if all the 
 edits in the log file are skipped, which is true for example if the log file 
 only contains a single compaction record (HBASE-2231) or the edits somehow 
 cannot be applied (column family deleted, etc.). 
 The caller, HRegion.replayRecoveredEditsIfAny(), only looks at the last 
 returned seqId to decide whether a flush is necessary before opening 
 the region and discarding the replayed recovered edits files. 
 Therefore, if the last recovered edits file is skipped but some edits from 
 earlier recovered edits files are applied, the mandatory flush before opening 
 the region is skipped. If the region server dies after this point before a 
 flush, the edits are lost. 
 This is important to fix, though the sequence of events is super rare for a 
 production cluster. 
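The control-flow bug described above can be sketched abstractly (illustrative code, not the actual HRegion logic): deciding the flush from only the last file's return value loses edits applied from earlier files when the last file is fully skipped (returns -1), while tracking the maximum seqId across all files does not.

```java
class RecoveredEditsSketch {
    // Stand-in for replayRecoveredEdits: returns the last seqId applied
    // from one file, or -1 if every edit in that file was skipped.
    static long replayOneFile(long[] appliedSeqIds) {
        long last = -1;
        for (long id : appliedSeqIds) last = Math.max(last, id);
        return last;
    }

    // Buggy shape: the flush decision sees only the LAST file's result,
    // so a final skipped file (-1) masks edits applied from earlier files.
    static boolean needsFlushBuggy(long[][] files, long minSeqId) {
        long seqId = -1;
        for (long[] f : files) seqId = replayOneFile(f);
        return seqId > minSeqId;
    }

    // Fixed shape: track the maximum applied seqId across ALL files, so
    // the flush happens whenever any earlier edits were applied.
    static boolean needsFlushFixed(long[][] files, long minSeqId) {
        long maxSeqId = -1;
        for (long[] f : files) maxSeqId = Math.max(maxSeqId, replayOneFile(f));
        return maxSeqId > minSeqId;
    }
}
```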





[jira] [Updated] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10839:


Attachment: hbase-10839.patch

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-10839.patch


 The initialization of the secure rpc server depends on the regionserver's 
 servername and ZooKeeper watcher. But after HBASE-10569, they are null when 
 creating secure rpc services.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.init(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.init(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:234)
 {code}





[jira] [Updated] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10839:


Status: Patch Available  (was: Open)

Attached a patch that I am testing now.

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-10839.patch


 The initialization of the secure rpc server depends on the regionserver's 
 servername and ZooKeeper watcher. But after HBASE-10569, they are null when 
 creating secure rpc services.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.init(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.init(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:234)
 {code}





[jira] [Commented] (HBASE-10772) Use ByteRanges instead of ByteBuffers in BlockCache

2014-03-26 Thread Matt Corgan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948187#comment-13948187
 ] 

Matt Corgan commented on HBASE-10772:
-

{quote}But I think still we may have to have somethings similar to slice(), 
duplicate(){quote}i think that's ok, but keep in mind that the ByteRange is 
reusable in tight loops, so you may sometimes be able to do something like 
byteRange.shallowCopyOf(otherByteRange)
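The reuse pattern mentioned in the comment can be sketched with a toy range class (hypothetical, not HBase's actual ByteRange API): shallowCopyOf re-points an existing object at another range's backing array without copying bytes, so a tight loop can reuse one long-lived instance instead of allocating a new slice per iteration.

```java
// Toy sketch of a mutable, reusable byte range; illustrative only.
class ByteRangeSketch {
    byte[] bytes;
    int offset;
    int length;

    // Point this range at a backing array (no data copy).
    ByteRangeSketch set(byte[] bytes, int offset, int length) {
        this.bytes = bytes;
        this.offset = offset;
        this.length = length;
        return this;
    }

    // Re-point this range at the same backing array as `other`, analogous
    // to the shallowCopyOf call in the comment: the bytes are shared, only
    // the (array, offset, length) triple is copied.
    ByteRangeSketch shallowCopyOf(ByteRangeSketch other) {
        this.bytes = other.bytes;
        this.offset = other.offset;
        this.length = other.length;
        return this;
    }
}
```

In a tight loop, one ByteRangeSketch allocated outside the loop is repeatedly re-pointed via set/shallowCopyOf, producing no per-iteration garbage.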

 Use ByteRanges instead of ByteBuffers in BlockCache
 ---

 Key: HBASE-10772
 URL: https://issues.apache.org/jira/browse/HBASE-10772
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan

 Try replacing the BBs with Byte Ranges in Block cache.  See if this can be 
 done in a pluggable way.





[jira] [Updated] (HBASE-8889) TestIOFencing#testFencingAroundCompaction occasionally fails

2014-03-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-8889:
--

Status: Patch Available  (was: Reopened)

 TestIOFencing#testFencingAroundCompaction occasionally fails
 

 Key: HBASE-8889
 URL: https://issues.apache.org/jira/browse/HBASE-8889
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Blocker
 Fix For: 1.0.0

 Attachments: 8889-v1.txt, TestIOFencing-#8362.tar.gz, 
 TestIOFencing.tar.gz


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6232//testReport/org.apache.hadoop.hbase/TestIOFencing/testFencingAroundCompaction/
  :
 {code}
 java.lang.AssertionError: Timed out waiting for new server to open region
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.apache.hadoop.hbase.TestIOFencing.doTest(TestIOFencing.java:269)
   at 
 org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:205)
 {code}
 {code}
 2013-07-06 23:13:53,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:54,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] 
 hbase.TestIOFencing$CompactionBlockerRegion(102): allowing compactions
 2013-07-06 23:13:55,121 INFO  [pool-1-thread-1] 
 hbase.HBaseTestingUtility(911): Shutting down minicluster
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): 
 Shutting down HBase Cluster
 2013-07-06 23:13:55,121 INFO  
 [RS:0;asf002:39065-smallCompactions-1373152134716] regionserver.HStore(951): 
 Starting compaction of 2 file(s) in family of 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. into 
 tmpdir=hdfs://localhost:50140/user/jenkins/hbase/tabletest/6e62d3b24ea23160931362b60359ff03/.tmp,
  totalSize=108.4k
 ...
 2013-07-06 23:13:55,155 INFO  [RS:0;asf002:39065] 
 regionserver.HRegionServer(2476): Received CLOSE for the region: 
 6e62d3b24ea23160931362b60359ff03 ,which we are already trying to CLOSE
 2013-07-06 23:13:55,157 WARN  [RS:0;asf002:39065] 
 regionserver.HRegionServer(2414): Failed to close 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. - ignoring and 
 continuing
 org.apache.hadoop.hbase.exceptions.NotServingRegionException: The region 
 6e62d3b24ea23160931362b60359ff03 was already closing. New CLOSE request is 
 ignored.
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2479)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2409)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2011)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:903)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:337)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1131)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41)
   at org.apache.hadoop.hbase.security.User.call(User.java:420)
   at org.apache.hadoop.hbase.security.User.access$300(User.java:51)
   at 
 org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948199#comment-13948199
 ] 

stack commented on HBASE-10839:
---

lgtm

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-10839.patch


 The initialization of the secure rpc server depends on the regionserver's servername 
 and ZooKeeper watcher. But after HBASE-10569, they are null when the secure 
 rpc services are created.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.init(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.init(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.init(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.init(HMaster.java:234)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10842) Some logger not declared static final

2014-03-26 Thread Richard Ding (JIRA)
Richard Ding created HBASE-10842:


 Summary: Some logger not declared static final
 Key: HBASE-10842
 URL: https://issues.apache.org/jira/browse/HBASE-10842
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.1.1, 0.98.1
Reporter: Richard Ding
Assignee: Richard Ding
Priority: Minor


In a few source files, the logger is defined as 

{code}
private final Log LOG = LogFactory.getLog(MyClass.class);
{code}

This should be changed to static final.

One question is about the following declaration:

{code}
private final Log LOG = LogFactory.getLog(this.getClass());
{code}

In this form, the logger can be shared by derived classes. But one will get an NPE 
when logging from methods that are invoked inside the constructor.
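The NPE scenario in the second form can be reproduced without commons-logging at all. Here is a minimal sketch (the class names `Base`, `Derived`, and `LoggerInitOrder` are illustrative, and a plain `String` field stands in for the per-instance `Log`), showing that an instance field is still null when a method is dispatched from the superclass constructor:

```java
class Base {
    Base() {
        describe(); // virtual call made before Derived's fields are assigned
    }
    void describe() {}
}

class Derived extends Base {
    // Stand-in for: private final Log LOG = LogFactory.getLog(this.getClass());
    // Instance initializers run only AFTER the Base constructor returns.
    private final String log = "logger-for-" + getClass().getSimpleName();

    @Override
    void describe() {
        // When invoked from Base's constructor, 'log' has not been assigned yet.
        LoggerInitOrder.observed = String.valueOf(log);
    }
}

public class LoggerInitOrder {
    static String observed;

    public static void main(String[] args) {
        new Derived();
        System.out.println("log during construction = " + observed); // prints "null"
    }
}
```

A `static final` logger avoids this entirely: the field is assigned during class initialization, which always completes before any constructor of that class runs.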



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948202#comment-13948202
 ] 

stack commented on HBASE-10830:
---

The issue is in 0.96.x too if I pass the hadoop2 profile (I get an NPE trying to 
start datanodes in minidfs if I pass no profile, i.e. we use hadoop1). So this is 
an old issue, probably not worth sinking an RC for, but we should fix it.

 Integration test MR jobs attempt to load htrace jars from the wrong location
 

 Key: HBASE-10830
 URL: https://issues.apache.org/jira/browse/HBASE-10830
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.2


 The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
 htrace JAR from the local Maven cache but get confused and use a HDFS URI.
 {noformat}
 Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec  
 FAILURE!
 testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
   Time elapsed: 0.488 sec   ERROR!
 java.io.FileNotFoundException: File does not exist: 
 hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
 at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
 at 
 org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10830) Integration test MR jobs attempt to load htrace jars from the wrong location

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948209#comment-13948209
 ] 

Andrew Purtell commented on HBASE-10830:


bq. So, this is an old issue probably not worth sinking an RC for but we should 
fix it.

Agree, it doesn't make sense to sink an RC without at least a patch available first.

 Integration test MR jobs attempt to load htrace jars from the wrong location
 

 Key: HBASE-10830
 URL: https://issues.apache.org/jira/browse/HBASE-10830
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.2


 The MapReduce jobs submitted by IntegrationTestImportTsv want to load the 
 htrace JAR from the local Maven cache but get confused and use a HDFS URI.
 {noformat}
 Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.489 sec  
 FAILURE!
 testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv)
   Time elapsed: 0.488 sec   ERROR!
 java.io.FileNotFoundException: File does not exist: 
 hdfs://localhost:37548/home/apurtell/.m2/repository/org/cloudera/htrace/htrace-core/2.04/htrace-core-2.04.jar
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
 at 
 org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
 at org.apache.hadoop.hbase.mapreduce.ImportTsv.run(ImportTsv.java:603)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:270)
 at 
 org.apache.hadoop.hbase.mapreduce.TestImportTsv.doMROnTableTest(TestImportTsv.java:232)
 at 
 org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv.testGenerateAndLoad(IntegrationTestImportTsv.java:206)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10837) Filters failing to compare negative numbers (int,float,double or long)

2014-03-26 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-10837.
--

Resolution: Duplicate

Looks like the create issue button double-posted.

 Filters failing to compare negative numbers (int,float,double or long) 
 ---

 Key: HBASE-10837
 URL: https://issues.apache.org/jira/browse/HBASE-10837
 Project: HBase
  Issue Type: Improvement
  Components: Filters
Affects Versions: 0.94.2
 Environment: Pseudo Mode
Reporter: Chandraprakash Sahu
Priority: Blocker
  Labels: features

  I have come across an issue while using filters to get a result.
 For eg.
 I have created a table and its specifications are as follows :
 table name -- test
 column family -- cf
 row keys -- rowKey1 - rowKey10 (10 different row keys)
 column qualifier -- integerData
 For different rowkeys, the qualifier 'integerData' contains either positive 
 or negative integer values (data loaded randomly).
 Now, while I am trying to retrieve the data from the table based on a filter 
 condition, it's failing to give the desired result.
 For eg. say,
 My table contains following data :
 [-50,-40,-30,-20,-10,10,20,30,40,50]
 I want to get only those values which are greater than or equal to 40.
 Following is the code for the filter set on scan :
 
 {code}
 Scan scan = new Scan();
 scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("integerData"));

 int i = 40;
 Filter filter = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
     new BinaryComparator(Bytes.toBytes(i)));
 scan.setFilter(filter);
 {code}
 The result should be : 40 and 50
 BUT, the actual result is : -50, -40, -30, -20, -10, 40, 50
 I have read few posts which addressed this issue, and few people provided the 
 solution as:
 1) write a custom comparator, as BinaryComparator is not meant for number 
 comparison
 OR
 2) retrieve all the values as integer and then compare
 BUT, I want to know if there is any other way to achieve this.
 Because this seems to be a very basic need, i.e. comparing numbers, and I 
 feel HBase should have something straightforward to deal with this.
 This comparison fails only when negative numbers are involved.
 I am not able to find the right way to do it.
 My HBase version is 0.94.2 and I am running it in pseudo mode.
 Can anyone help me on this? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-10836) Filters failing to compare negative numbers (int,float,double or long)

2014-03-26 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-10836.
--

Resolution: Invalid

Resolving question that should be posted to user@.

 Filters failing to compare negative numbers (int,float,double or long)
 --

 Key: HBASE-10836
 URL: https://issues.apache.org/jira/browse/HBASE-10836
 Project: HBase
  Issue Type: Brainstorming
  Components: Filters
Affects Versions: 0.94.2
 Environment: Pseudo-mode
Reporter: Chaitanya Kumar
  Labels: features

 I have come across an issue while using filters to get a result.
 For eg.
 I have created a table and its specifications are as follows :
 table name -- test
 column family -- cf
 row keys -- rowKey1 - rowKey10 (10 different row keys)
 column qualifier -- integerData
 For different rowkeys, the qualifier 'integerData' contains either positive 
 or negative integer values (data loaded randomly).
 Now, while I am trying to retrieve the data from the table based on a filter 
 condition, it's failing to give the desired result.
 For eg. say,
 My table contains following data :
 [-50,-40,-30,-20,-10,10,20,30,40,50]
 I want to get only those values which are greater than or equal to 40.
 Following is the code for the filter set on scan :
 
 {code}
 Scan scan = new Scan();
 scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("integerData"));

 int i = 40;
 Filter filter = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
     new BinaryComparator(Bytes.toBytes(i)));
 scan.setFilter(filter);
 {code}
 The result should be : 40 and 50
 BUT, the actual result is : -50, -40, -30, -20, -10, 40, 50
 I have read few posts which addressed this issue, and few people provided the 
 solution as:
 1) write a custom comparator, as BinaryComparator is not meant for number 
 comparison
 OR
 2) retrieve all the values as integer and then compare
 BUT, I want to know if there is any other way to achieve this.
 Because this seems to be a very basic need, i.e. comparing numbers, and I 
 feel HBase should have something straightforward to deal with this.
 This comparison fails only when negative numbers are involved.
 I am not able to find the right way to do it.
 My HBase version is 0.94.2 and I am running it in pseudo mode.
 Can anyone help me on this? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10836) Filters failing to compare negative numbers (int,float,double or long)

2014-03-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948222#comment-13948222
 ] 

Nick Dimiduk commented on HBASE-10836:
--

This is a question for the 
[user@|http://mail-archives.apache.org/mod_mbox/hbase-user/] mailing list, not 
a jira ticket.

HBase is a byte[] comparison machine at its core. It is unaware of user-level 
data types. If you want to order/compare/filter integers natively in HBase, you 
need to use an encoding scheme that preserves order. {{Bytes#toBytes(int)}} 
does not. Have a look at the {{OrderedBytes}} utility class included in 0.96 
for alternative numeric encodings. You may also be interested in HBASE-10789.
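The underlying byte-order problem is easy to see without HBase on the classpath. In this hedged sketch, `enc` and `cmp` are simplified stand-ins for `Bytes.toBytes(int)` (big-endian two's complement) and `BinaryComparator` (unsigned lexicographic compare), and `encOrdered` illustrates the sign-bit flip that order-preserving encodings rely on:

```java
import java.nio.ByteBuffer;

public class SignBitDemo {
    // Big-endian two's-complement encoding, like Bytes.toBytes(int)
    public static byte[] enc(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Unsigned lexicographic compare, like BinaryComparator
    public static int cmp(byte[] a, byte[] b) {
        for (int i = 0; i < 4; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return 0;
    }

    // Order-preserving variant: flipping the sign bit maps
    // Integer.MIN_VALUE..MAX_VALUE onto 0x00000000..0xFFFFFFFF monotonically
    public static byte[] encOrdered(int v) {
        return enc(v ^ Integer.MIN_VALUE);
    }

    public static void main(String[] args) {
        // Raw encoding: -40 starts with byte 0xFF, so it sorts AFTER 40
        System.out.println(cmp(enc(-40), enc(40)) > 0);               // true
        // Sign bit flipped: byte order now matches numeric order
        System.out.println(cmp(encOrdered(-40), encOrdered(40)) < 0); // true
    }
}
```

This is why the scan above returns the negative values: their encodings begin with a set sign bit and therefore compare greater than any non-negative encoding.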

 Filters failing to compare negative numbers (int,float,double or long)
 --

 Key: HBASE-10836
 URL: https://issues.apache.org/jira/browse/HBASE-10836
 Project: HBase
  Issue Type: Brainstorming
  Components: Filters
Affects Versions: 0.94.2
 Environment: Pseudo-mode
Reporter: Chaitanya Kumar
  Labels: features

 I have come across an issue while using filters to get a result.
 For eg.
 I have created a table and its specifications are as follows :
 table name -- test
 column family -- cf
 row keys -- rowKey1 - rowKey10 (10 different row keys)
 column qualifier -- integerData
 For different rowkeys, the qualifier 'integerData' contains either positive 
 or negative integer values (data loaded randomly).
 Now, while I am trying to retrieve the data from the table based on a filter 
 condition, it's failing to give the desired result.
 For eg. say,
 My table contains following data :
 [-50,-40,-30,-20,-10,10,20,30,40,50]
 I want to get only those values which are greater than or equal to 40.
 Following is the code for the filter set on scan :
 
 {code}
 Scan scan = new Scan();
 scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("integerData"));

 int i = 40;
 Filter filter = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
     new BinaryComparator(Bytes.toBytes(i)));
 scan.setFilter(filter);
 {code}
 The result should be : 40 and 50
 BUT, the actual result is : -50, -40, -30, -20, -10, 40, 50
 I have read few posts which addressed this issue, and few people provided the 
 solution as:
 1) write a custom comparator, as BinaryComparator is not meant for number 
 comparison
 OR
 2) retrieve all the values as integer and then compare
 BUT, I want to know if there is any other way to achieve this.
 Because this seems to be a very basic need, i.e. comparing numbers, and I 
 feel HBase should have something straightforward to deal with this.
 This comparison fails only when negative numbers are involved.
 I am not able to find the right way to do it.
 My HBase version is 0.94.2 and I am running it in pseudo mode.
 Can anyone help me on this? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10836) Filters failing to compare negative numbers (int,float,double or long)

2014-03-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948224#comment-13948224
 ] 

Nick Dimiduk commented on HBASE-10836:
--

That's 
[OrderedBytes|http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/OrderedBytes.html].

 Filters failing to compare negative numbers (int,float,double or long)
 --

 Key: HBASE-10836
 URL: https://issues.apache.org/jira/browse/HBASE-10836
 Project: HBase
  Issue Type: Brainstorming
  Components: Filters
Affects Versions: 0.94.2
 Environment: Pseudo-mode
Reporter: Chaitanya Kumar
  Labels: features

 I have come across an issue while using filters to get a result.
 For eg.
 I have created a table and its specifications are as follows :
 table name -- test
 column family -- cf
 row keys -- rowKey1 - rowKey10 (10 different row keys)
 column qualifier -- integerData
 For different rowkeys, the qualifier 'integerData' contains either positive 
 or negative integer values (data loaded randomly).
 Now, while I am trying to retrieve the data from the table based on a filter 
 condition, it's failing to give the desired result.
 For eg. say,
 My table contains following data :
 [-50,-40,-30,-20,-10,10,20,30,40,50]
 I want to get only those values which are greater than or equal to 40.
 Following is the code for the filter set on scan :
 
 {code}
 Scan scan = new Scan();
 scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("integerData"));

 int i = 40;
 Filter filter = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
     new BinaryComparator(Bytes.toBytes(i)));
 scan.setFilter(filter);
 {code}
 The result should be : 40 and 50
 BUT, the actual result is : -50, -40, -30, -20, -10, 40, 50
 I have read few posts which addressed this issue, and few people provided the 
 solution as:
 1) write a custom comparator, as BinaryComparator is not meant for number 
 comparison
 OR
 2) retrieve all the values as integer and then compare
 BUT, I want to know if there is any other way to achieve this.
 Because this seems to be a very basic need, i.e. comparing numbers, and I 
 feel HBase should have something straightforward to deal with this.
 This comparison fails only when negative numbers are involved.
 I am not able to find the right way to do it.
 My HBase version is 0.94.2 and I am running it in pseudo mode.
 Can anyone help me on this? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10788) Add 99th percentile of latency in PE

2014-03-26 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948242#comment-13948242
 ] 

Nick Dimiduk commented on HBASE-10788:
--

nit: this patch has some extra ws. Please clean it up on commit.

The thread name is not relevant in MR mode, and is redundant in your example 
output above.

{noformat}
+  String metricName =
+      testName + "-Client-" + Thread.currentThread().getName() + "-testRowTime";
{noformat}

This will inaccurately count any results from tests that respect --sampleRate.

{noformat}
+long startTime = System.currentTimeMillis();
 testRow(i);
+latency.update(System.currentTimeMillis() - startTime);
{noformat}
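A hedged sketch of the concern: if a test honors a sample rate, the timer must wrap only the rows that are actually executed, otherwise skipped rows distort the latency statistics. Here `sampleRate`, `testRow`, and the list-based histogram are illustrative stand-ins for the PE internals, not the actual PerformanceEvaluation code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SampledLatency {
    static final List<Long> latencies = new ArrayList<>(); // stand-in histogram
    static final Random rand = new Random(42);

    // Pretend to perform one HBase operation
    static void testRow(int i) throws InterruptedException {
        Thread.sleep(1);
    }

    public static void main(String[] args) throws InterruptedException {
        double sampleRate = 0.1; // illustrative stand-in for --sampleRate
        for (int i = 0; i < 100; i++) {
            if (rand.nextDouble() >= sampleRate) {
                continue; // skipped rows must NOT contribute a measurement
            }
            long start = System.nanoTime();
            testRow(i);
            latencies.add((System.nanoTime() - start) / 1_000_000L);
        }
        System.out.println("timed " + latencies.size() + " of 100 rows");
    }
}
```

Timing outside the sampling check, as in the quoted diff, would record a (near-zero) latency for every skipped row and drag the percentiles down.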

 Add 99th percentile of latency in PE
 

 Key: HBASE-10788
 URL: https://issues.apache.org/jira/browse/HBASE-10788
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.99.0
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-10788-trunk-v1.diff, HBASE-10788-trunk-v2.diff


 In production env, 99th percentile of latency is more important than the avg. 
 The 99th percentile is helpful to measure the influence of GC, slow 
 read/write of HDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10842) Some logger not declared static final

2014-03-26 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated HBASE-10842:
-

Attachment: HBASE-10842.patch

This patch fixed the first case. 

 Some logger not declared static final
 -

 Key: HBASE-10842
 URL: https://issues.apache.org/jira/browse/HBASE-10842
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.1, 0.96.1.1
Reporter: Richard Ding
Assignee: Richard Ding
Priority: Minor
 Attachments: HBASE-10842.patch


 In a few source files, the logger is defined as 
 {code}
 private final Log LOG = LogFactory.getLog(MyClass.class);
 {code}
 This should be changed to static final.
 One question is about the following declaration:
 {code}
 private final Log LOG = LogFactory.getLog(this.getClass());
 {code}
 In this form, the logger can be shared by derived classes. But one will get an 
 NPE when logging from methods that are invoked inside the constructor.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10804) Add a validations step to ExportSnapshot

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10804:
---

Fix Version/s: (was: 0.98.2)
   0.98.1

 Add a validations step to ExportSnapshot
 

 Key: HBASE-10804
 URL: https://issues.apache.org/jira/browse/HBASE-10804
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Affects Versions: 0.99.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10804-v0.patch


 HBASE-10111 added the validation on restore/clone.
 Add the same check at the end of the export, just to avoid an exit 0 if 
 something went wrong post-copy.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10825) Add copy-from option to ExportSnapshot

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10825:
---

Fix Version/s: (was: 0.98.2)
   0.98.1

 Add copy-from option to ExportSnapshot
 --

 Key: HBASE-10825
 URL: https://issues.apache.org/jira/browse/HBASE-10825
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10825-v0.patch


 To simplify the import of a snapshot, add a -copy-from option.
 That is basically the manual equivalent of -Dhbase.rootdir=hdfs://srv/hbase/ 
 -Dfs.defaultFS=hdfs://srv/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10828) TestRegionObserverInterface#testHBase3583 should wait for all regions to be assigned

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10828:
---

Fix Version/s: (was: 0.98.2)
   0.98.1

 TestRegionObserverInterface#testHBase3583 should wait for all regions to be 
 assigned
 

 Key: HBASE-10828
 URL: https://issues.apache.org/jira/browse/HBASE-10828
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.98.1, 0.99.0

 Attachments: 10828-v1.txt


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/9086/testReport/org.apache.hadoop.hbase.coprocessor/TestRegionObserverInterface/testHBase3583/
  :
 {code}
 2014-03-25 07:28:14,714 DEBUG [RegionOpenAndInitThread-testHBase3583-1] 
 regionserver.HRegion(563): Instantiated testHBase3583,,1395732494518. 
   0cf2d2dd97dfedc860c5aa76c193e3e4.
 2014-03-25 07:28:14,714 DEBUG [RegionOpenAndInitThread-testHBase3583-1] 
 regionserver.HRegion(1037): Closing 
 testHBase3583,,1395732494518.0cf2d2dd97dfedc860c5aa76c193e3e4.: disabling 
 compactions & flushes
 2014-03-25 07:28:14,714 DEBUG [RegionOpenAndInitThread-testHBase3583-1] 
 regionserver.HRegion(1064): Updates disabled for region 
 testHBase3583,,1395732494518.   0cf2d2dd97dfedc860c5aa76c193e3e4.
 ...
 2014-03-25 07:28:14,763 INFO  [AM.ZK.Worker-pool3-t15] 
 master.RegionStates(316): Transitioned {0cf2d2dd97dfedc860c5aa76c193e3e4 
 state=PENDING_OPEN, ts=1395732494729,   
 server=asf002.sp2.ygridcore.net,45836,1395732453985} to 
 {0cf2d2dd97dfedc860c5aa76c193e3e4 state=OPENING, ts=1395732494763, 
 server=asf002.sp2.ygridcore.net,45836,   1395732453985}
 2014-03-25 07:28:14,778 INFO  [RS_LOG_REPLAY_OPS-asf002:45836-0-Writer-2] 
 zookeeper.RecoverableZooKeeper(120): Process identifier=hconnection-0x145ee7f 
 connecting to   ZooKeeper ensemble=localhost:50878
 2014-03-25 07:28:14,784 DEBUG 
 [RS_LOG_REPLAY_OPS-asf002:45836-0-Writer-2-EventThread] 
 zookeeper.ZooKeeperWatcher(309): hconnection-0x145ee7f, 
 quorum=localhost:50878,   baseZNode=/hbase Received ZooKeeper Event, 
 type=None, state=SyncConnected, path=null
 2014-03-25 07:28:14,785 DEBUG 
 [RS_LOG_REPLAY_OPS-asf002:45836-0-Writer-2-EventThread] 
 zookeeper.ZooKeeperWatcher(393): hconnection-0x145ee7f-0x144f82314c6001b 
 connected
 2014-03-25 07:28:14,788 INFO  
 [StoreOpener-0cf2d2dd97dfedc860c5aa76c193e3e4-1] 
 compactions.CompactionConfiguration(88): size [134217728, 
 9223372036854775807); files [3,10); ratio 1.20; off-peak ratio 
 5.00; throttle point 2684354560; delete expired; major period 60480, 
 major jitter 0.50
 2014-03-25 07:28:14,793 DEBUG [RS_LOG_REPLAY_OPS-asf002:45836-0-Writer-2] 
 zookeeper.ZKUtil(689): hconnection-0x145ee7f-0x144f82314c6001b, 
 quorum=localhost:50878,   baseZNode=/hbase Unable to get data of 
 znode /hbase/table/TestTable.testRecovery because node does not exist (not an 
 error)
 2014-03-25 07:28:14,796 INFO  
 [StoreOpener-0cf2d2dd97dfedc860c5aa76c193e3e4-1] 
 compactions.CompactionConfiguration(88): size [134217728, 
 9223372036854775807); files [3,10); ratio 1.20; off-peak ratio 
 5.00; throttle point 2684354560; delete expired; major period 60480, 
 major jitter 0.50
 2014-03-25 07:28:14,808 ERROR [Priority.RpcServer.handler=7,port=45836] 
 coprocessor.CoprocessorHost(482): The coprocessor 
 org.apache.hadoop.hbase.coprocessor.  SimpleRegionObserver threw 
 an unexpected exception
 java.lang.AssertionError
   at org.junit.Assert.fail(Assert.java:86)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.junit.Assert.assertNotNull(Assert.java:621)
   at org.junit.Assert.assertNotNull(Assert.java:631)
   at 
 org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver.postGetClosestRowBefore(SimpleRegionObserver.java:512)
   at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postGetClosestRowBefore(RegionCoprocessorHost.java:970)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1821)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2851)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29493)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2020)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
   at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:162)
   at 
 org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
   at 
 

[jira] [Updated] (HBASE-10842) Some logger not declared static final

2014-03-26 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated HBASE-10842:
-

Status: Patch Available  (was: Open)

 Some logger not declared static final
 -

 Key: HBASE-10842
 URL: https://issues.apache.org/jira/browse/HBASE-10842
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.1.1, 0.98.1
Reporter: Richard Ding
Assignee: Richard Ding
Priority: Minor
 Attachments: HBASE-10842.patch


 In a few source files, the logger is defined as 
 {code}
 private final Log LOG = LogFactory.getLog(MyClass.class);
 {code}
 This should be changed to static final.
 One question is about the following declaration:
 {code}
 private final Log LOG = LogFactory.getLog(this.getClass());
 {code}
 In this form, the logger can be shared by derived classes. But one will get 
 NPE when logging in methods that are invoked inside the constructors.
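The NPE mentioned above follows from Java's initialization order: instance-field initializers run only after the superclass constructor has returned, so a method dispatched from the super constructor still sees the field as null. Below is a minimal, library-free sketch of that ordering; a plain String stands in for the commons-logging Log, and all class and field names are illustrative, not from the HBase code base.

```java
public class LoggerInitDemo {
    public static class Base {
        Base() { describe(); }                 // constructor calls an overridable method
        void describe() {}
    }
    public static class Derived extends Base {
        public static String observed = "unset";           // records what the override saw
        private final String log = "per-instance logger";  // initialized AFTER Base() returns
        @Override void describe() {
            observed = String.valueOf(log);    // runs during Base(): log is still null here
        }
    }
    public static void main(String[] args) {
        new Derived();
        // prints "value seen during construction: null"
        System.out.println("value seen during construction: " + Derived.observed);
    }
}
```

A static final logger sidesteps this entirely, since class initialization completes before any instance is constructed.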



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10840) Fix findbug warn induced by HBASE-10569

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948289#comment-13948289
 ] 

Hadoop QA commented on HBASE-10840:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636952/HBASE-10840_V2.patch
  against trunk revision .
  ATTACHMENT ID: 12636952

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9099//console

This message is automatically generated.

 Fix findbug warn induced by HBASE-10569
 ---

 Key: HBASE-10840
 URL: https://issues.apache.org/jira/browse/HBASE-10840
 Project: HBase
  Issue Type: Bug
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
 Attachments: HBASE-10840.patch, HBASE-10840_V2.patch


 Unknown bug pattern URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD in 
 org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$CostFunction.conf



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7847) Use zookeeper multi to clear znodes

2014-03-26 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948300#comment-13948300
 ] 

Rakesh R commented on HBASE-7847:
-

Thanks [~stack] for pointing me to these cases. I'll test and will update you 
tomorrow.

 Use zookeeper multi to clear znodes
 ---

 Key: HBASE-7847
 URL: https://issues.apache.org/jira/browse/HBASE-7847
 Project: HBase
  Issue Type: Sub-task
Reporter: Ted Yu
Assignee: Rakesh R
 Attachments: 7847-v1.txt, 7847_v6.patch, 7847_v6.patch, 
 HBASE-7847.patch, HBASE-7847.patch, HBASE-7847.patch, HBASE-7847_v4.patch, 
 HBASE-7847_v5.patch, HBASE-7847_v6.patch


 In ZKProcedureUtil, clearChildZNodes() and clearZNodes(String procedureName) 
 should utilize zookeeper multi so that they're atomic



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10838) Insufficient AccessController covering permission check

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10838:
---

Fix Version/s: (was: 0.98.2)
   0.98.1

 Insufficient AccessController covering permission check
 ---

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead, we should have used the limit 
 option on the next() call.
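A pure-Java simulation (not the HBase client API) of the difference the report describes: capping results per column family silently drops the remaining cells, while a batch limit on the next() call merely paginates them. The cell names and helper methods here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class MaxResultsDemo {
    // Cells of one row/family: 3 versions each of 2 qualifiers, newest first.
    public static List<String> cells() {
        List<String> c = new ArrayList<>();
        for (String q : new String[]{"q1", "q2"})
            for (int v = 3; v >= 1; v--) c.add(q + "/v" + v);
        return c;
    }
    // Like setMaxResultsPerColumnFamily(n): at most n cells per family, rest discarded.
    public static List<String> capPerFamily(List<String> cells, int n) {
        return new ArrayList<>(cells.subList(0, Math.min(n, cells.size())));
    }
    // Like a batch limit on next(): paginates, nothing is lost across calls.
    public static List<List<String>> paginate(List<String> cells, int limit) {
        List<List<String>> pages = new ArrayList<>();
        for (int i = 0; i < cells.size(); i += limit)
            pages.add(new ArrayList<>(cells.subList(i, Math.min(i + limit, cells.size()))));
        return pages;
    }
    public static void main(String[] args) {
        System.out.println(capPerFamily(cells(), 1));    // [q1/v3] -- the other 5 cells are lost
        System.out.println(paginate(cells(), 1).size()); // 6 pages -- every cell survives
    }
}
```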



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10829) Flush is skipped after log replay if the last recovered edits file is skipped

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10829:
---

Fix Version/s: (was: 0.98.2)
   0.98.1

 Flush is skipped after log replay if the last recovered edits file is skipped
 -

 Key: HBASE-10829
 URL: https://issues.apache.org/jira/browse/HBASE-10829
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.98.1, 0.99.0, 0.96.3

 Attachments: hbase-10829_v1.patch, hbase-10829_v2.patch, 
 hbase-10829_v3.patch


 We caught this in an extended test run where IntegrationTestBigLinkedList 
 failed with some missing keys. 
 The problem is that HRegion.replayRecoveredEdits() returns -1 if all the 
 edits in the log file are skipped, which happens for example if the log file 
 only contains a single compaction record (HBASE-2231) or the edits 
 cannot be applied (column family deleted, etc.). 
 The caller, HRegion.replayRecoveredEditsIfAny(), only looks at the last 
 returned seqId to decide whether a flush is necessary before opening 
 the region and discarding the replayed recovered edits files. 
 Therefore, if the last recovered edits file is skipped but some edits from 
 earlier recovered edits files are applied, the mandatory flush before opening 
 the region is skipped. If the region server dies after this point and before a 
 flush, those edits are lost. 
 This is important to fix, though the sequence of events is super rare on a 
 production cluster. 
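A toy model of the decision logic described above (not the actual HRegion code): if only the last file's return value is consulted, a fully-skipped last file (-1) masks edits applied from earlier files, whereas tracking the maximum seqId over all files restores the mandatory flush. All method names are illustrative.

```java
public class ReplayFlushDemo {
    // Each entry is the max seqId applied from one recovered edits file,
    // or -1 if every edit in that file was skipped.

    // Buggy logic: only the last file's result decides the flush.
    public static boolean buggyNeedsFlush(long[] perFileSeqIds, long minSeqIdForFlush) {
        long last = perFileSeqIds[perFileSeqIds.length - 1];
        return last > minSeqIdForFlush;   // -1 from a skipped last file masks earlier edits
    }
    // Fixed logic: remember the maximum over all replayed files.
    public static boolean fixedNeedsFlush(long[] perFileSeqIds, long minSeqIdForFlush) {
        long max = -1;
        for (long s : perFileSeqIds) max = Math.max(max, s);
        return max > minSeqIdForFlush;
    }
    public static void main(String[] args) {
        long[] seqIds = {42, -1};   // first file applied edits, last file fully skipped
        System.out.println(buggyNeedsFlush(seqIds, 0));  // false: flush skipped, edits at risk
        System.out.println(fixedNeedsFlush(seqIds, 0));  // true: mandatory flush happens
    }
}
```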



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10842) Some logger not declared static final

2014-03-26 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated HBASE-10842:
-

Attachment: HBASE-10842.patch

Missed two files. Attaching a new patch.

 Some logger not declared static final
 -

 Key: HBASE-10842
 URL: https://issues.apache.org/jira/browse/HBASE-10842
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.1, 0.96.1.1
Reporter: Richard Ding
Assignee: Richard Ding
Priority: Minor
 Attachments: HBASE-10842.patch, HBASE-10842.patch


 In a few source files, the logger is defined as 
 {code}
 private final Log LOG = LogFactory.getLog(MyClass.class);
 {code}
 This should be changed to static final.
 One question is about the following declaration:
 {code}
 private final Log LOG = LogFactory.getLog(this.getClass());
 {code}
 In this form the logger can be shared by derived classes, but one will get an 
 NPE when logging in methods that are invoked inside the constructor.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10843) Prepare HBase for java 8

2014-03-26 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-10843:
-

 Summary: Prepare HBase for java 8
 Key: HBASE-10843
 URL: https://issues.apache.org/jira/browse/HBASE-10843
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.89-fb
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.89-fb






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10825) Add copy-from option to ExportSnapshot

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948379#comment-13948379
 ] 

Hudson commented on HBASE-10825:


SUCCESS: Integrated in HBase-TRUNK #5042 (See 
[https://builds.apache.org/job/HBase-TRUNK/5042/])
HBASE-10825 Add copy-from option to ExportSnapshot (mbertozzi: rev 1581909)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Add copy-from option to ExportSnapshot
 --

 Key: HBASE-10825
 URL: https://issues.apache.org/jira/browse/HBASE-10825
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10825-v0.patch


 To simplify the import of a snapshot, add a -copy-from option.
 That is basically equivalent to manually specifying 
 -Dhbase.rootdir=hdfs://srv/hbase/ -Dfs.defaultFS=hdfs://srv/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10838) Insufficient AccessController covering permission check

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948380#comment-13948380
 ] 

Hudson commented on HBASE-10838:


SUCCESS: Integrated in HBase-TRUNK #5042 (See 
[https://builds.apache.org/job/HBase-TRUNK/5042/])
HBASE-10838 Insufficient AccessController covering permission check (Anoop Sam 
John) (apurtell: rev 1581939)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


 Insufficient AccessController covering permission check
 ---

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead, we should have used the limit 
 option on the next() call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10829) Flush is skipped after log replay if the last recovered edits file is skipped

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948378#comment-13948378
 ] 

Hudson commented on HBASE-10829:


SUCCESS: Integrated in HBase-TRUNK #5042 (See 
[https://builds.apache.org/job/HBase-TRUNK/5042/])
HBASE-10829 Flush is skipped after log replay if the last recovered edits file 
is skipped (enis: rev 1581947)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java


 Flush is skipped after log replay if the last recovered edits file is skipped
 -

 Key: HBASE-10829
 URL: https://issues.apache.org/jira/browse/HBASE-10829
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.98.1, 0.99.0, 0.96.3

 Attachments: hbase-10829_v1.patch, hbase-10829_v2.patch, 
 hbase-10829_v3.patch


 We caught this in an extended test run where IntegrationTestBigLinkedList 
 failed with some missing keys. 
 The problem is that HRegion.replayRecoveredEdits() returns -1 if all the 
 edits in the log file are skipped, which happens for example if the log file 
 only contains a single compaction record (HBASE-2231) or the edits 
 cannot be applied (column family deleted, etc.). 
 The caller, HRegion.replayRecoveredEditsIfAny(), only looks at the last 
 returned seqId to decide whether a flush is necessary before opening 
 the region and discarding the replayed recovered edits files. 
 Therefore, if the last recovered edits file is skipped but some edits from 
 earlier recovered edits files are applied, the mandatory flush before opening 
 the region is skipped. If the region server dies after this point and before a 
 flush, those edits are lost. 
 This is important to fix, though the sequence of events is super rare on a 
 production cluster. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10531) Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948408#comment-13948408
 ] 

Hadoop QA commented on HBASE-10531:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636953/HBASE-10531_12.patch
  against trunk revision .
  ATTACHMENT ID: 12636953

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 34 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ qualCommonPrefix  (current.lastCommonPrefix - (3 + 
right.getRowLength() + right
+List&lt;DataBlockEncoder.EncodedSeeker&gt; encodedSeekers = new 
ArrayList&lt;DataBlockEncoder.EncodedSeeker&gt;();
+HFileBlockEncodingContext encodingCtx = 
getEncodingContext(Compression.Algorithm.NONE, encoding);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9100//console

This message is automatically generated.

 Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
 

 Key: HBASE-10531
 URL: https://issues.apache.org/jira/browse/HBASE-10531
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10531.patch, HBASE-10531_1.patch, 
 HBASE-10531_12.patch, HBASE-10531_2.patch, HBASE-10531_3.patch, 
 HBASE-10531_4.patch, HBASE-10531_5.patch, HBASE-10531_6.patch, 
 HBASE-10531_7.patch, HBASE-10531_8.patch, HBASE-10531_9.patch


 Currently the byte[] key passed to HFileScanner.seekTo and 
 HFileScanner.reseekTo is a combination of row, cf, qualifier, type, and ts, 
 and the caller forms it using kv.getBuffer(), which is deprecated. 
 So see how this can be achieved once kv.getBuffer() is removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10842) Some logger not declared static final

2014-03-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948435#comment-13948435
 ] 

Ted Yu commented on HBASE-10842:


lgtm

 Some logger not declared static final
 -

 Key: HBASE-10842
 URL: https://issues.apache.org/jira/browse/HBASE-10842
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.1, 0.96.1.1
Reporter: Richard Ding
Assignee: Richard Ding
Priority: Minor
 Attachments: HBASE-10842.patch, HBASE-10842.patch


 In a few source files, the logger is defined as 
 {code}
 private final Log LOG = LogFactory.getLog(MyClass.class);
 {code}
 This should be changed to static final.
 One question is about the following declaration:
 {code}
 private final Log LOG = LogFactory.getLog(this.getClass());
 {code}
 In this form the logger can be shared by derived classes, but one will get an 
 NPE when logging in methods that are invoked inside the constructor.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10825) Add copy-from option to ExportSnapshot

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948448#comment-13948448
 ] 

Hudson commented on HBASE-10825:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #236 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/236/])
HBASE-10825 Add copy-from option to ExportSnapshot (mbertozzi: rev 1581911)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java


 Add copy-from option to ExportSnapshot
 --

 Key: HBASE-10825
 URL: https://issues.apache.org/jira/browse/HBASE-10825
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.96.2, 0.98.1, 0.99.0

 Attachments: HBASE-10825-v0.patch


 To simplify the import of a snapshot, add a -copy-from option.
 That is basically equivalent to manually specifying 
 -Dhbase.rootdir=hdfs://srv/hbase/ -Dfs.defaultFS=hdfs://srv/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10838) Insufficient AccessController covering permission check

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948449#comment-13948449
 ] 

Hudson commented on HBASE-10838:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #236 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/236/])
HBASE-10838 Insufficient AccessController covering permission check (Anoop Sam 
John) (apurtell: rev 1581941)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


 Insufficient AccessController covering permission check
 ---

 Key: HBASE-10838
 URL: https://issues.apache.org/jira/browse/HBASE-10838
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Critical
 Fix For: 0.98.1, 0.99.0

 Attachments: HBASE-10838.patch, HBASE-10838_V2.patch


 {code}
 get.setMaxResultsPerColumnFamily(1); // Hold down memory use on wide rows
 {code}
 Setting this returns only one cell irrespective of the number of 
 versions/qualifiers in a family. Instead, we should have used the limit 
 option on the next() call.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10829) Flush is skipped after log replay if the last recovered edits file is skipped

2014-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948447#comment-13948447
 ] 

Hudson commented on HBASE-10829:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #236 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/236/])
HBASE-10829 Flush is skipped after log replay if the last recovered edits file 
is skipped (enis: rev 1581954)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java


 Flush is skipped after log replay if the last recovered edits file is skipped
 -

 Key: HBASE-10829
 URL: https://issues.apache.org/jira/browse/HBASE-10829
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.98.1, 0.99.0, 0.96.3

 Attachments: hbase-10829_v1.patch, hbase-10829_v2.patch, 
 hbase-10829_v3.patch


 We caught this in an extended test run where IntegrationTestBigLinkedList 
 failed with some missing keys. 
 The problem is that HRegion.replayRecoveredEdits() returns -1 if all the 
 edits in the log file are skipped, which happens for example if the log file 
 only contains a single compaction record (HBASE-2231) or the edits 
 cannot be applied (column family deleted, etc.). 
 The caller, HRegion.replayRecoveredEditsIfAny(), only looks at the last 
 returned seqId to decide whether a flush is necessary before opening 
 the region and discarding the replayed recovered edits files. 
 Therefore, if the last recovered edits file is skipped but some edits from 
 earlier recovered edits files are applied, the mandatory flush before opening 
 the region is skipped. If the region server dies after this point and before a 
 flush, those edits are lost. 
 This is important to fix, though the sequence of events is super rare on a 
 production cluster. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10742) Data temperature aware compaction policy

2014-03-26 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948468#comment-13948468
 ] 

Vladimir Rodionov commented on HBASE-10742:
---

{quote}
Reading Identifying Hot and Cold Data in Main-Memory Databases 
{quote}

I am going to host a session at HBaseCon 2014 (HBase: Extreme makeover), and it 
turned out that a part of that presentation is related to the subject. In a few 
words, one can keep track of the hotness of cached blocks by using eviction 
data (LRU: last access time; LFU: total number of accesses), periodically 
sampling the cache and keeping a histogram of the eviction-data distribution; 
then you can easily ask which quantile a particular block belongs to. If it is 
at 0.8 it is quite hot; if it is at 0.2 it is going to be purged soon. I am 
using this technique to dynamically re-compress blocks in a cache. The decision 
of which codec to use is based on data hotness.
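A minimal sketch of the quantile lookup described above, assuming last-access times sampled from the cache (larger means more recently used); the sampling and histogram machinery is elided, and all names are illustrative rather than from any HBase cache implementation.

```java
import java.util.Arrays;

public class HotnessQuantile {
    // Fraction of sampled access times at or below the block's access time:
    // high quantile = recently touched (hot), low quantile = near eviction (cold).
    public static double quantileOf(long[] sampledAccessTimes, long blockAccessTime) {
        long[] sorted = sampledAccessTimes.clone();
        Arrays.sort(sorted);
        int atOrBelow = 0;
        for (long t : sorted) if (t <= blockAccessTime) atOrBelow++;
        return (double) atOrBelow / sorted.length;
    }
    public static void main(String[] args) {
        long[] sample = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
        System.out.println(quantileOf(sample, 85));  // 0.8: quite hot
        System.out.println(quantileOf(sample, 20));  // 0.2: likely purged soon
    }
}
```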
 

 Data temperature aware compaction policy
 

 Key: HBASE-10742
 URL: https://issues.apache.org/jira/browse/HBASE-10742
 Project: HBase
  Issue Type: Brainstorming
Reporter: Andrew Purtell

 Reading Identifying Hot and Cold Data in Main-Memory Databases (Levandoski, 
 Larson, and Stoica), it occurred to me that some of the motivation applies to 
 HBase and some of the results can inform a data temperature aware compaction 
 policy implementation.
 We also wish to optimize retention of cells in the working set in memory, in 
 blockcache. 
 We can also consider further and related performance optimizations in HBase 
 that awareness of hot and cold data can enable, even for cases where the 
 working set does not fit in memory. If we could partition HFiles into hot and 
 cold (cold+lukewarm) and move cells between them at compaction time, then we 
 could:
 - Migrate hot HFiles onto alternate storage tiers with improved read latency 
 and throughput characteristics. This has been discussed before on HBASE-6572. 
 Or, migrate cold HFiles to an archival tier.
 - Preload hot HFiles into blockcache to increase cache hit rates, especially 
 when regions are first brought online. And/or add another LRU priority to 
 increase the likelihood of retention of blocks in hot HFiles. This could be 
 sufficiently different from ARC to avoid issues there. 
 - Reduce the compaction priorities of cold HFiles, with proportional 
 reduction in priority IO and write amplification, since cold files would less 
 frequently participate in reads.
 Levandoski et al. describe determining data temperature with low overhead 
 using an out of band estimation process running in the background over an 
 access log. We could consider logging reads along with mutations and 
 similarly process the result in the background. The WAL could be overloaded 
 to carry access log records, or we could follow the approach described in the 
 paper and maintain an in memory access log only. 
 {quote}
 We chose the offline approach for several reasons. First, as mentioned 
 earlier, the overhead of even the simplest caching scheme is very high. 
 Second, the offline approach is generic and requires minimum changes to the 
 database engine. Third, logging imposes very little overhead during normal 
 operation. Finally, it allows flexibility in when, where, and how to analyze 
 the log and estimate access frequencies. For instance, the analysis can be 
 done on a separate machine, thus reducing overhead on the system running the 
 transactional workloads.
 {quote}
 Importantly, they only log a sample of all accesses.
 {quote}
 To implement sampling, we have each worker thread flip a biased coin before 
 starting a new query (where bias correlates with sample rate). The thread 
 records its accesses in log buffers (or not) based on the outcome of the coin 
 flip. In Section V, we report experimental results showing that sampling 10% 
 of the accesses reduces the accuracy by only 2.5%,
 {quote}
 Likewise we would only record a subset of all accesses to limit overheads.
 The offline process estimates access frequencies over discrete time slices 
 using exponential smoothing. (Markers representing time slice boundaries are 
 interleaved with access records in the log.) Forward and backward 
 classification algorithms are presented. The forward algorithm requires a 
 full scan over the log and storage proportional to the number of unique cell 
 addresses, while the backward algorithm requires reading at least the tail of 
 the log in reverse order.
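The two mechanisms above, sampled access logging with a biased coin and per-slice exponential smoothing (the forward pass), can be sketched compactly. This is an assumed illustration under made-up names and parameters, not the paper's implementation or any HBase code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class AccessEstimator {
    final double sampleRate;    // bias of the coin, e.g. 0.1 records ~10% of accesses
    final double alpha;         // exponential-smoothing factor per time slice
    final Random coin;
    final Map<String, Double> estimate = new HashMap<>();      // smoothed frequency per key
    final Map<String, Integer> sliceCounts = new HashMap<>();  // raw counts in current slice

    public AccessEstimator(double sampleRate, double alpha, long seed) {
        this.sampleRate = sampleRate;
        this.alpha = alpha;
        this.coin = new Random(seed);
    }
    // Called on every access; only a sampled subset is actually recorded.
    public void recordAccess(String key) {
        if (coin.nextDouble() < sampleRate) {
            sliceCounts.merge(key, 1, Integer::sum);
        }
    }
    // Called at each time-slice marker: fold this slice's counts into the estimate.
    public void endSlice() {
        for (Map.Entry<String, Integer> e : sliceCounts.entrySet()) {
            double prev = estimate.getOrDefault(e.getKey(), 0.0);
            estimate.put(e.getKey(), alpha * e.getValue() + (1 - alpha) * prev);
        }
        sliceCounts.clear();
    }
    public double hotness(String key) { return estimate.getOrDefault(key, 0.0); }
}
```

With sampleRate 1.0 the estimator is deterministic; at lower rates the smoothed estimates approximate true frequencies scaled by the sample rate.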
 If we overload the WAL to carry the access log, offline data temperature 
 estimation can piggyback as a WAL listener. The forward algorithm would then 
 be a natural choice. The HBase master is fairly idle most of the time and 
 less memory hungry than a regionserver, at least in today's architecture. We 
 could probably 

[jira] [Updated] (HBASE-10834) Better error messaging on issuing grant commands in non-authz mode

2014-03-26 Thread Srikanth Srungarapu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Srungarapu updated HBASE-10834:


Attachment: HBASE-10834.patch

Verified the patch locally.

 Better error messaging on issuing grant commands in non-authz mode
 --

 Key: HBASE-10834
 URL: https://issues.apache.org/jira/browse/HBASE-10834
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.94.17
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Trivial
 Attachments: HBASE-10834.patch


 Running the sequence of steps below should produce a better error message 
 than a table-not-found error. 
 hbase(main):013:0&gt; create 'test', {NAME =&gt; 'f1'}
 0 row(s) in 6.1320 seconds
 hbase(main):014:0&gt; disable 'test'
 drop 'test'
 0 row(s) in 10.2100 seconds
 hbase(main):015:0&gt; drop 'test'
 0 row(s) in 1.0500 seconds
 hbase(main):016:0&gt; create 'test', {NAME =&gt; 'f1'}
 0 row(s) in 1.0510 seconds
 hbase(main):017:0&gt; grant 'systest', 'RWXCA', 'test'
 ERROR: Unknown table systest!
 Instead of ERROR: Unknown table systest!, HBase should give out a warning 
 like Command not supported in non-authz mode (as the acl table is only 
 created when authz is turned on).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10843) Prepare HBase for java 8

2014-03-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948504#comment-13948504
 ] 

Andrew Purtell commented on HBASE-10843:


There were a couple of Java 8-related issues fixed on trunk, but looking over the 
0.89-fb branch, maybe just the HeapSize changes are relevant. I was going to try 
out 0.89-fb with Java 8 (0.89-fb branch from git.apache.org) but there is a pom 
formatting error and a couple of missing dependencies that prevent building. 

 Prepare HBase for java 8
 

 Key: HBASE-10843
 URL: https://issues.apache.org/jira/browse/HBASE-10843
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.89-fb
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.89-fb






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948501#comment-13948501
 ] 

Hadoop QA commented on HBASE-10839:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636956/hbase-10839.patch
  against trunk revision .
  ATTACHMENT ID: 12636956

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testHBase3583(TestRegionObserverInterface.java:244)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9101//console

This message is automatically generated.

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-10839.patch


 The initialization of the secure rpc server depends on the regionserver's 
 servername and ZooKeeper watcher. But after HBASE-10569, they are null when the 
 secure rpc services are created.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:234)
 {code}





[jira] [Commented] (HBASE-8889) TestIOFencing#testFencingAroundCompaction occasionally fails

2014-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948548#comment-13948548
 ] 

Hadoop QA commented on HBASE-8889:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12636765/8889-v1.txt
  against trunk revision .
  ATTACHMENT ID: 12636765

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9102//console

This message is automatically generated.

 TestIOFencing#testFencingAroundCompaction occasionally fails
 

 Key: HBASE-8889
 URL: https://issues.apache.org/jira/browse/HBASE-8889
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Blocker
 Fix For: 1.0.0

 Attachments: 8889-v1.txt, TestIOFencing-#8362.tar.gz, 
 TestIOFencing.tar.gz


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6232//testReport/org.apache.hadoop.hbase/TestIOFencing/testFencingAroundCompaction/
  :
 {code}
 java.lang.AssertionError: Timed out waiting for new server to open region
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.apache.hadoop.hbase.TestIOFencing.doTest(TestIOFencing.java:269)
   at 
 org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:205)
 {code}
 {code}
 2013-07-06 23:13:53,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:54,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] 
 hbase.TestIOFencing$CompactionBlockerRegion(102): allowing compactions
 2013-07-06 23:13:55,121 INFO  [pool-1-thread-1] 
 hbase.HBaseTestingUtility(911): Shutting down minicluster
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): 
 Shutting down HBase Cluster
 2013-07-06 23:13:55,121 INFO  
 [RS:0;asf002:39065-smallCompactions-1373152134716] regionserver.HStore(951): 
 Starting compaction of 2 file(s) in family of 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. into 
 tmpdir=hdfs://localhost:50140/user/jenkins/hbase/tabletest/6e62d3b24ea23160931362b60359ff03/.tmp,
  totalSize=108.4k
 ...
 2013-07-06 

[jira] [Commented] (HBASE-8889) TestIOFencing#testFencingAroundCompaction occasionally fails

2014-03-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948552#comment-13948552
 ] 

Ted Yu commented on HBASE-8889:
---

Looped TestIOFencing 40 times on Linux and all runs passed.

 TestIOFencing#testFencingAroundCompaction occasionally fails
 

 Key: HBASE-8889
 URL: https://issues.apache.org/jira/browse/HBASE-8889
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Blocker
 Fix For: 1.0.0

 Attachments: 8889-v1.txt, TestIOFencing-#8362.tar.gz, 
 TestIOFencing.tar.gz


 From 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6232//testReport/org.apache.hadoop.hbase/TestIOFencing/testFencingAroundCompaction/
  :
 {code}
 java.lang.AssertionError: Timed out waiting for new server to open region
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at org.apache.hadoop.hbase.TestIOFencing.doTest(TestIOFencing.java:269)
   at 
 org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:205)
 {code}
 {code}
 2013-07-06 23:13:53,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:54,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
 Waiting for the new server to pick up the region 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] 
 hbase.TestIOFencing$CompactionBlockerRegion(102): allowing compactions
 2013-07-06 23:13:55,121 INFO  [pool-1-thread-1] 
 hbase.HBaseTestingUtility(911): Shutting down minicluster
 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] util.JVMClusterUtil(237): 
 Shutting down HBase Cluster
 2013-07-06 23:13:55,121 INFO  
 [RS:0;asf002:39065-smallCompactions-1373152134716] regionserver.HStore(951): 
 Starting compaction of 2 file(s) in family of 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. into 
 tmpdir=hdfs://localhost:50140/user/jenkins/hbase/tabletest/6e62d3b24ea23160931362b60359ff03/.tmp,
  totalSize=108.4k
 ...
 2013-07-06 23:13:55,155 INFO  [RS:0;asf002:39065] 
 regionserver.HRegionServer(2476): Received CLOSE for the region: 
 6e62d3b24ea23160931362b60359ff03 ,which we are already trying to CLOSE
 2013-07-06 23:13:55,157 WARN  [RS:0;asf002:39065] 
 regionserver.HRegionServer(2414): Failed to close 
 tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03. - ignoring and 
 continuing
 org.apache.hadoop.hbase.exceptions.NotServingRegionException: The region 
 6e62d3b24ea23160931362b60359ff03 was already closing. New CLOSE request is 
 ignored.
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2479)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2409)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2011)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:903)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:337)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1131)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:41)
   at org.apache.hadoop.hbase.security.User.call(User.java:420)
   at org.apache.hadoop.hbase.security.User.access$300(User.java:51)
   at 
 org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:260)
   at 
 org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:140)
 {code}





[jira] [Created] (HBASE-10844) Coprocessor failure during batchmutation leaves the memstore datastructs in an inconsistent state

2014-03-26 Thread Devaraj Das (JIRA)
Devaraj Das created HBASE-10844:
---

 Summary: Coprocessor failure during batchmutation leaves the 
memstore datastructs in an inconsistent state
 Key: HBASE-10844
 URL: https://issues.apache.org/jira/browse/HBASE-10844
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.98.2


Observed this while testing with Phoenix. The Phoenix test 
MutableIndexFailureIT deliberately fails the batchmutation call via the 
installed coprocessor, but the update is not rolled back. That leaves the 
memstore inconsistent. In particular, I observed that getFlushableSize is 
updated before the coprocessor is called, and the update is not rolled back. 
When the region is closed at some later point, the assert introduced in 
HBASE-10514 in HRegion.doClose() causes the RegionServer to shut down 
abnormally.
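The accounting problem described above can be illustrated with a minimal, self-contained sketch (not HBase's actual code; the class and method names are made up): the size counter is bumped before the hook runs, so a hook failure has to undo the delta, or a later close-time assert on the flushable size will trip.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the rollback that the report says is missing:
// the flushable-size counter is incremented before the coprocessor hook
// fires, so a hook failure must subtract the delta back out.
public class MemstoreAccountingSketch {
    final AtomicLong flushableSize = new AtomicLong();

    void batchMutate(long delta, Runnable coprocessorHook) {
        flushableSize.addAndGet(delta);      // accounted before the hook runs
        try {
            coprocessorHook.run();
        } catch (RuntimeException e) {
            flushableSize.addAndGet(-delta); // roll back on hook failure
            throw e;
        }
    }
}
```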





[jira] [Updated] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10839:


Attachment: hbase-10839_v2.patch

The failed unit test is not related. V1 fixed the NPE. However, there is a 
configuration backward-compatibility issue. Attached v2, which falls back to the 
original master configuration parameters if login fails with the regionserver ones.
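The fallback described here might look roughly like the following (an illustrative sketch, not the actual patch; `tryLogin` stands in for the real Kerberos login call, and the configuration key names are assumptions):

```java
import java.util.Map;

// Illustrative fallback: prefer the regionserver login configuration, but
// for backward compatibility retry with the master's keys if it fails.
public class LoginFallbackSketch {
    // stand-in for a real keytab/principal login attempt
    static boolean tryLogin(Map<String, String> conf, String keytabKey, String principalKey) {
        return conf.containsKey(keytabKey) && conf.containsKey(principalKey);
    }

    static boolean login(Map<String, String> conf) {
        if (tryLogin(conf, "hbase.regionserver.keytab.file",
                     "hbase.regionserver.kerberos.principal")) {
            return true;
        }
        // older deployments may only define the master's parameters
        return tryLogin(conf, "hbase.master.keytab.file",
                        "hbase.master.kerberos.principal");
    }
}
```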

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-10839.patch, hbase-10839_v2.patch


 The initialization of the secure rpc server depends on the regionserver's 
 servername and ZooKeeper watcher. But after HBASE-10569, they are null when the 
 secure rpc services are created.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:234)
 {code}





[jira] [Updated] (HBASE-10844) Coprocessor failure during batchmutation leaves the memstore datastructs in an inconsistent state

2014-03-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10844:
---

Fix Version/s: 0.96.3
   0.99.0

 Coprocessor failure during batchmutation leaves the memstore datastructs in 
 an inconsistent state
 -

 Key: HBASE-10844
 URL: https://issues.apache.org/jira/browse/HBASE-10844
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.99.0, 0.98.2, 0.96.3


 Observed this while testing with Phoenix. The Phoenix test 
 MutableIndexFailureIT deliberately fails the batchmutation call via the 
 installed coprocessor, but the update is not rolled back. That leaves the 
 memstore inconsistent. In particular, I observed that getFlushableSize is 
 updated before the coprocessor is called, and the update is not rolled back. 
 When the region is closed at some later point, the assert introduced in 
 HBASE-10514 in HRegion.doClose() causes the RegionServer to shut down 
 abnormally.





[jira] [Comment Edited] (HBASE-10839) NullPointerException in construction of RegionServer in Security Cluster

2014-03-26 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13948567#comment-13948567
 ] 

Jimmy Xiang edited comment on HBASE-10839 at 3/26/14 10:29 PM:
---

The unit test failure is not related. V1 fixed the NPE. However, there is a 
configuration backward-compatibility issue. Attached v2, which falls back to the 
original master configuration parameters if login fails with the regionserver ones.


was (Author: jxiang):
The unit test failed is not related. V1 fixed the NPE. However, there is some 
configuration backward compatibility issue. Attached v2 that uses the original 
master configuration parameters if login fails with the regionserver one.

 NullPointerException in construction of RegionServer in Security Cluster
 

 Key: HBASE-10839
 URL: https://issues.apache.org/jira/browse/HBASE-10839
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Liu Shaohui
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-10839.patch, hbase-10839_v2.patch


 The initialization of the secure rpc server depends on the regionserver's 
 servername and ZooKeeper watcher. But after HBASE-10569, they are null when the 
 secure rpc services are created.
 [~jxiang]
 {code}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.ipc.RpcServer.createSecretManager(RpcServer.java:1974)
   at org.apache.hadoop.hbase.ipc.RpcServer.start(RpcServer.java:1945)
   at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:706)
   at 
 org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:190)
   at 
 org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:297)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:431)
   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:234)
 {code}




