[jira] [Commented] (HBASE-10156) Fix up the HBASE-8755 slowdown when low contention

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863969#comment-13863969
 ] 

Hadoop QA commented on HBASE-10156:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621758/10156v3.txt
  against trunk revision .
  ATTACHMENT ID: 12621758

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8355//console

This message is automatically generated.

> Fix up the HBASE-8755 slowdown when low contention
> --
>
> Key: HBASE-10156
> URL: https://issues.apache.org/jira/browse/HBASE-10156
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: stack
>Assignee: stack
> Attachments: 10156.txt, 10156v2.txt, 10156v3.txt, Disrupting.java
>
>
> HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10156) Fix up the HBASE-8755 slowdown when low contention

2014-01-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863964#comment-13863964
 ] 

stack commented on HBASE-10156:
---

This patch seems SLOWER again with a low number of handlers (<10).  It only comes 
into its own with many handlers.   For example, with 100 concurrent handlers, it 
does less work and finishes the task in 40% less time.  Will post writeup and 
numbers later.

> Fix up the HBASE-8755 slowdown when low contention
> --
>
> Key: HBASE-10156
> URL: https://issues.apache.org/jira/browse/HBASE-10156
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: stack
>Assignee: stack
> Attachments: 10156.txt, 10156v2.txt, 10156v3.txt, Disrupting.java
>
>
> HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10156) Fix up the HBASE-8755 slowdown when low contention

2014-01-06 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10156:
--

Attachment: 10156v3.txt

Another version of the patch.  Let's see what HadoopQA thinks.

> Fix up the HBASE-8755 slowdown when low contention
> --
>
> Key: HBASE-10156
> URL: https://issues.apache.org/jira/browse/HBASE-10156
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: stack
>Assignee: stack
> Attachments: 10156.txt, 10156v2.txt, 10156v3.txt, Disrupting.java
>
>
> HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9846) Integration test and LoadTestTool support for cell ACLs

2014-01-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9846:
--

Status: Patch Available  (was: Open)

> Integration test and LoadTestTool support for cell ACLs
> ---
>
> Key: HBASE-9846
> URL: https://issues.apache.org/jira/browse/HBASE-9846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.0
>
> Attachments: HBASE-9846.patch, HBASE-9846_1.patch, 
> HBASE-9846_2.patch, HBASE-9846_3.patch
>
>
> Cell level ACLs should have an integration test and LoadTestTool support.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9846) Integration test and LoadTestTool support for cell ACLs

2014-01-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9846:
--

Status: Open  (was: Patch Available)

> Integration test and LoadTestTool support for cell ACLs
> ---
>
> Key: HBASE-9846
> URL: https://issues.apache.org/jira/browse/HBASE-9846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.0
>
> Attachments: HBASE-9846.patch, HBASE-9846_1.patch, 
> HBASE-9846_2.patch, HBASE-9846_3.patch
>
>
> Cell level ACLs should have an integration test and LoadTestTool support.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9846) Integration test and LoadTestTool support for cell ACLs

2014-01-06 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9846:
--

Attachment: HBASE-9846_3.patch

> Integration test and LoadTestTool support for cell ACLs
> ---
>
> Key: HBASE-9846
> URL: https://issues.apache.org/jira/browse/HBASE-9846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.0
>
> Attachments: HBASE-9846.patch, HBASE-9846_1.patch, 
> HBASE-9846_2.patch, HBASE-9846_3.patch
>
>
> Cell level ACLs should have an integration test and LoadTestTool support.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6104) Require EXEC permission to call coprocessor endpoints

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863933#comment-13863933
 ] 

Hudson commented on HBASE-6104:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #45 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/45/])
HBASE-6104. Require EXEC permission to call coprocessor endpoints (apurtell: 
rev 1556098)
* /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/EndpointObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java


> Require EXEC permission to call coprocessor endpoints
> -
>
> Key: HBASE-6104
> URL: https://issues.apache.org/jira/browse/HBASE-6104
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6104-addendum-1.patch, 6104-revert.patch, 6104.patch, 
> 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch
>
>
> The EXEC action currently exists as only a placeholder in access control.  It 
> should really be used to enforce access to coprocessor endpoint RPC calls, 
> which are currently unrestricted.
> How the ACLs to support this would be modeled deserves some discussion:
> * Should access be scoped to a specific table and CoprocessorProtocol 
> extension?
> * Should it be possible to grant access to a CoprocessorProtocol 
> implementation globally (regardless of table)?
> * Are per-method restrictions necessary?
> * Should we expose hooks available to endpoint implementors so that they 
> could additionally apply their own permission checks? Some CP endpoints may 
> want to require READ permissions, others may want to enforce WRITE, or READ + 
> WRITE.
> To apply these kinds of checks we would also have to extend the 
> RegionObserver interface to provide hooks wrapping HRegion.exec().



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10287:
---

Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10278) Provide better write predictability

2014-01-06 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863921#comment-13863921
 ] 

ramkrishna.s.vasudevan commented on HBASE-10278:


I read this document.  Looks nice.
A few questions, to clarify whether my understanding is right:
When a log switch happens, say edits 1 ... 10 are in WAL A.  Due to the switch, 
edits 11 ... 13 are in WAL B.
If the above is to happen, does the log roll for WAL A have to be completed by 
blocking all writes?  Will this be costly?  How costly will it be?
If the rollwriter happens and at the same time we start taking writes on WAL B, 
the above-mentioned scenario occurs.  So in that case we may have out-of-order 
edits during log split if this RS crashes, right?
Currently the assumption is that there are 2 WALs per RS and only one of them is 
active.  So how do you plan to shape the interface for this; that is, do you 
have plans to extend this number 2 to something more than 2?  If so, how many of 
them will be active?
The reason I am asking this is that the doc says this implementation will form 
the basis for other multi-log implementations.  So if that is true, then if I 
say RS.getLog(), how many logs should it return?  Currently in testcases and in 
HRS.rollWriter() the rolling happens only on one HLog.  But with multiWAL this 
may change.
I tried out some interfaces for HBASE-8610 in order to introduce interfaces for 
multi WAL.  A very general use case would be to have a MultiWAL per table.  If 
that model needs to fit in here, how easy would it be with the interfaces 
introduced in this JIRA?




> Provide better write predictability
> ---
>
> Key: HBASE-10278
> URL: https://issues.apache.org/jira/browse/HBASE-10278
> Project: HBase
>  Issue Type: New Feature
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Attachments: Multiwaldesigndoc.pdf
>
>
> Currently, HBase has one WAL per region server. 
> Whenever there is any latency in the write pipeline (due to whatever reasons 
> such as n/w blip, a node in the pipeline having a bad disk, etc), the overall 
> write latency suffers. 
> Jonathan Hsieh and I analyzed various approaches to tackle this issue. We 
> also looked at HBASE-5699, which talks about adding concurrent multi WALs. 
> Along with performance numbers, we also focussed on design simplicity, 
> minimum impact on MTTR & Replication, and compatibility with 0.96 and 0.98. 
> Considering all these parameters, we propose a new HLog implementation with 
> WAL Switching functionality.
> Please find attached the design doc for the same. It introduces the WAL 
> Switching feature, and experiments/results of a prototype implementation, 
> showing the benefits of this feature.
> The second goal of this work is to serve as a building block for concurrent 
> multiple WALs feature.
> Please review the doc.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6104) Require EXEC permission to call coprocessor endpoints

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863918#comment-13863918
 ] 

Hudson commented on HBASE-6104:
---

SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #56 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/56/])
HBASE-6104. Require EXEC permission to call coprocessor endpoints (apurtell: 
rev 1556100)
* /hbase/branches/0.98/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/EndpointObserver.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java


> Require EXEC permission to call coprocessor endpoints
> -
>
> Key: HBASE-6104
> URL: https://issues.apache.org/jira/browse/HBASE-6104
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6104-addendum-1.patch, 6104-revert.patch, 6104.patch, 
> 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch
>
>
> The EXEC action currently exists as only a placeholder in access control.  It 
> should really be used to enforce access to coprocessor endpoint RPC calls, 
> which are currently unrestricted.
> How the ACLs to support this would be modeled deserves some discussion:
> * Should access be scoped to a specific table and CoprocessorProtocol 
> extension?
> * Should it be possible to grant access to a CoprocessorProtocol 
> implementation globally (regardless of table)?
> * Are per-method restrictions necessary?
> * Should we expose hooks available to endpoint implementors so that they 
> could additionally apply their own permission checks? Some CP endpoints may 
> want to require READ permissions, others may want to enforce WRITE, or READ + 
> WRITE.
> To apply these kinds of checks we would also have to extend the 
> RegionObserver interface to provide hooks wrapping HRegion.exec().



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863915#comment-13863915
 ] 

Anoop Sam John commented on HBASE-10287:


But result will be null then, and the 1st line in addResult() is:
if (result == null) return;
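The guard being discussed can be sketched with toy types (hypothetical simplified stand-ins — `ResponseBuilder`, `RpcController` here are not the actual HBase classes, and the method body is an illustration of the proposed null check, not the committed code):

```java
// Simplified sketch of a null-guarded addResult(): returning early on a null
// result, and falling back to embedding cells when the RPC controller is null.
import java.util.List;

public class AddResultSketch {
    static class Result { final List<String> cells; Result(List<String> cells) { this.cells = cells; } }
    static class RpcController { List<String> cellScanner; }
    static class ResponseBuilder { Result result; }

    static void addResult(ResponseBuilder builder, Result result, RpcController rpcc) {
        if (result == null) return;                  // existing first-line guard
        if (rpcc != null) {                          // additional guard under discussion
            builder.result = new Result(List.of());  // "no data" result in the response
            rpcc.cellScanner = result.cells;         // cells shipped via the controller
        } else {
            builder.result = result;                 // embed cells in the response itself
        }
    }

    public static void main(String[] args) {
        ResponseBuilder b = new ResponseBuilder();
        addResult(b, new Result(List.of("cell")), null); // null controller: no NPE
        System.out.println(b.result.cells.size());       // prints 1
    }
}
```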

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863908#comment-13863908
 ] 

Ted Yu commented on HBASE-10287:


In mutate(), the switch statement starting at line 2888 covers PUT and DELETE 
as well.
So it is not just Append / Increment.

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863896#comment-13863896
 ] 

Anoop Sam John edited comment on HBASE-10287 at 1/7/14 4:50 AM:


{code}
 if (isClientCellBlockSupport()) {
   builder.setResult(ProtobufUtil.toResultNoData(result));
-  rpcc.setCellScanner(result.cellScanner());
{code}
In case of Append/Increment, can rpcc come as null?


was (Author: anoop.hbase):
{code}
 if (isClientCellBlockSupport()) {
   builder.setResult(ProtobufUtil.toResultNoData(result));
-  rpcc.setCellScanner(result.cellScanner());
{code}
In case of Append/Increment, rpcc can come as null also?  

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863896#comment-13863896
 ] 

Anoop Sam John edited comment on HBASE-10287 at 1/7/14 4:51 AM:


{code}
 if (isClientCellBlockSupport()) {
   builder.setResult(ProtobufUtil.toResultNoData(result));
-  rpcc.setCellScanner(result.cellScanner());
{code}
In case of Append/Increment, can rpcc come as null?  If so, how will we send 
back the Result which the client is expecting?


was (Author: anoop.hbase):
{code}
 if (isClientCellBlockSupport()) {
   builder.setResult(ProtobufUtil.toResultNoData(result));
-  rpcc.setCellScanner(result.cellScanner());
{code}
In case of Append/Increment, rpcc can come as null ?  

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863896#comment-13863896
 ] 

Anoop Sam John commented on HBASE-10287:


{code}
 if (isClientCellBlockSupport()) {
   builder.setResult(ProtobufUtil.toResultNoData(result));
-  rpcc.setCellScanner(result.cellScanner());
{code}
In case of Append/Increment, can rpcc also come as null?

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6104) Require EXEC permission to call coprocessor endpoints

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863895#comment-13863895
 ] 

Hudson commented on HBASE-6104:
---

SUCCESS: Integrated in HBase-TRUNK #4796 (See 
[https://builds.apache.org/job/HBase-TRUNK/4796/])
HBASE-6104. Require EXEC permission to call coprocessor endpoints (apurtell: 
rev 1556098)
* /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/EndpointObserver.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java


> Require EXEC permission to call coprocessor endpoints
> -
>
> Key: HBASE-6104
> URL: https://issues.apache.org/jira/browse/HBASE-6104
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6104-addendum-1.patch, 6104-revert.patch, 6104.patch, 
> 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch
>
>
> The EXEC action currently exists as only a placeholder in access control.  It 
> should really be used to enforce access to coprocessor endpoint RPC calls, 
> which are currently unrestricted.
> How the ACLs to support this would be modeled deserves some discussion:
> * Should access be scoped to a specific table and CoprocessorProtocol 
> extension?
> * Should it be possible to grant access to a CoprocessorProtocol 
> implementation globally (regardless of table)?
> * Are per-method restrictions necessary?
> * Should we expose hooks available to endpoint implementors so that they 
> could additionally apply their own permission checks? Some CP endpoints may 
> want to require READ permissions, others may want to enforce WRITE, or READ + 
> WRITE.
> To apply these kinds of checks we would also have to extend the 
> RegionObserver interface to provide hooks wrapping HRegion.exec().



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6104) Require EXEC permission to call coprocessor endpoints

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863888#comment-13863888
 ] 

Hudson commented on HBASE-6104:
---

SUCCESS: Integrated in HBase-0.98 #62 (See 
[https://builds.apache.org/job/HBase-0.98/62/])
HBASE-6104. Require EXEC permission to call coprocessor endpoints (apurtell: 
rev 1556100)
* /hbase/branches/0.98/hbase-common/src/main/resources/hbase-default.xml
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/EndpointObserver.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java


> Require EXEC permission to call coprocessor endpoints
> -
>
> Key: HBASE-6104
> URL: https://issues.apache.org/jira/browse/HBASE-6104
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6104-addendum-1.patch, 6104-revert.patch, 6104.patch, 
> 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch
>
>
> The EXEC action currently exists as only a placeholder in access control.  It 
> should really be used to enforce access to coprocessor endpoint RPC calls, 
> which are currently unrestricted.
> How the ACLs to support this would be modeled deserves some discussion:
> * Should access be scoped to a specific table and CoprocessorProtocol 
> extension?
> * Should it be possible to grant access to a CoprocessorProtocol 
> implementation globally (regardless of table)?
> * Are per-method restrictions necessary?
> * Should we expose hooks available to endpoint implementors so that they 
> could additionally apply their own permission checks? Some CP endpoints may 
> want to require READ permissions, others may want to enforce WRITE, or READ + 
> WRITE.
> To apply these kinds of checks we would also have to extend the 
> RegionObserver interface to provide hooks wrapping HRegion.exec().



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863885#comment-13863885
 ] 

Hadoop QA commented on HBASE-10263:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12621740/HBASE-10263-trunk_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12621740

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestSplitLogWorker

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8353//console

This message is automatically generated.

> make LruBlockCache single/multi/in-memory ratio user-configurable and provide 
> preemptive mode for in-memory type block
> --
>
> Key: HBASE-10263
> URL: https://issues.apache.org/jira/browse/HBASE-10263
> Project: HBase
>  Issue Type: Improvement
>  Components: io
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch, 
> HBASE-10263-trunk_v2.patch
>
>
> currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to 
> 1:2:1, which can lead to somewhat counter-intuitive behavior in user scenarios 
> where an in-memory table's read performance is much worse than an ordinary 
> table's when the two tables' data sizes are almost equal and larger than the 
> regionserver's cache size (we did such an experiment and verified that 
> in-memory table random read performance is two times worse than for an 
> ordinary table).
> this patch fixes the above issue and provides:
> 1. make the single/multi/in-memory ratio user-configurable
> 2. provide a configurable switch which can make in-memory blocks preemptive; 
> by preemptive we mean that when this switch is on, an in-memory block can kick 
> out any ordinary block to make room until no ordinary block remains; when this 
> switch is off (the default) the behavior is the same as before, using the 
> single/multi/in-memory ratio to determine eviction.
> by default, the above two changes are both off and the behavior keeps the same 
> as before applying this patch

[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863860#comment-13863860
 ] 

Hudson commented on HBASE-10078:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #44 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/44/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556024)
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar load 
> (https://issues.apache.org/jira/browse/HBASE-1936) but seems to have an issue 
> with FilterList. 
> Here is some log from my app where I send a Get with a FilterList containing 
> AFilter and another with BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found, so it retries with DynamicClassLoader, but when it
> tries to load BFilter, it uses URLClassLoader and fails without checking for
> dynamic jars.
> I think the issue is related to FilterList#readFields
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> byte o

[jira] [Commented] (HBASE-10284) Build broken with svn 1.8

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863859#comment-13863859
 ] 

Hudson commented on HBASE-10284:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #44 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/44/])
HBASE-10284 Build broken with svn 1.8 (larsh: rev 1555962)
* /hbase/trunk/hbase-common/src/saveVersion.sh


> Build broken with svn 1.8
> -
>
> Key: HBASE-10284
> URL: https://issues.apache.org/jira/browse/HBASE-10284
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0
>
> Attachments: 10284.txt
>
>
> Just upgraded my machine and found that {{svn info}} displays a "Relative 
> URL:" line in svn 1.8.
> saveVersion.sh does not deal with that correctly.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863861#comment-13863861
 ] 

Hudson commented on HBASE-10130:


FAILURE: Integrated in HBase-TRUNK-on-Hadoop-1.1 #44 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/44/])
HBASE-10130 TestSplitLogManager#testTaskResigned fails sometimes (Tedyu: rev 
1556040)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java


> TestSplitLogManager#testTaskResigned fails sometimes
> 
>
> Key: HBASE-10130
> URL: https://issues.apache.org/jira/browse/HBASE-10130
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10130-output.txt, 10130-v1.txt, 10130-v2.txt
>
>
> The test failed in 
> https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
> For testTaskResigned() :
> {code}
> int version = ZKUtil.checkExists(zkw, tasknode);
> // Could be small race here.
> if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
> to/2);
> {code}
> There was no log similar to the following (corresponding to waitForCounter() 
> call above):
> {code}
> 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
> milli-secs(wait.for.ratio=[1])
> {code}
> Meaning, the version (2) retrieved corresponded to the resubmitted task.
> version1 retrieved the same value, leading to the assertion failure.
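The guarded wait above exists because the background resubmission races with the test thread: the counter may already have been bumped by the time the test checks it. A minimal, self-contained sketch of the pattern (the names here are simplified stand-ins, not HBase's actual Waiter/SplitLogManager API):

```java
import java.util.concurrent.atomic.AtomicLong;

public class WaitForCounterSketch {
    /**
     * Polls until the counter reaches at least `to`, or the timeout
     * (in milliseconds) expires. Returns true if the target was reached.
     */
    public static boolean waitForCounter(AtomicLong ctr, long to, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (ctr.get() >= to) {
                return true;
            }
            try {
                Thread.sleep(10); // back off briefly before re-checking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return ctr.get() >= to;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicLong totMgrResubmit = new AtomicLong(0);
        // Simulate the background resubmission racing with the test thread.
        Thread bg = new Thread(totMgrResubmit::incrementAndGet);
        bg.start();
        // Mirrors the guard in the test: wait only if the counter has not
        // already been bumped by the time we check it.
        if (totMgrResubmit.get() == 0) {
            waitForCounter(totMgrResubmit, 1, 1000);
        }
        bg.join();
        System.out.println(totMgrResubmit.get()); // prints 1
    }
}
```

Without the `== 0` guard, an eagerly incremented counter would make the wait succeed trivially; with it, the test only blocks when the race has not yet resolved.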





[jira] [Commented] (HBASE-10278) Provide better write predictability

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863843#comment-13863843
 ] 

Sergey Shelukhin commented on HBASE-10278:
--

Skimmed the doc, looks really nice. I do think that out-of-order WAL should 
eventually become ok (we will get per-region mvcc from seqId-mvcc merge, and 
mvcc in WAL from this or several other jiras). One thing I might have missed - 
since it currently requires log rolling, would it need throttling for 
switching? If there's a long sequence of network hiccups from the machine (i.e. 
to both files), it might roll lots of tiny logs.

> Provide better write predictability
> ---
>
> Key: HBASE-10278
> URL: https://issues.apache.org/jira/browse/HBASE-10278
> Project: HBase
>  Issue Type: New Feature
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Attachments: Multiwaldesigndoc.pdf
>
>
> Currently, HBase has one WAL per region server. 
> Whenever there is any latency in the write pipeline (due to whatever reasons 
> such as n/w blip, a node in the pipeline having a bad disk, etc), the overall 
> write latency suffers. 
> Jonathan Hsieh and I analyzed various approaches to tackle this issue. We 
> also looked at HBASE-5699, which talks about adding concurrent multi WALs. 
> Along with performance numbers, we also focused on design simplicity, 
> minimum impact on MTTR & Replication, and compatibility with 0.96 and 0.98. 
> Considering all these parameters, we propose a new HLog implementation with 
> WAL Switching functionality.
> Please find attached the design doc for the same. It introduces the WAL 
> Switching feature, and experiments/results of a prototype implementation, 
> showing the benefits of this feature.
> The second goal of this work is to serve as a building block for concurrent 
> multiple WALs feature.
> Please review the doc.





[jira] [Updated] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block

2014-01-06 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10263:
-

Attachment: HBASE-10263-trunk_v2.patch

new patch attached which adds hbase.lru.blockcache prefix to newly introduced 
config names per [~ndimiduk]'s suggestion, thanks [~ndimiduk] :-)

> make LruBlockCache single/multi/in-memory ratio user-configurable and provide 
> preemptive mode for in-memory type block
> --
>
> Key: HBASE-10263
> URL: https://issues.apache.org/jira/browse/HBASE-10263
> Project: HBase
>  Issue Type: Improvement
>  Components: io
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch, 
> HBASE-10263-trunk_v2.patch
>
>
> currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to
> 1:2:1, which can lead to somewhat counter-intuitive behavior in scenarios
> where an in-memory table's read performance is much worse than an ordinary
> table's when the two tables' data sizes are almost equal and larger than the
> regionserver's cache size (we ran such an experiment and verified that
> in-memory table random read performance is two times worse than that of an
> ordinary table).
> this patch fixes the above issue and provides:
> 1. a user-configurable single/multi/in-memory ratio
> 2. a configurable switch which can make in-memory blocks preemptive;
> "preemptive" means that when this switch is on, an in-memory block can evict
> any ordinary block to make room until no ordinary block remains; when the
> switch is off (the default) the behavior is the same as before, using the
> single/multi/in-memory ratio to decide what to evict.
> by default, both changes are off and the behavior stays the same as before
> applying this patch. it is the client/user's choice which behavior to use,
> by enabling one of these two enhancements.
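The ratio mechanism in item 1 can be sketched with a small self-contained example: an eviction pass splits the bytes to free across the three priority buckets in proportion to the configured ratio. The names and numbers below are illustrative stand-ins, not HBase's LruBlockCache internals:

```java
/**
 * Toy illustration of splitting an eviction budget by configurable
 * single/multi/in-memory ratios. The preemptive switch (item 2) would
 * instead let in-memory blocks evict ordinary blocks first.
 */
public class EvictionRatioSketch {
    /** Returns {singleBytes, multiBytes, inMemoryBytes} summing to bytesToFree. */
    public static long[] evictionTargets(long bytesToFree,
                                         double singleRatio, double multiRatio,
                                         double inMemoryRatio) {
        double total = singleRatio + multiRatio + inMemoryRatio;
        long single = (long) (bytesToFree * singleRatio / total);
        long multi  = (long) (bytesToFree * multiRatio / total);
        // In-memory takes the remainder so the three targets sum exactly.
        return new long[] { single, multi, bytesToFree - single - multi };
    }

    public static void main(String[] args) {
        // Hardcoded 1:2:1 (current behavior) vs a user-tuned 1:1:2 split.
        long[] def = evictionTargets(400, 1, 2, 1);
        long[] tuned = evictionTargets(400, 1, 1, 2);
        System.out.println(def[0] + " " + def[1] + " " + def[2]);       // prints 100 200 100
        System.out.println(tuned[0] + " " + tuned[1] + " " + tuned[2]); // prints 100 100 200
    }
}
```

Making the ratio configurable lets a deployment that favors in-memory tables shrink the in-memory bucket's share of evictions, as in the tuned split above.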





[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block

2014-01-06 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863838#comment-13863838
 ] 

Feng Honghua commented on HBASE-10263:
--

Thanks [~ndimiduk] for the careful review :-)
bq.Evictions happen on a background thread. Filling the cache and then 
immediately checking the eviction count results in a race between the current 
thread and the eviction thread; thus this is very likely a flakey test on our 
over-extended build machines. In the above block, the call to cacheBlock() will 
only notify the eviction thread, not force eviction.
What you said is correct for the real usage of LruBlockCache, where the 
evictionThread flag is implicitly true when the LruBlockCache object is 
constructed, so a background eviction thread is created to do the eviction 
work. But that is *not* the case for this newly added unit test. To verify 
the eviction effect of the new configuration/preemptive mode as quickly as 
possible, without guessing how long to sleep or introducing other 
synchronization overhead, I disabled the background eviction thread when 
constructing the LruBlockCache object for this test case. This way eviction 
is triggered immediately and synchronously within the cache.cacheBlock call 
when the cache size exceeds the acceptable cache size.
{code}LruBlockCache cache = new LruBlockCache(maxSize, blockSize, 
false...){code}
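The determinism point can be illustrated with a toy cache (not HBase's LruBlockCache): when eviction runs inline inside put() instead of on a background thread, a test can assert on the eviction count immediately after the call returns, with no sleeps or extra synchronization:

```java
import java.util.LinkedHashMap;

/** Toy LRU cache that evicts synchronously inside put() once maxSize is exceeded. */
public class SyncEvictCache<K, V> {
    private final int maxSize;
    private long evictionCount = 0;
    // accessOrder=true gives LRU iteration order (eldest entry first).
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    public SyncEvictCache(int maxSize) { this.maxSize = maxSize; }

    public void put(K key, V value) {
        map.put(key, value);
        // Inline (synchronous) eviction: there is no race with a background
        // thread, so callers can check evictionCount right after put() returns.
        while (map.size() > maxSize) {
            K eldest = map.keySet().iterator().next();
            map.remove(eldest);
            evictionCount++;
        }
    }

    public long getEvictionCount() { return evictionCount; }
    public boolean contains(K key) { return map.containsKey(key); }
}
```

With a background eviction thread, the same assertion would be flaky: put() only notifies the evictor, and the count may or may not have moved by the time the test reads it.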

> make LruBlockCache single/multi/in-memory ratio user-configurable and provide 
> preemptive mode for in-memory type block
> --
>
> Key: HBASE-10263
> URL: https://issues.apache.org/jira/browse/HBASE-10263
> Project: HBase
>  Issue Type: Improvement
>  Components: io
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch
>
>
> currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to
> 1:2:1, which can lead to somewhat counter-intuitive behavior in scenarios
> where an in-memory table's read performance is much worse than an ordinary
> table's when the two tables' data sizes are almost equal and larger than the
> regionserver's cache size (we ran such an experiment and verified that
> in-memory table random read performance is two times worse than that of an
> ordinary table).
> this patch fixes the above issue and provides:
> 1. a user-configurable single/multi/in-memory ratio
> 2. a configurable switch which can make in-memory blocks preemptive;
> "preemptive" means that when this switch is on, an in-memory block can evict
> any ordinary block to make room until no ordinary block remains; when the
> switch is off (the default) the behavior is the same as before, using the
> single/multi/in-memory ratio to decide what to evict.
> by default, both changes are off and the behavior stays the same as before
> applying this patch. it is the client/user's choice which behavior to use,
> by enabling one of these two enhancements.





[jira] [Updated] (HBASE-10288) make mvcc an (optional) part of KV serialization

2014-01-06 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10288:
-

Component/s: HFile

> make mvcc an (optional) part of KV serialization
> 
>
> Key: HBASE-10288
> URL: https://issues.apache.org/jira/browse/HBASE-10288
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> This has been suggested in HBASE-10241. Mvcc can currently be serialized in 
> HFile, but the mechanism is... magical. We might want to make it a part of 
> proper serialization of the KV. It can be done using tags, but we may not 
> want the overhead given that it will be in many KVs, so it might require 
> HFileFormat vN+1. Regardless, the external  mechanism would need to be 
> removed while also preserving backward compat.





[jira] [Updated] (HBASE-10288) make mvcc an (optional) part of KV serialization

2014-01-06 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-10288:
-

Priority: Minor  (was: Major)

> make mvcc an (optional) part of KV serialization
> 
>
> Key: HBASE-10288
> URL: https://issues.apache.org/jira/browse/HBASE-10288
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Reporter: Sergey Shelukhin
>Priority: Minor
>
> This has been suggested in HBASE-10241. Mvcc can currently be serialized in 
> HFile, but the mechanism is... magical. We might want to make it a part of 
> proper serialization of the KV. It can be done using tags, but we may not 
> want the overhead given that it will be in many KVs, so it might require 
> HFileFormat vN+1. Regardless, the external  mechanism would need to be 
> removed while also preserving backward compat.





[jira] [Commented] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863829#comment-13863829
 ] 

Sergey Shelukhin commented on HBASE-10241:
--

 HBASE-10288

> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.
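The three steps above can be sketched with a toy model: a scanner pins its read point (mvcc/seqId) at open time and carries it across servers, and each cell keeps its mvcc so visibility can be re-evaluated after a move or recovery. The names here are hypothetical stand-ins, not HBase's actual scanner API:

```java
import java.util.List;
import java.util.stream.Collectors;

/** Toy sketch of an mvcc-consistent scan: only cells written at or below
 *  the pinned read point are visible, wherever the region now lives. */
public class ReadPointSketch {
    static class Cell {
        final String value;
        final long mvcc; // step 3: mvcc stored per KV (in WAL and store files)
        Cell(String value, long mvcc) { this.value = value; this.mvcc = mvcc; }
    }

    // Step 1: the client scanner gets and stores the read point at open time,
    // so it survives a server restart or region move.
    final long readPoint;
    ReadPointSketch(long readPoint) { this.readPoint = readPoint; }

    List<String> scan(List<Cell> store) {
        return store.stream()
                .filter(c -> c.mvcc <= readPoint) // visibility check per cell
                .map(c -> c.value)
                .collect(Collectors.toList());
    }
}
```

The point of the JIRA is making this invariant hold across recovery: today the per-cell mvcc (and hence the visibility check) is lost once the region moves.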





[jira] [Created] (HBASE-10288) make mvcc an (optional) part of KV serialization

2014-01-06 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-10288:


 Summary: make mvcc an (optional) part of KV serialization
 Key: HBASE-10288
 URL: https://issues.apache.org/jira/browse/HBASE-10288
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin


This has been suggested in HBASE-10241. Mvcc can currently be serialized in 
HFile, but the mechanism is... magical. We might want to make it a part of 
proper serialization of the KV. It can be done using tags, but we may not want 
the overhead given that it will be in many KVs, so it might require HFileFormat 
vN+1. Regardless, the external  mechanism would need to be removed while also 
preserving backward compat.





[jira] [Commented] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863825#comment-13863825
 ] 

Sergey Shelukhin commented on HBASE-10241:
--

I will make a separate unrelated JIRA for this

> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.





[jira] [Comment Edited] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863825#comment-13863825
 ] 

Sergey Shelukhin edited comment on HBASE-10241 at 1/7/14 2:19 AM:
--

I will make a separate unrelated JIRA for that


was (Author: sershe):
I will make a separate unrelated JIRA for this

> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.





[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863813#comment-13863813
 ] 

Hudson commented on HBASE-10078:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #55 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/55/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556025)
* 
/hbase/branches/0.98/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar loading
> (https://issues.apache.org/jira/browse/HBASE-1936) but it seems to have an
> issue with FilterList.
> Here is a log from my app where I send a Get with a FilterList containing an
> AFilter and another with a BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found, so it retries with DynamicClassLoader, but when it
> tries to load BFilter, it uses URLClassLoader and fails without checking for
> dynamic jars.
> I think the issue is related to FilterList#readFields
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> 
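The fix direction described in the report — fall back to the dynamic class loader instead of stopping at the default URLClassLoader — can be illustrated with a self-contained sketch. `resolveClass` and the fallback loader are hypothetical stand-ins, not HBase's Configuration.getClassByName or DynamicClassLoader:

```java
public class ClassResolveSketch {
    /**
     * Tries the default loader first; on failure, consults the supplied
     * (e.g. dynamic) loader instead of giving up. The report's failure mode
     * is the deserialization path that skips this fallback and asks only
     * the URLClassLoader.
     */
    public static Class<?> resolveClass(String name, ClassLoader fallback) {
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            try {
                return Class.forName(name, true, fallback);
            } catch (ClassNotFoundException e2) {
                return null; // not found by either loader
            }
        }
    }

    public static void main(String[] args) {
        // Using the application loader as a stand-in for a dynamic jar loader.
        ClassLoader dynamic = ClassResolveSketch.class.getClassLoader();
        System.out.println(resolveClass("java.lang.String", dynamic).getName());
    }
}
```

In the reported trace, the first filter went through the dynamic-loader path while the second was resolved with a plain `Class.forName`-style lookup, which is why only the second failed.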

[jira] [Commented] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863814#comment-13863814
 ] 

Hudson commented on HBASE-10130:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #55 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/55/])
HBASE-10130. TestSplitLogManager#testTaskResigned fails sometimes (Ted Yu) 
(apurtell: rev 1556051)
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java


> TestSplitLogManager#testTaskResigned fails sometimes
> 
>
> Key: HBASE-10130
> URL: https://issues.apache.org/jira/browse/HBASE-10130
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10130-output.txt, 10130-v1.txt, 10130-v2.txt
>
>
> The test failed in 
> https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
> For testTaskResigned() :
> {code}
> int version = ZKUtil.checkExists(zkw, tasknode);
> // Could be small race here.
> if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
> to/2);
> {code}
> There was no log similar to the following (corresponding to waitForCounter() 
> call above):
> {code}
> 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
> milli-secs(wait.for.ratio=[1])
> {code}
> Meaning, the version (2) retrieved corresponded to the resubmitted task.
> version1 retrieved the same value, leading to the assertion failure.





[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863812#comment-13863812
 ] 

Hudson commented on HBASE-9593:
---

SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #55 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/55/])
Revert HBASE-9593. Region server left in online servers list forever if it went 
down after registering to master and before creating ephemeral node (apurtell: 
rev 1556055)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver always shows as online in
> the master UI but is actually dead.
> If the region server went down between the following steps, then it remains
> in the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master does not remove
> the expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is repeatedly
> selected at random; each time we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerSh

[jira] [Commented] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863807#comment-13863807
 ] 

Hudson commented on HBASE-10130:


SUCCESS: Integrated in HBase-TRUNK #4795 (See 
[https://builds.apache.org/job/HBase-TRUNK/4795/])
HBASE-10130 TestSplitLogManager#testTaskResigned fails sometimes (Tedyu: rev 
1556040)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java


> TestSplitLogManager#testTaskResigned fails sometimes
> 
>
> Key: HBASE-10130
> URL: https://issues.apache.org/jira/browse/HBASE-10130
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10130-output.txt, 10130-v1.txt, 10130-v2.txt
>
>
> The test failed in 
> https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
> For testTaskResigned() :
> {code}
> int version = ZKUtil.checkExists(zkw, tasknode);
> // Could be small race here.
> if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
> to/2);
> {code}
> There was no log similar to the following (corresponding to waitForCounter() 
> call above):
> {code}
> 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
> milli-secs(wait.for.ratio=[1])
> {code}
> Meaning, the version (2) retrieved corresponded to the resubmitted task; version1 
> retrieved the same value, leading to the assertion failure.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863801#comment-13863801
 ] 

Enis Soztutar commented on HBASE-10285:
---

bq. So we're fine allowing this in 0.94 but not (currently) in 0.96+?
Yep, 0.96+ CM (sadly) does not have the ability to be run from the command line. 

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10169) Batch coprocessor

2014-01-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863800#comment-13863800
 ] 

Andrew Purtell commented on HBASE-10169:


I could see how we might need some new APIs, but how about:

1. New client API. The porcelain revealed to the user. This should represent 
invoking a coprocessor call on more than one region on a regionserver. Using a 
List type here would be ok. 

2. New wire level API. The client should group the regions involved in the 
processing of #1, create a request object per regionserver containing multiple 
CoprocessorServiceRequest instances (one per region), and dispatch them. 

{code}
message BatchCoprocessorServiceRequest {
  repeated CoprocessorServiceRequest serviceRequest = 1; // field name illustrative
}
{code}

And the response:

{code}
message CoprocessorServiceResponseOrException {
  required CoprocessorServiceResponse response = 1;
  // If the operation failed, this exception is set
  optional NameBytesPair exception = 2;
}

message BatchCoprocessorServiceResponse {
  repeated CoprocessorServiceResponseOrException serviceResponse = 1; // field name illustrative
}
{code}

3. New server API. RegionServer support for receiving the request of #2, 
dispatching the contained calls individually, tracking their execution, and returning the 
combined response. Aggregation of the server side responses should be done 
separately as HBASE-5762. For now, wait for all of the invocations in the RS to 
complete and send each response back. 

If this goes well, perhaps we can deprecate the old API and message types and 
switch to this one as the preferred way to execute coprocessors, so I wouldn't 
worry about having similar client APIs for single coprocessor call and batched 
coprocessor calls at this time.
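The per-server grouping described in #2 can be sketched as follows. This is a minimal illustration, not actual HBase code: the server name is reduced to a plain String, region locations are assumed to be known up front, and the class and method names are invented for the example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: group per-region coprocessor requests by hosting
// server so that one batch RPC is sent per regionserver instead of one
// call per region.
public class BatchGrouping {
    // regionToServer maps a region name to the server hosting it.
    static Map<String, List<String>> groupByServer(Map<String, String> regionToServer) {
        Map<String, List<String>> batches = new HashMap<>();
        for (Map.Entry<String, String> e : regionToServer.entrySet()) {
            // One bucket per server; each bucket becomes one BatchCoprocessorServiceRequest.
            batches.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<String, String> locations = new HashMap<>();
        locations.put("region-1", "rs1:60020");
        locations.put("region-2", "rs1:60020");
        locations.put("region-3", "rs2:60020");
        // 3 regions on 2 servers collapse into 2 batch requests.
        System.out.println(groupByServer(locations).size()); // prints 2
    }
}
```

With 100 regions on a single regionserver, this grouping turns 100 client calls into one.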

> Batch coprocessor
> -
>
> Key: HBASE-10169
> URL: https://issues.apache.org/jira/browse/HBASE-10169
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Affects Versions: 0.99.0
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: Batch Coprocessor Design Document.docx, HBASE-10169.patch
>
>
> This is designed to improve the coprocessor invocation in the client side. 
> Currently the coprocessor invocation is to send a call to each region. If 
> there’s one region server, and 100 regions are located in this server, each 
> coprocessor invocation will send 100 calls, each call uses a single thread in 
> the client side. The threads will run out soon when the coprocessor 
> invocations are heavy. 
> In this design, all the calls to the same region server will be grouped into 
> one in a single coprocessor invocation. This call will be spread into each 
> region in the server side, and the results will be merged ahead in the server 
> side before being returned to the client.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10185) HBaseClient retries even though a DoNotRetryException was thrown

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863799#comment-13863799
 ] 

Lars Hofhansl commented on HBASE-10185:
---

I don't think we should change this behavior in 0.94 at this point, unless this 
is absolutely needed.

> HBaseClient retries even though a DoNotRetryException was thrown
> 
>
> Key: HBASE-10185
> URL: https://issues.apache.org/jira/browse/HBASE-10185
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.94.12
>Reporter: Samarth
> Fix For: 0.94.16
>
>
> Throwing a DoNotRetryIOException inside  Writable.write(Dataoutput) method 
> doesn't prevent HBase from retrying. Debugging the code locally, I figured 
> that the bug lies in the way HBaseClient simply throws an IOException when it 
> sees that a connection has been closed unexpectedly.  
> Method:
> public Writable call(Writable param, InetSocketAddress addr,
> Class<? extends VersionedProtocol> protocol,
> User ticket, int rpcTimeout)
> Excerpt of code where the bug is present:
> while (!call.done) {
>   if (connection.shouldCloseConnection.get()) {
>     throw new IOException("Unexpected closed connection");
>   }
> Throwing this IOException causes the ServerCallable.translateException(t) to 
> be a no-op resulting in HBase retrying. 
> From my limited view and understanding of the code, one way I could think of 
> handling this is by looking at the closeException member variable of a 
> connection to determine what kind of exception should be thrown. 
> Specifically, when a connection is closed, the current code does this: 
> protected synchronized void markClosed(IOException e) {
>   if (shouldCloseConnection.compareAndSet(false, true)) {
>     closeException = e;
>     notifyAll();
>   }
> }
> Within HBaseClient's call method, the code could possibly be modified to:
> while (!call.done) {
>   if (connection.shouldCloseConnection.get()) {
>     if (connection.closeException instanceof DoNotRetryIOException) {
>       throw connection.closeException;
>     }
>     throw new IOException("Unexpected closed connection");
>   }
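The check proposed above can be demonstrated with a minimal, self-contained sketch. DoNotRetryIOException is stubbed locally here rather than taken from HBase, and `translate` is an illustrative helper, not the actual HBaseClient code path:

```java
import java.io.IOException;

// Hypothetical sketch of the proposed fix: if the connection was closed
// with a DoNotRetryIOException, surface that exception as-is so the
// caller's retry logic bails out; otherwise fall back to the generic
// "Unexpected closed connection" IOException, which triggers a retry.
public class CloseExceptionDemo {
    // Local stand-in for org.apache.hadoop.hbase.DoNotRetryIOException.
    static class DoNotRetryIOException extends IOException {
        DoNotRetryIOException(String m) { super(m); }
    }

    static IOException translate(IOException closeException) {
        if (closeException instanceof DoNotRetryIOException) {
            return closeException; // caller must not retry
        }
        return new IOException("Unexpected closed connection", closeException);
    }

    public static void main(String[] args) {
        // A fatal close reason is preserved; a generic one is wrapped.
        System.out.println(translate(new DoNotRetryIOException("fatal")) instanceof DoNotRetryIOException);
        System.out.println(translate(new IOException("reset")) instanceof DoNotRetryIOException);
    }
}
```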



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863797#comment-13863797
 ] 

Lars Hofhansl edited comment on HBASE-10285 at 1/7/14 1:49 AM:
---

So we're fine allowing this in 0.94 but not (currently) in 0.96+?
If so, I'll commit.


was (Author: lhofhansl):
So we're fine allowing this in 0.94 but not (currently) in 0.96+.
If so, I'll commit.

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863797#comment-13863797
 ] 

Lars Hofhansl commented on HBASE-10285:
---

So we're fine allowing this in 0.94 but not (currently) in 0.96+.
If so, I'll commit.

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9830) Backport HBASE-9605 to 0.94

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863793#comment-13863793
 ] 

Lars Hofhansl commented on HBASE-9830:
--

Can't really put it in 0.94 if 0.96 does not also have it. [~stack], what do 
you think?

> Backport HBASE-9605 to 0.94
> ---
>
> Key: HBASE-9830
> URL: https://issues.apache.org/jira/browse/HBASE-9830
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.3
>Reporter: chendihao
>Priority: Minor
> Fix For: 0.94.17
>
> Attachments: HBASE-9830-0.94-v1.patch
>
>
> Backport HBASE-9605 which is about "Allow AggregationClient to skip 
> specifying column family for row count aggregate"



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10249) Intermittent TestReplicationSyncUpTool failure

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10249:
--

Fix Version/s: (was: 0.94.16)
   0.94.17

> Intermittent TestReplicationSyncUpTool failure
> --
>
> Key: HBASE-10249
> URL: https://issues.apache.org/jira/browse/HBASE-10249
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Demai Ni
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10249-trunk-v0.patch
>
>
> New issue to keep track of this.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9830) Backport HBASE-9605 to 0.94

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-9830:
-

Fix Version/s: (was: 0.94.16)
   0.94.17

> Backport HBASE-9605 to 0.94
> ---
>
> Key: HBASE-9830
> URL: https://issues.apache.org/jira/browse/HBASE-9830
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.3
>Reporter: chendihao
>Priority: Minor
> Fix For: 0.94.17
>
> Attachments: HBASE-9830-0.94-v1.patch
>
>
> Backport HBASE-9605 which is about "Allow AggregationClient to skip 
> specifying column family for row count aggregate"



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863790#comment-13863790
 ] 

Hudson commented on HBASE-9593:
---

SUCCESS: Integrated in HBase-0.98 #61 (See 
[https://builds.apache.org/job/HBase-0.98/61/])
Revert HBASE-9593. Region server left in online servers list forever if it went 
down after registering to master and before creating ephemeral node (apurtell: 
rev 1556055)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver was always showing online in 
> the master UI but was actually dead.
> If the region server went down between the following two steps, it stays in 
> the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from zookeeper, the master does not remove the 
> expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected; every time we need to wait for a timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssig

[jira] [Updated] (HBASE-10271) [regression] Cannot use the wildcard address since HBASE-9593

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10271:
--

Fix Version/s: (was: 0.94.16)

> [regression] Cannot use the wildcard address since HBASE-9593
> -
>
> Key: HBASE-10271
> URL: https://issues.apache.org/jira/browse/HBASE-10271
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.94.13, 0.96.1
>Reporter: Jean-Daniel Cryans
>Priority: Critical
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: HBASE-10271.patch
>
>
> HBASE-9593 moved the creation of the ephemeral znode earlier in the region 
> server startup process such that we don't have access to the ServerName from 
> the Master's POV. HRS.getMyEphemeralNodePath() calls HRS.getServerName() 
> which at that point will return this.isa.getHostName(). If you set 
> hbase.regionserver.ipc.address to 0.0.0.0, you will create a znode with that 
> address.
> What happens next is that the RS will report for duty correctly but the 
> master will do this:
> {noformat}
> 2014-01-02 11:45:49,498 INFO  [master:172.21.3.117:6] 
> master.ServerManager: Registering server=0:0:0:0:0:0:0:0%0,60020,1388691892014
> 2014-01-02 11:45:49,498 INFO  [master:172.21.3.117:6] master.HMaster: 
> Registered server found up in zk but who has not yet reported in: 
> 0:0:0:0:0:0:0:0%0,60020,1388691892014
> {noformat}
> The cluster is then unusable.
> I think a better solution is to track the heartbeats for the region servers 
> and expire those that haven't checked-in for some time. The 0.89-fb branch 
> has this concept, and they also use it to detect rack failures: 
> https://github.com/apache/hbase/blob/0.89-fb/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L1224.
>  In this jira's scope I would just add the heartbeat tracking and add a unit 
> test for the wildcard address.
> What do you think [~rajesh23]?
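The heartbeat tracking suggested above can be sketched minimally as follows. This is an illustration under assumed names, not the actual ServerManager API: the server is identified by a plain String and time is passed in explicitly to keep the example deterministic.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: record the last report time per regionserver and
// expire any server that has not checked in within the timeout, instead of
// relying solely on the ZooKeeper ephemeral node for liveness.
public class HeartbeatTracker {
    private final Map<String, Long> lastHeartbeat = new HashMap<>();
    private final long timeoutMs;

    HeartbeatTracker(long timeoutMs) { this.timeoutMs = timeoutMs; }

    // Called whenever a regionserver reports in to the master.
    void reportIn(String serverName, long nowMs) {
        lastHeartbeat.put(serverName, nowMs);
    }

    // Servers whose last heartbeat is older than the timeout are expired.
    List<String> expired(long nowMs) {
        List<String> dead = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastHeartbeat.entrySet()) {
            if (nowMs - e.getValue() > timeoutMs) {
                dead.add(e.getKey());
            }
        }
        return dead;
    }

    public static void main(String[] args) {
        HeartbeatTracker t = new HeartbeatTracker(3000);
        t.reportIn("rs1,60020", 0);
        t.reportIn("rs2,60020", 2000);
        // At t=5000ms only rs1 has exceeded the 3s timeout.
        System.out.println(t.expired(5000));
    }
}
```

A periodic chore on the master would call `expired()` and remove those servers from the online list, which also covers the wildcard-address case where the znode name is unusable.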



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10130) TestSplitLogManager#testTaskResigned fails sometimes

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863791#comment-13863791
 ] 

Hudson commented on HBASE-10130:


SUCCESS: Integrated in HBase-0.98 #61 (See 
[https://builds.apache.org/job/HBase-0.98/61/])
HBASE-10130. TestSplitLogManager#testTaskResigned fails sometimes (Ted Yu) 
(apurtell: rev 1556051)
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java


> TestSplitLogManager#testTaskResigned fails sometimes
> 
>
> Key: HBASE-10130
> URL: https://issues.apache.org/jira/browse/HBASE-10130
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10130-output.txt, 10130-v1.txt, 10130-v2.txt
>
>
> The test failed in 
> https://builds.apache.org/job/PreCommit-HBASE-Build/8131//testReport
> For testTaskResigned() :
> {code}
> int version = ZKUtil.checkExists(zkw, tasknode);
> // Could be small race here.
> if (tot_mgr_resubmit.get() == 0) waitForCounter(tot_mgr_resubmit, 0, 1, 
> to/2);
> {code}
> There was no log similar to the following (corresponding to waitForCounter() 
> call above):
> {code}
> 2013-12-10 21:23:54,905 INFO  [main] hbase.Waiter(174): Waiting up to [3,200] 
> milli-secs(wait.for.ratio=[1])
> {code}
> Meaning, the version (2) retrieved corresponded to the resubmitted task; version1 
> retrieved the same value, leading to the assertion failure.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863783#comment-13863783
 ] 

Hadoop QA commented on HBASE-10287:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621714/10287.txt
  against trunk revision .
  ATTACHMENT ID: 12621714

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8351//console

This message is automatically generated.

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863782#comment-13863782
 ] 

Hadoop QA commented on HBASE-10274:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12621734/HBASE-10274-0.94-v2.patch
  against trunk revision .
  ATTACHMENT ID: 12621734

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8352//console

This message is automatically generated.

> MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
> ---
>
> Key: HBASE-10274
> URL: https://issues.apache.org/jira/browse/HBASE-10274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-0.94-v2.patch, 
> HBASE-10274-truck-v1.patch, HBASE-10274-truck-v2.patch
>
>
> HBASE-6820 points out the problem but does not fix it completely.
> killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() will 
> shutdown the ZooKeeperServer and need to close the ZKDatabase as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-06 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HBASE-10274:
--

Attachment: HBASE-10274-0.94-v2.patch
HBASE-10274-truck-v2.patch

> MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
> ---
>
> Key: HBASE-10274
> URL: https://issues.apache.org/jira/browse/HBASE-10274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-0.94-v2.patch, 
> HBASE-10274-truck-v1.patch, HBASE-10274-truck-v2.patch
>
>
> HBASE-6820 points out the problem but does not fix it completely.
> killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() will 
> shutdown the ZooKeeperServer and need to close the ZKDatabase as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-6104) Require EXEC permission to call coprocessor endpoints

2014-01-06 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-6104:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and 0.98

> Require EXEC permission to call coprocessor endpoints
> -
>
> Key: HBASE-6104
> URL: https://issues.apache.org/jira/browse/HBASE-6104
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6104-addendum-1.patch, 6104-revert.patch, 6104.patch, 
> 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch
>
>
> The EXEC action currently exists as only a placeholder in access control.  It 
> should really be used to enforce access to coprocessor endpoint RPC calls, 
> which are currently unrestricted.
> How the ACLs to support this would be modeled deserves some discussion:
> * Should access be scoped to a specific table and CoprocessorProtocol 
> extension?
> * Should it be possible to grant access to a CoprocessorProtocol 
> implementation globally (regardless of table)?
> * Are per-method restrictions necessary?
> * Should we expose hooks available to endpoint implementors so that they 
> could additionally apply their own permission checks? Some CP endpoints may 
> want to require READ permissions, others may want to enforce WRITE, or READ + 
> WRITE.
> To apply these kinds of checks we would also have to extend the 
> RegionObserver interface to provide hooks wrapping HRegion.exec().



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863767#comment-13863767
 ] 

Hudson commented on HBASE-9593:
---

ABORTED: Integrated in HBase-0.94-JDK7 #21 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/21/])
HBASE-10286 Revert HBASE-9593, breaks RS wildcard addresses (larsh: rev 1556061)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver was always showing online in 
> the master UI but was actually dead.
> If the region server went down between the following two steps, it stays in 
> the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from zookeeper, the master does not remove the 
> expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected; every time we need to wait for a timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at 
> org.apache.hadoop.hbase.master.handler.Met

[jira] [Commented] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863766#comment-13863766
 ] 

Hudson commented on HBASE-10286:


ABORTED: Integrated in HBase-0.94-JDK7 #21 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/21/])
HBASE-10286 Revert HBASE-9593, breaks RS wildcard addresses (larsh: rev 1556061)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.16
>
> Attachments: 10286-0.94.txt
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-06 Thread chendihao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863764#comment-13863764
 ] 

chendihao commented on HBASE-10274:
---

bq. For the killOneBackupZooKeeperServer(), I think you are closing the 
ZKDatabase for the active server instead of the backupZkServer
My mistake. Fixed in the uploaded v2 patch; thanks for reviewing.

bq. Do you need to have this patch for 0.94?
I think it's better to fix it there too, because our codebase is based on 0.94.

> MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
> ---
>
> Key: HBASE-10274
> URL: https://issues.apache.org/jira/browse/HBASE-10274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-truck-v1.patch
>
>
> HBASE-6820 points out the problem but does not fix it completely.
> killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() 
> shut down the ZooKeeperServer and need to close the ZKDatabase as well.





[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863745#comment-13863745
 ] 

Hudson commented on HBASE-9593:
---

FAILURE: Integrated in HBase-0.94 #1254 (See 
[https://builds.apache.org/job/HBase-0.94/1254/])
HBASE-10286 Revert HBASE-9593, breaks RS wildcard addresses (larsh: rev 1556061)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver was always shown as online 
> in the master UI even though it was actually dead.
> If a region server goes down between the following two steps, it stays in 
> the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master does not remove 
> the expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected; every time we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServe

[jira] [Commented] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863744#comment-13863744
 ] 

Hudson commented on HBASE-10286:


FAILURE: Integrated in HBase-0.94 #1254 (See 
[https://builds.apache.org/job/HBase-0.94/1254/])
HBASE-10286 Revert HBASE-9593, breaks RS wildcard addresses (larsh: rev 1556061)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.16
>
> Attachments: 10286-0.94.txt
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.





[jira] [Commented] (HBASE-10272) Cluster becomes nonoperational if the node hosting the active Master AND ROOT/META table goes offline

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863736#comment-13863736
 ] 

Hudson commented on HBASE-10272:


SUCCESS: Integrated in hbase-0.96-hadoop2 #170 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/170/])
HBASE-10272 Cluster becomes nonoperational if the node hosting the active 
Master AND ROOT/META table goes offline (Aditya Kishore) (larsh: rev 1556015)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java


> Cluster becomes nonoperational if the node hosting the active Master AND 
> ROOT/META table goes offline
> -
>
> Key: HBASE-10272
> URL: https://issues.apache.org/jira/browse/HBASE-10272
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.96.1, 0.94.15
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Critical
> Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0
>
> Attachments: HBASE-10272.patch, HBASE-10272_0.94.patch
>
>
> Since HBASE-6364, HBase client caches a connection failure to a server and 
> any subsequent attempt to connect to the server throws a 
> {{FailedServerException}}
> Now if a node which hosted the active Master AND the ROOT/META table goes 
> offline, the newly anointed Master's initial attempt to connect to the dead 
> region server will fail with {{NoRouteToHostException}}, which it handles, 
> but the second attempt crashes with {{FailedServerException}}.
> Here is the log from one such occurrence:
> {noformat}
> 2013-11-20 10:58:00,161 FATAL org.apache.hadoop.hbase.master.HMaster: Master 
> server abort: loaded coprocessors are: []
> 2013-11-20 10:58:00,161 FATAL org.apache.hadoop.hbase.master.HMaster: 
> Unhandled exception. Starting shutdown.
> org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
> in the failed servers list: xxx02/192.168.1.102:60020
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:425)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
> at $Proxy9.getProtocolVersion(Unknown Source)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
> at 
> org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1335)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1294)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1281)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.getCachedConnection(CatalogTracker.java:506)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:383)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:445)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnection(CatalogTracker.java:464)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.verifyMetaRegionLocation(CatalogTracker.java:624)
> at 
> org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:684)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:560)
> at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:376)
> at java.lang.Thread.run(Thread.java:662)
> 2013-11-20 10:58:00,162 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
> 2013-11-20 10:58:00,162 INFO org.apache.hadoop.ipc.HBaseServer: Stopping 
> server on 6
> {noformat}
> Each of the backup masters will crash with the same error, and restarting 
> them will have the same effect. Once this happens, the cluster will remain 
> non-operational until the node with the region server is brought online (or 
> the Zookeeper node containing the root region server and/or the META entry 
> from the ROOT table is deleted).
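The fast-fail cache behavior introduced by HBASE-6364 that this report runs into can be sketched roughly as follows. All class, field, and method names here are illustrative, not HBase's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a failed-server cache: a connection failure is remembered
// for a short window, and any attempt inside that window fails fast instead
// of re-dialing the dead address.
public class FailedServerCache {
    private final Map<String, Long> failures = new HashMap<>();
    private final long expiryMillis;

    public FailedServerCache(long expiryMillis) {
        this.expiryMillis = expiryMillis;
    }

    public void recordFailure(String address, long nowMillis) {
        failures.put(address, nowMillis);
    }

    // True while the recorded failure is still fresh; a caller would throw
    // a FailedServerException here instead of attempting a connection.
    public boolean isFailed(String address, long nowMillis) {
        Long failedAt = failures.get(address);
        return failedAt != null && nowMillis - failedAt < expiryMillis;
    }
}
```

The crash above happens because the new Master's retry lands inside that window: the first attempt's NoRouteToHostException is handled, but the second attempt hits the still-fresh cache entry and gets the unhandled FailedServerException.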





[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863737#comment-13863737
 ] 

Hudson commented on HBASE-10078:


SUCCESS: Integrated in hbase-0.96-hadoop2 #170 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/170/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556026)
* 
/hbase/branches/0.96/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar loading 
> (https://issues.apache.org/jira/browse/HBASE-1936) but it seems to have an 
> issue with FilterList. 
> Here is some log from my app where I send a Get with a FilterList containing 
> AFilter and another with BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found by the default loader, so it is resolved with the 
> DynamicClassLoader, but when it then tries to load BFilter, it uses the 
> plain URLClassLoader and fails without checking for dynamic jars.
> I think the issue is related to FilterList#readFields
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> byte opByt
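The pattern the fix needs is to fall back to the dynamic class loader whenever the default loader misses. A minimal sketch of that try-default-then-fallback lookup; loadWithFallback is a hypothetical helper, not an HBase API:

```java
// Resolve with the default loader first, and only consult the dynamic
// (fallback) loader on a ClassNotFoundException.
public class DynamicLoaderFallback {
    public static Class<?> loadWithFallback(String name, ClassLoader dynamicLoader) {
        try {
            return Class.forName(name);               // default loader first
        } catch (ClassNotFoundException e) {
            try {
                // Only now check the loader that knows about dynamic jars.
                return Class.forName(name, true, dynamicLoader);
            } catch (ClassNotFoundException e2) {
                throw new RuntimeException("class not found anywhere: " + name, e2);
            }
        }
    }
}
```

The bug report amounts to FilterList#readFields resolving nested filter classes with only the first step and never reaching the second.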

[jira] [Commented] (HBASE-10284) Build broken with svn 1.8

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863735#comment-13863735
 ] 

Hudson commented on HBASE-10284:


SUCCESS: Integrated in hbase-0.96-hadoop2 #170 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/170/])
HBASE-10284 Build broken with svn 1.8 (larsh: rev 1555964)
* /hbase/branches/0.96/hbase-common/src/saveVersion.sh


> Build broken with svn 1.8
> -
>
> Key: HBASE-10284
> URL: https://issues.apache.org/jira/browse/HBASE-10284
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0
>
> Attachments: 10284.txt
>
>
> Just upgraded my machine and found that {{svn info}} displays a "Relative 
> URL:" line in svn 1.8.
> saveVersion.sh does not deal with that correctly.





[jira] [Commented] (HBASE-9426) Make custom distributed barrier procedure pluggable

2014-01-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863720#comment-13863720
 ] 

Ted Yu commented on HBASE-9426:
---

The long lines came from protobuf generated code.

> Make custom distributed barrier procedure pluggable 
> 
>
> Key: HBASE-9426
> URL: https://issues.apache.org/jira/browse/HBASE-9426
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.2, 0.94.11
>Reporter: Richard Ding
>Assignee: Richard Ding
> Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, 
> HBASE-9426-6.patch, HBASE-9426.patch.1, HBASE-9426.patch.2, HBASE-9426.patch.3
>
>
> Currently if one wants to implement a custom distributed barrier procedure 
> (e.g., distributed log roll or distributed table flush), the HBase core code 
> needs to be modified in order for the procedure to work.
> Looking into the snapshot code (especially on the region server side), most 
> of the code enabling the procedure is generic life-cycle management (i.e., 
> init, start, stop). We can make this part pluggable.
> Here is the proposal. Following the coprocessor example, we define two 
> properties:
> {code}
> hbase.procedure.regionserver.classes
> hbase.procedure.master.classes
> {code}
> The values for both are comma delimited list of classes. On region server 
> side, the classes implements the following interface:
> {code}
> public interface RegionServerProcedureManager {
>   public void initialize(RegionServerServices rss) throws KeeperException;
>   public void start();
>   public void stop(boolean force) throws IOException;
>   public String getProcedureName();
> }
> {code}
> While on Master side, the classes implement the interface:
> {code}
> public interface MasterProcedureManager {
>   public void initialize(MasterServices master) throws KeeperException, 
> IOException, UnsupportedOperationException;
>   public void stop(String why);
>   public String getProcedureName();
>   public void execProcedure(ProcedureDescription desc) throws IOException;
> }
> {code}
> Where the ProcedureDescription is defined as
> {code}
> message ProcedureDescription {
>   required string name = 1;
>   required string instance = 2;
>   optional int64 creationTime = 3 [default = 0];
>   message Property {
> required string tag = 1;
> optional string value = 2;
>   }
>   repeated Property props = 4;
> }
> {code}
> A generic API can be defined on HMaster to trigger a procedure:
> {code}
> public boolean execProcedure(ProcedureDescription desc) throws IOException;
> {code}
> _SnapshotManager_ and _RegionServerSnapshotManager_ are special examples of 
> _MasterProcedureManager_ and _RegionServerProcedureManager_. They will be 
> automatically included (users don't need to specify them in the conf file).
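As a rough illustration of the proposal, the sketch below mimics how a comma-delimited hbase.procedure.master.classes value could be instantiated by reflection and registered by procedure name. The interface is a simplified stand-in (HBase types such as MasterServices and ProcedureDescription are omitted), and LogRollProcedureManager is a hypothetical example implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the proposed MasterProcedureManager interface.
interface MasterProcedureManager {
    String getProcedureName();
    void execProcedure(String desc); // ProcedureDescription simplified to a String
}

// Hypothetical example, analogous to how SnapshotManager would plug in.
class LogRollProcedureManager implements MasterProcedureManager {
    public String getProcedureName() { return "log-roll"; }
    public void execProcedure(String desc) { /* trigger distributed log roll */ }
}

public class ProcedureManagerHost {
    private final Map<String, MasterProcedureManager> managers = new HashMap<>();

    // Mirrors consuming a comma-delimited class-name list from the conf:
    // instantiate each class reflectively and index it by procedure name.
    public void loadProcedures(String classList) {
        for (String cls : classList.split(",")) {
            try {
                MasterProcedureManager m = (MasterProcedureManager)
                    Class.forName(cls.trim()).getDeclaredConstructor().newInstance();
                managers.put(m.getProcedureName(), m);
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException("cannot load procedure manager " + cls, e);
            }
        }
    }

    public MasterProcedureManager get(String name) { return managers.get(name); }
}
```

A generic execProcedure on HMaster would then just look up the manager by the name in the ProcedureDescription and delegate to it.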





[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863714#comment-13863714
 ] 

Enis Soztutar commented on HBASE-10285:
---

bq. This works quite a bit differently in trunk. What would you recommend there?
Ok, never mind. I did not realize that this change only applies to CM being 
run from the command line. In trunk, with the CM refactor, we no longer have 
that ability (we should fix that). 
+1 to the patch. 

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863704#comment-13863704
 ] 

Enis Soztutar commented on HBASE-10274:
---

For killOneBackupZooKeeperServer(), I think you are closing the 
ZKDatabase for the active server instead of the backupZkServer: 
{code}
+zooKeeperServers.get(activeZKServerIndex).getZKDatabase().close();
+
 // remove this backup zk server
 standaloneServerFactoryList.remove(backupZKServerIndex);
{code} 
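The suggested fix is to close the backup server's own database. A toy model of the two kill paths; ZkDb stands in for ZKDatabase, and the field and method names are illustrative, not MiniZooKeeperCluster's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Each server owns a "database" that must be closed when that server is killed.
class ZkDb {
    boolean closed;
    void close() { closed = true; }
}

public class MiniClusterSketch {
    final List<ZkDb> databases = new ArrayList<>();
    int activeZKServerIndex = 0;

    void killCurrentActive() {
        databases.get(activeZKServerIndex).close(); // the active server's own DB
        databases.remove(activeZKServerIndex);
    }

    void killBackup(int backupZKServerIndex) {
        // The v1 patch closed the DB at activeZKServerIndex here by mistake;
        // the backup server's own database is the one that must be closed.
        databases.get(backupZKServerIndex).close();
        databases.remove(backupZKServerIndex);
    }
}
```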



> MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
> ---
>
> Key: HBASE-10274
> URL: https://issues.apache.org/jira/browse/HBASE-10274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-truck-v1.patch
>
>
> HBASE-6820 points out the problem but does not fix it completely.
> killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() 
> shut down the ZooKeeperServer and need to close the ZKDatabase as well.





[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863701#comment-13863701
 ] 

Cody Marcel commented on HBASE-10285:
-

This works quite a bit differently in trunk. What would you recommend there?

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863691#comment-13863691
 ] 

Enis Soztutar commented on HBASE-10285:
---

We would also need a trunk patch as well. 

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863690#comment-13863690
 ] 

Enis Soztutar commented on HBASE-10285:
---

bq. Hmm, the option on trunk is different, it's -m/-monkey
-monkey is the wrong approach. We should have a single monkey implementation 
with pluggable policies rather than a pluggable monkey, but changing that in 
this patch might be overkill. 
My only concern is that maybe we should rename -policy to -chaospolicy, to be 
more explicit. Other than that, +1. 

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Updated] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10287:
---

Status: Patch Available  (was: Open)

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate(), where the 
> controller parameter could be null.
> HRegionServer#addResult() should therefore check whether rpcc is null.





[jira] [Commented] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863672#comment-13863672
 ] 

Hudson commented on HBASE-10286:


SUCCESS: Integrated in HBase-0.94-security #382 (See 
[https://builds.apache.org/job/HBase-0.94-security/382/])
HBASE-10286 Revert HBASE-9593, breaks RS wildcard addresses (larsh: rev 1556061)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.16
>
> Attachments: 10286-0.94.txt
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.





[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863673#comment-13863673
 ] 

Hudson commented on HBASE-9593:
---

SUCCESS: Integrated in HBase-0.94-security #382 (See 
[https://builds.apache.org/job/HBase-0.94-security/382/])
HBASE-10286 Revert HBASE-9593, breaks RS wildcard addresses (larsh: rev 1556061)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java


> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver always shows as online in 
> the master UI even though it is actually dead.
> If the region server goes down between the following two steps, it remains 
> in the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master never removes the 
> expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected every time; we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at 
> org.apache.hadoop.hbase.master.h

[jira] [Updated] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10287:
---

Attachment: 10287.txt

> HRegionServer#addResult() should check whether rpcc is null
> ---
>
> Key: HBASE-10287
> URL: https://issues.apache.org/jira/browse/HBASE-10287
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 10287.txt
>
>
> HRegionServer#addResult() is called by HRegionServer#mutate() where 
> controller parameter could be null.
> HRegionServer#addResult() should check whether rpcc is null.





[jira] [Created] (HBASE-10287) HRegionServer#addResult() should check whether rpcc is null

2014-01-06 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10287:
--

 Summary: HRegionServer#addResult() should check whether rpcc is 
null
 Key: HBASE-10287
 URL: https://issues.apache.org/jira/browse/HBASE-10287
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


HRegionServer#addResult() is called by HRegionServer#mutate() where controller 
parameter could be null.

HRegionServer#addResult() should check whether rpcc is null.





[jira] [Commented] (HBASE-6104) Require EXEC permission to call coprocessor endpoints

2014-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863663#comment-13863663
 ] 

Hadoop QA commented on HBASE-6104:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621697/6104.patch
  against trunk revision .
  ATTACHMENT ID: 12621697

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8350//console

This message is automatically generated.

> Require EXEC permission to call coprocessor endpoints
> -
>
> Key: HBASE-6104
> URL: https://issues.apache.org/jira/browse/HBASE-6104
> Project: HBase
>  Issue Type: New Feature
>  Components: Coprocessors, security
>Reporter: Gary Helmling
>Assignee: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6104-addendum-1.patch, 6104-revert.patch, 6104.patch, 
> 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch, 6104.patch
>
>
> The EXEC action currently exists as only a placeholder in access control.  It 
> should really be used to enforce access to coprocessor endpoint RPC calls, 
> which are currently unrestricted.
> How the ACLs to support this would be modeled deserves some discussion:
> * Should access be scoped to a specific table and CoprocessorProtocol 
> extension?
> * Should it be possible to grant access to a CoprocessorProtocol 
> implementation globally (regardless of table)?
> * Are per-method restrictions necessary?
> * Should we expose hooks available to endpoint implementors so that they 
> could additionally apply their own permission checks? Some CP endpoints may 
> want to require READ permissions, others may want to enforce WRITE, or READ + 
> WRITE.
> To apply these kinds of checks we would also have to extend the 
> RegionObserver interface to provide hooks wrapping HRegion.exec().
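The EXEC-gating idea can be modeled roughly as below. This is a hedged sketch with invented names ({{Permission}}, {{mayInvokeEndpoint}}, the grant map); it does not reflect the actual AccessController API, and table/endpoint scoping from the questions above is deliberately omitted.

```java
import java.util.Map;
import java.util.Set;

// Toy model of gating coprocessor endpoint RPCs on an EXEC grant; the names
// here are invented and do not match the real AccessController API.
public class ExecCheckSketch {
    public enum Permission { READ, WRITE, EXEC }

    // user -> permissions granted (scoping to table/endpoint omitted)
    private final Map<String, Set<Permission>> grants;

    public ExecCheckSketch(Map<String, Set<Permission>> grants) {
        this.grants = grants;
    }

    // Invoked before dispatching an endpoint call; deny users without EXEC.
    public boolean mayInvokeEndpoint(String user) {
        Set<Permission> perms = grants.get(user);
        return perms != null && perms.contains(Permission.EXEC);
    }
}
```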





[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863656#comment-13863656
 ] 

Hudson commented on HBASE-10078:


FAILURE: Integrated in hbase-0.96 #251 (See 
[https://builds.apache.org/job/hbase-0.96/251/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556026)
* 
/hbase/branches/0.96/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar loading 
> (https://issues.apache.org/jira/browse/HBASE-1936), but it seems to have an 
> issue with FilterList. 
> Here is some log from my app where I send a Get with a FilterList containing 
> an AFilter and another with a BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found, so it is retried with DynamicClassLoader, but when it 
> then tries to load BFilter, it uses URLClassLoader and fails without 
> checking for dynamic jars.
> I think the issue is related to FilterList#readFields:
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> byte opByte = in.readByte(

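The fix direction the report implies can be sketched like this. Hedged: {{loadWithFallback}} is an invented helper and DynamicClassLoader's real API is not shown; this only illustrates trying the default loader first and falling back to a loader that also scans newly deployed jars.

```java
// Invented helper illustrating the missing fallback: try the default
// loader first, then a second loader that would also scan dynamic jars.
public class ClassLoadFallback {
    public static Class<?> loadWithFallback(String name, ClassLoader dynamic)
            throws ClassNotFoundException {
        try {
            // URLClassLoader path: fails for classes only present in dynamic jars
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            // fall back to the loader that knows about newly deployed jars
            return dynamic.loadClass(name);
        }
    }
}
```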
[jira] [Commented] (HBASE-10272) Cluster becomes nonoperational if the node hosting the active Master AND ROOT/META table goes offline

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863655#comment-13863655
 ] 

Hudson commented on HBASE-10272:


FAILURE: Integrated in hbase-0.96 #251 (See 
[https://builds.apache.org/job/hbase-0.96/251/])
HBASE-10272 Cluster becomes nonoperational if the node hosting the active 
Master AND ROOT/META table goes offline (Aditya Kishore) (larsh: rev 1556015)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java


> Cluster becomes nonoperational if the node hosting the active Master AND 
> ROOT/META table goes offline
> -
>
> Key: HBASE-10272
> URL: https://issues.apache.org/jira/browse/HBASE-10272
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Affects Versions: 0.96.1, 0.94.15
>Reporter: Aditya Kishore
>Assignee: Aditya Kishore
>Priority: Critical
> Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0
>
> Attachments: HBASE-10272.patch, HBASE-10272_0.94.patch
>
>
> Since HBASE-6364, the HBase client caches a connection failure to a server, 
> and any subsequent attempt to connect to that server throws a 
> {{FailedServerException}}.
> Now if a node which hosted the active Master AND the ROOT/META table goes 
> offline, the newly anointed Master's initial attempt to connect to the dead 
> region server fails with {{NoRouteToHostException}}, which it handles, but 
> the second attempt crashes with {{FailedServerException}}.
> Here is the log from one such occurrence:
> {noformat}
> 2013-11-20 10:58:00,161 FATAL org.apache.hadoop.hbase.master.HMaster: Master 
> server abort: loaded coprocessors are: []
> 2013-11-20 10:58:00,161 FATAL org.apache.hadoop.hbase.master.HMaster: 
> Unhandled exception. Starting shutdown.
> org.apache.hadoop.hbase.ipc.HBaseClient$FailedServerException: This server is 
> in the failed servers list: xxx02/192.168.1.102:60020
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:425)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
> at $Proxy9.getProtocolVersion(Unknown Source)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
> at 
> org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1335)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1294)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1281)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.getCachedConnection(CatalogTracker.java:506)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:383)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMeta(CatalogTracker.java:445)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnection(CatalogTracker.java:464)
> at 
> org.apache.hadoop.hbase.catalog.CatalogTracker.verifyMetaRegionLocation(CatalogTracker.java:624)
> at 
> org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:684)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:560)
> at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:376)
> at java.lang.Thread.run(Thread.java:662)
> 2013-11-20 10:58:00,162 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
> 2013-11-20 10:58:00,162 INFO org.apache.hadoop.ipc.HBaseServer: Stopping 
> server on 6
> {noformat}
> Each of the backup masters will crash with the same error, and restarting 
> them will have the same effect. Once this happens, the cluster remains 
> inoperable until the node with the region server is brought back online (or 
> the ZooKeeper node containing the root region server and/or the META entry 
> from the ROOT table is deleted).
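A minimal model of the failure-caching behavior described above (hedged: the names are invented, and the real HBaseClient failed-server list differs in detail; this only shows why a retry shortly after a failure is rejected up front):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the failed-server cache introduced by HBASE-6364: a server
// that failed recently is rejected immediately until its entry expires.
// Names are invented; the real HBaseClient code differs.
public class FailedServersSketch {
    private final Map<String, Long> failedUntil = new HashMap<>();
    private final long expiryMs;

    public FailedServersSketch(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    public void recordFailure(String server, long nowMs) {
        failedUntil.put(server, nowMs + expiryMs);
    }

    // True while connection attempts would be rejected up front,
    // mirroring the FailedServerException in the log above.
    public boolean isFailed(String server, long nowMs) {
        Long until = failedUntil.get(server);
        return until != null && nowMs < until;
    }
}
```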





[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863653#comment-13863653
 ] 

Enis Soztutar commented on HBASE-10274:
---

bq. BTW, the patch of HBASE-6820 is not committed in 0.94.
No, it is not. Do you need this patch for 0.94? In that case we would have to 
open another issue to backport it to 0.94. 


> MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
> ---
>
> Key: HBASE-10274
> URL: https://issues.apache.org/jira/browse/HBASE-10274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-truck-v1.patch
>
>
> HBASE-6820 points out the problem but not fix completely.
> killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() will 
> shutdown the ZooKeeperServer and need to close ZKDatabase as well.





[jira] [Commented] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863654#comment-13863654
 ] 

Sergey Shelukhin commented on HBASE-10241:
--

That would be a PITA from a backward-compat perspective - we'd both add a field, 
requiring HFileFormat v4 (we don't really want tag overhead for this), and 
presumably (tags or not) remove the old magic mechanism.

> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.





[jira] [Commented] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863651#comment-13863651
 ] 

Enis Soztutar commented on HBASE-10241:
---

bq. Mvcc can already be serialized with KV in HFile. Comment in KeyValue.java 
is a lie
Sorry, I am not talking about mvcc serialization in hfile. I was talking about 
making mvcc number a part of the byte[] in KV. 

> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.





[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863626#comment-13863626
 ] 

Lars Hofhansl commented on HBASE-9593:
--

This is a code change between releases, though. Doesn't matter whether that 
change was applied with patch -R or via Eclipse :)

I'm also generating release notes from jira, in order to do that it needs an 
entry.
In any case I created HBASE-10286 for the 0.94 revert. If you like you can tag 
that with 0.98 if you want a record.


> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver always shows as online in 
> the master UI even though it is actually dead.
> If the region server goes down between the following two steps, it remains 
> in the master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master never removes the 
> expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected every time; we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.process(MetaServer

[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863624#comment-13863624
 ] 

Sergey Shelukhin commented on HBASE-10285:
--

Hmm, the option on trunk is different: it's -m/-monkey. It's also run 
differently in 0.94 and 0.96 after the refactor, so I guess it should be ok. +0 
from me given Lars' +1.

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command-line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard-coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, and now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863623#comment-13863623
 ] 

Hudson commented on HBASE-10078:


SUCCESS: Integrated in HBase-0.98 #60 (See 
[https://builds.apache.org/job/HBase-0.98/60/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556025)
* 
/hbase/branches/0.98/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar loading 
> (https://issues.apache.org/jira/browse/HBASE-1936), but it seems to have an 
> issue with FilterList. 
> Here is some log from my app where I send a Get with a FilterList containing 
> an AFilter and another with a BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found so it tries with DynamicClassLoader, but when it tries 
> to load AFilter, it uses URLClassLoader and fails without checking out for 
> dynamic jars.
> I think the issue is releated to FilterList#readFields
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> byte opByte = in.readByte();

[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863618#comment-13863618
 ] 

Hudson commented on HBASE-10078:


ABORTED: Integrated in HBase-0.94-JDK7 #20 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/20/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556027)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/Classes.java
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>

[jira] [Commented] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863615#comment-13863615
 ] 

Sergey Shelukhin commented on HBASE-10241:
--

There's another issue, HBASE-10227, for the WAL stuff.
Mvcc can already be serialized with the KV in the HFile; the comment in 
KeyValue.java saying otherwise is a lie :)

> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.
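The first step described above (the client scanner obtaining and re-sending its read point) can be sketched as follows. This is a hypothetical illustration of the pattern, not HBase API; all names here are assumptions.

```java
// Hypothetical sketch: the client scanner obtains a read point once and
// re-sends it on every (re)open, so a region move or server restart does
// not change the scanner's view. Names are illustrative, not HBase API.
public class ReadPointScanner {
  private final long readPoint;   // fetched once from the serving region

  public ReadPointScanner(long serverAssignedReadPoint) {
    this.readPoint = serverAssignedReadPoint;
  }

  // Included with every scan request, including retries after a region move.
  public long getReadPoint() {
    return readPoint;
  }

  // Only cells written at or below the stored read point are visible.
  public boolean isVisible(long cellMvcc) {
    return cellMvcc <= readPoint;
  }
}
```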



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10285:
--

Attachment: HBASE-10285.txt

Reattached as .txt, +1 from me.

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285, HBASE-10285.txt
>
>
> For command-line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard-coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will be used instead.
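The behavior described above can be sketched as a small option parser that falls back to the formerly hard-coded policy. The option name `-policy` and the helper class are illustrative assumptions, not the actual patch.

```java
// Hypothetical sketch: pick the ChaosMonkey policy from the command line,
// defaulting to the formerly hard-coded EVERY_MINUTE_RANDOM_ACTION_POLICY.
// The "-policy" flag and class name are illustrative, not HBase code.
public class PolicyOption {
  static final String DEFAULT_POLICY = "EVERY_MINUTE_RANDOM_ACTION_POLICY";

  static String choosePolicy(String[] args) {
    for (int i = 0; i < args.length - 1; i++) {
      if ("-policy".equals(args[i])) {
        return args[i + 1];        // user-supplied policy wins
      }
    }
    return DEFAULT_POLICY;         // hard-coded value becomes the default
  }
}
```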



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8889) TestIOFencing#testFencingAroundCompaction occasionally fails

2014-01-06 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863612#comment-13863612
 ] 

Ted Yu commented on HBASE-8889:
---

From the log:
{code}
2013-12-28 03:13:53,404 DEBUG [pool-1-thread-1] 
hbase.TestIOFencing$CompactionBlockerRegion(103): allowing compactions
...
2013-12-28 03:13:53,413 DEBUG 
[RS:0;asf002:54266-shortCompactions-1388200422935] hdfs.DFSInputStream(1095): 
Error making BlockReader. Closing stale   
NioInetPeer(Socket[addr=/127.0.0.1,port=57329,localport=55235])
java.io.EOFException: Premature EOF: no length prefix available
  at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
  at 
org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:392)
  at 
org.apache.hadoop.hdfs.BlockReaderFactory.newBlockReader(BlockReaderFactory.java:131)
  at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1088)
  at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:533)
  at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
  at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
  at java.io.DataInputStream.read(DataInputStream.java:132)
  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
  at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1210)
  at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1483)
  at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
  at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
  at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:765)
  at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:245)
  at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:153)
  at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:319)
  at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:242)
  at 
org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:202)
  at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:257)
  at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
  at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
  at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1074)
  at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1378)
  at 
org.apache.hadoop.hbase.TestIOFencing$CompactionBlockerRegion.compact(TestIOFencing.java:118)
{code}
There was an EOFException while seeking the scanner.
{code}
public boolean compact(CompactionContext compaction, Store store) throws 
IOException {
  try {
return super.compact(compaction, store);
  } finally {
compactCount++;
  }
}
{code}
However, compactCount is incremented in the finally block even when the 
compaction fails, so the following loop exits prematurely:
{code}
  while (compactingRegion.compactCount == 0) {
Thread.sleep(1000);
  }
{code}

> TestIOFencing#testFencingAroundCompaction occasionally fails
> 
>
> Key: HBASE-8889
> URL: https://issues.apache.org/jira/browse/HBASE-8889
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
> Attachments: TestIOFencing.tar.gz
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/6232//testReport/org.apache.hadoop.hbase/TestIOFencing/testFencingAroundCompaction/
>  :
> {code}
> java.lang.AssertionError: Timed out waiting for new server to open region
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hbase.TestIOFencing.doTest(TestIOFencing.java:269)
>   at 
> org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:205)
> {code}
> {code}
> 2013-07-06 23:13:53,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
> Waiting for the new server to pick up the region 
> tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
> 2013-07-06 23:13:54,120 INFO  [pool-1-thread-1] hbase.TestIOFencing(266): 
> Waiting for the new server to pick up the region 
> tabletest,,1373152125442.6e62d3b24ea23160931362b60359ff03.
> 2013-07-06 23:13:55,121 DEBUG [pool-1-thread-1] 
> hbase.TestIOFencing$CompactionBlockerRegion(102): allowing compactions
> 2013-07-06 23:13:55,121 INFO  [pool-1-thread-1] 
> hbase.HBaseTestingUtility(911): Shutting down minicluster
> 2013-07-06 23:13:55,121 DE

[jira] [Updated] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10286:
--

Attachment: 10286-0.94.txt

Patch for 0.94, reverts part of HBASE-9842.

> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.16
>
> Attachments: 10286-0.94.txt
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10286:
--


Committed to 0.94.
[~apurtell], [~stack]

> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.16
>
> Attachments: 10286-0.94.txt
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-10286.
---

Resolution: Fixed
  Assignee: Lars Hofhansl

> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.94.16
>
> Attachments: 10286-0.94.txt
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863593#comment-13863593
 ] 

Andrew Purtell edited comment on HBASE-9593 at 1/6/14 11:11 PM:


A revert is just an application of this patch with -R. I commented here and on 
HBASE-10271

Edit: ... for 0.98. For released versions, RMs could create a new JIRA. I don't 
think that's necessary; an SVN commit message starting with "Revert 
HBASE-9593..." would work (IMO).


was (Author: apurtell):
A revert is just an application of this patch with -R. I commented here and on 
HBASE-10271

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that a regionserver always shows as online in 
> the master UI even though it is actually dead.
> If a region server goes down between the following two steps, it stays in 
> the master's online servers list forever:
> 1) register to the master
> 2) create the ephemeral znode
> Since there is no notification from ZooKeeper, the master never removes the 
> expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected every time; we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.jav

[jira] [Updated] (HBASE-10286) Revert HBASE-9593, breaks RS wildcard addresses

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10286:
--

Summary: Revert HBASE-9593, breaks RS wildcard addresses  (was: Revert 
HBASE-9593, breaks wildcard addresses)

> Revert HBASE-9593, breaks RS wildcard addresses
> ---
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.16
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863601#comment-13863601
 ] 

Hudson commented on HBASE-10078:


SUCCESS: Integrated in HBase-TRUNK #4794 (See 
[https://builds.apache.org/job/HBase-TRUNK/4794/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556024)
* 
/hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>

[jira] [Commented] (HBASE-10284) Build broken with svn 1.8

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863597#comment-13863597
 ] 

Hudson commented on HBASE-10284:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #54 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/54/])
HBASE-10284 Build broken with svn 1.8 (larsh: rev 1555963)
* /hbase/branches/0.98/hbase-common/src/saveVersion.sh


> Build broken with svn 1.8
> -
>
> Key: HBASE-10284
> URL: https://issues.apache.org/jira/browse/HBASE-10284
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0
>
> Attachments: 10284.txt
>
>
> Just upgraded my machine and found that {{svn info}} displays a "Relative 
> URL:" line in svn 1.8.
> saveVersion.sh does not deal with that correctly.
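A robust way to pull the URL out of `svn info` output is to anchor the match at the start of the line, so the `Relative URL:` line added in svn 1.8 is not picked up as well. This is an illustrative sketch of the pattern, not the actual saveVersion.sh fix:

```shell
# Illustrative: extract only the absolute "URL:" line from `svn info` output.
# svn 1.8 adds a "Relative URL: ^/..." line that an unanchored match for
# "URL" would also hit.
svn_info='URL: https://svn.apache.org/repos/asf/hbase/trunk
Relative URL: ^/hbase/trunk'
url=$(printf '%s\n' "$svn_info" | sed -n 's/^URL: //p')
echo "$url"
```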



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863595#comment-13863595
 ] 

Jimmy Xiang commented on HBASE-10078:
-

The fix went into 0.94 only. I also added some tests to the other branches to 
cover FilterList; there is no other code change there.
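The fallback the 0.94 fix needs can be sketched as follows: try the default class loader first, then fall back to a configurable "dynamic" loader, which is what the nested-filter path was missing. All names here are illustrative assumptions, not HBase API.

```java
// Hypothetical sketch of the fallback pattern: resolve a filter class via
// the default loader first, and only on failure hand the name to a
// "dynamic" loader that also knows about newly deployed jars. Class and
// method names are illustrative, not HBase code.
public class FallbackClassResolver {
  private final ClassLoader dynamicLoader;

  public FallbackClassResolver(ClassLoader dynamicLoader) {
    this.dynamicLoader = dynamicLoader;
  }

  public Class<?> resolve(String className) throws ClassNotFoundException {
    try {
      return Class.forName(className);    // plain URLClassLoader path
    } catch (ClassNotFoundException e) {
      // Nested filters deserialized by FilterList should take this branch
      // too, instead of resolving through Class.forName only.
      return Class.forName(className, true, dynamicLoader);
    }
  }
}
```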

> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar load 
> (https://issues.apache.org/jira/browse/HBASE-1936) but seems to have an issue 
> with FilterList. 
> Here is some log from my app where i send a Get with a FilterList containing 
> AFilter and other with BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found so it tries with DynamicClassLoader, but when it tries 
> to load BFilter, it uses URLClassLoader and fails without checking for 
> dynamic jars.
> I think the issue is related to FilterList#readFields
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> byte opByte = in.readByte();
> operator = Operator.values()[opByte];
> int size = in.readInt();
> if (size > 0) {
>   filters = new ArrayList<Filter>(size);
>   for (int i = 0; i < si

[jira] [Created] (HBASE-10286) Revert HBASE-9593, breaks wildcard addresses

2014-01-06 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-10286:
-

 Summary: Revert HBASE-9593, breaks wildcard addresses
 Key: HBASE-10286
 URL: https://issues.apache.org/jira/browse/HBASE-10286
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
 Fix For: 0.94.16






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863593#comment-13863593
 ] 

Andrew Purtell commented on HBASE-9593:
---

A revert is just an application of this patch with -R. I commented here and on 
HBASE-10271

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that the regionserver always shows as online 
> in the master UI but is actually dead.
> If the region server goes down between the following steps, it stays in the 
> master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master does not remove 
> the expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected each time, and we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.process(MetaServerShutdownHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.conc

[jira] [Updated] (HBASE-10286) Revert HBASE-9593, breaks wildcard addresses

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10286:
--

Description: 
See discussion on HBASE-10271.
This breaks regionserver wildcard bind addresses.
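The wildcard-bind breakage can be seen with plain JDK classes: a socket bound to 0.0.0.0 reports the "any" local address, which is fine for listening but useless as a server identity in ZooKeeper. The class and method names below (`WildcardDemo`, `isWildcard`) are illustrative only, not HBase code:

```java
import java.net.InetSocketAddress;

public class WildcardDemo {
    // Returns true when the given bind address is the wildcard "any"
    // address: fine for listening on all interfaces, but useless as a
    // server identity because it names no concrete host.
    public static boolean isWildcard(String host, int port) {
        return new InetSocketAddress(host, port).getAddress().isAnyLocalAddress();
    }

    public static void main(String[] args) {
        System.out.println(isWildcard("0.0.0.0", 60020));  // wildcard bind
        System.out.println(isWildcard("127.0.0.1", 60020)); // concrete address
    }
}
```

Deriving the ephemeral znode name from such an address is what produces the unusable `0:0:0:0:0:0:0:0%0,...` registration seen in HBASE-10271.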

> Revert HBASE-9593, breaks wildcard addresses
> 
>
> Key: HBASE-10286
> URL: https://issues.apache.org/jira/browse/HBASE-10286
> Project: HBase
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Fix For: 0.94.16
>
>
> See discussion on HBASE-10271.
> This breaks regionserver wildcard bind addresses.





[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863592#comment-13863592
 ] 

Enis Soztutar commented on HBASE-10285:
---

Cody, can you re-attach this as a patch? Currently it is .html. 

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Commented] (HBASE-9593) Region server left in online servers list forever if it went down after registering to master and before creating ephemeral node

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863589#comment-13863589
 ] 

Lars Hofhansl commented on HBASE-9593:
--

Since there are released versions with this, I think we need to have a separate 
jira for the revert.

> Region server left in online servers list forever if it went down after 
> registering to master and before creating ephemeral node
> 
>
> Key: HBASE-9593
> URL: https://issues.apache.org/jira/browse/HBASE-9593
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.94.11
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.96.1
>
> Attachments: 9593-0.94.txt, HBASE-9593.patch, HBASE-9593_v2.patch, 
> HBASE-9593_v3.patch
>
>
> In some of our tests we found that the regionserver always shows as online 
> in the master UI but is actually dead.
> If the region server goes down between the following steps, it stays in the 
> master's online servers list forever:
> 1) register to master
> 2) create ephemeral znode
> Since there is no notification from ZooKeeper, the master does not remove 
> the expired server from the online servers list.
> Assignments will fail if the RS is selected as the destination server.
> In some cases ROOT or META also won't be assigned if the RS is randomly 
> selected each time, and we need to wait for the timeout.
> Here are the logs:
> 1) HOST-10-18-40-153 is registered to master
> {code}
> 2013-09-19 19:47:41,123 DEBUG org.apache.hadoop.hbase.master.ServerManager: 
> STARTUP: Server HOST-10-18-40-153,61020,1379600260255 came back up, removed 
> it from the dead servers list
> 2013-09-19 19:47:41,123 INFO org.apache.hadoop.hbase.master.ServerManager: 
> Registering server=HOST-10-18-40-153,61020,1379600260255
> {code}
> {code}
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to master at 
> HOST-10-18-40-153/10.18.40.153:61000
> 2013-09-19 19:47:41,119 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 
> HOST-10-18-40-153,61000,1379600055284 that we are up with port=61020, 
> startcode=1379600260255
> {code}
> 2) Terminated before creating ephemeral node.
> {code}
> Thu Sep 19 19:47:41 IST 2013 Terminating regionserver
> {code}
> 3) The RS can be selected for assignment and they will fail.
> {code}
> 2013-09-19 19:47:54,049 WARN 
> org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
> -ROOT-,,0.70236052 to HOST-10-18-40-153,61020,1379600260255, trying to assign 
> elsewhere instead; retry=0
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:436)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1127)
>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>   at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>   at $Proxy15.openRegion(Unknown Source)
>   at 
> org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:533)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1734)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1431)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1406)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1401)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2374)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRoot(MetaServerShutdownHandler.java:136)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.verifyAndAssignRootWithRetries(MetaServerShutdownHandler.java:160)
>   at 
> org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.process(MetaServerShutdownHandler.java:82)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.u

[jira] [Commented] (HBASE-10241) implement mvcc-consistent scanners (across recovery)

2014-01-06 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863590#comment-13863590
 ] 

Enis Soztutar commented on HBASE-10241:
---

bq. but the plan was that I will do 1 and 3, and then take 2 if the other JIRA 
that does 2 is not done by then. 
Sounds good. I thought HBASE-8721 was marked won't-fix. 
bq. HBASE-8763 does not need to block this, it's probably bigger than this 
entire JIRA
Indeed. But it would be a shame if we add mvccs to the WAL only to remove them 
again after HBASE-8763.

BTW, I think we also have to handle mvcc / seqId as a part of the serialization 
in the KV byte array. Do we have any open issues for that? 




> implement mvcc-consistent scanners (across recovery)
> 
>
> Key: HBASE-10241
> URL: https://issues.apache.org/jira/browse/HBASE-10241
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile, regionserver, Scanners
>Affects Versions: 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: Consistent scanners.pdf
>
>
> Scanners currently use mvcc for consistency. However, mvcc is lost on server 
> restart, or even a region move. This JIRA is to enable the scanners to 
> transfer mvcc (or seqId, or some other number, see HBASE-8763) between 
> servers. First, client scanner needs to get and store the readpoint. Second, 
> mvcc needs to be preserved in WAL. Third, the mvcc needs to be stored in 
> store files per KV and discarded when not needed.
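The first step above (the client scanner holding on to its read point so a new server can honor it after a region move or restart) can be sketched as follows. This is a hedged illustration under the design described in the issue, not the actual HBase client API; `ReadPointScanner`, `reopenReadPoint`, and `isVisible` are hypothetical names:

```java
// Hypothetical sketch: a client-side scanner that remembers the mvcc read
// point it was opened at, so it can re-open at the same point on another
// server and keep a consistent view across recovery.
public class ReadPointScanner {
    private final long readPoint;    // mvcc read point fixed at open time
    private long lastSeenMvcc = -1;  // highest mvcc actually returned so far

    public ReadPointScanner(long readPoint) {
        this.readPoint = readPoint;
    }

    /** On a region move, the new server must be told the same read point. */
    public long reopenReadPoint() {
        return readPoint;
    }

    /** A KV is visible only if committed at or before our read point. */
    public boolean isVisible(long kvMvcc) {
        if (kvMvcc <= readPoint) {
            lastSeenMvcc = Math.max(lastSeenMvcc, kvMvcc);
            return true;
        }
        return false;
    }

    public long getLastSeenMvcc() {
        return lastSeenMvcc;
    }
}
```

Steps two and three (preserving mvcc in the WAL and per-KV in store files) are what make `isVisible` answerable at all after a restart.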





[jira] [Commented] (HBASE-10271) [regression] Cannot use the wildcard address since HBASE-9593

2014-01-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863583#comment-13863583
 ] 

Lars Hofhansl commented on HBASE-10271:
---

I'll do the same from 0.94. I'd prefer not to introduce another heartbeat 
mechanism over what we have from ZK. (we used to have heartbeats and then 
removed them in favor of ZK before my time, right?)

> [regression] Cannot use the wildcard address since HBASE-9593
> -
>
> Key: HBASE-10271
> URL: https://issues.apache.org/jira/browse/HBASE-10271
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.94.13, 0.96.1
>Reporter: Jean-Daniel Cryans
>Priority: Critical
> Fix For: 0.98.0, 0.94.16, 0.96.2, 0.99.0
>
> Attachments: HBASE-10271.patch
>
>
> HBASE-9593 moved the creation of the ephemeral znode earlier in the region 
> server startup process such that we don't have access to the ServerName from 
> the Master's POV. HRS.getMyEphemeralNodePath() calls HRS.getServerName() 
> which at that point will return this.isa.getHostName(). If you set 
> hbase.regionserver.ipc.address to 0.0.0.0, you will create a znode with that 
> address.
> What happens next is that the RS will report for duty correctly but the 
> master will do this:
> {noformat}
> 2014-01-02 11:45:49,498 INFO  [master:172.21.3.117:6] 
> master.ServerManager: Registering server=0:0:0:0:0:0:0:0%0,60020,1388691892014
> 2014-01-02 11:45:49,498 INFO  [master:172.21.3.117:6] master.HMaster: 
> Registered server found up in zk but who has not yet reported in: 
> 0:0:0:0:0:0:0:0%0,60020,1388691892014
> {noformat}
> The cluster is then unusable.
> I think a better solution is to track the heartbeats for the region servers 
> and expire those that haven't checked-in for some time. The 0.89-fb branch 
> has this concept, and they also use it to detect rack failures: 
> https://github.com/apache/hbase/blob/0.89-fb/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java#L1224.
>  In this jira's scope I would just add the heartbeat tracking and add a unit 
> test for the wildcard address.
> What do you think [~rajesh23]?
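The heartbeat tracking proposed above can be sketched as a map of last check-in times with a periodic expiry sweep. This is an illustrative sketch only, not the actual `ServerManager` API; `HeartbeatTracker`, `onHeartbeat`, and `expireStaleServers` are hypothetical names:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of heartbeat-based expiry: record the last report time per server
// and expire any server that has not checked in within the timeout, so a
// server that registered but never created its znode still gets removed.
public class HeartbeatTracker {
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public HeartbeatTracker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public void onHeartbeat(String serverName, long now) {
        lastHeartbeat.put(serverName, now);
    }

    /** Remove servers whose last check-in is too old; return how many. */
    public int expireStaleServers(long now) {
        int expired = 0;
        Iterator<Map.Entry<String, Long>> it =
                lastHeartbeat.entrySet().iterator();
        while (it.hasNext()) {
            if (now - it.next().getValue() > timeoutMillis) {
                it.remove();
                expired++;
            }
        }
        return expired;
    }

    public boolean isOnline(String serverName) {
        return lastHeartbeat.containsKey(serverName);
    }
}
```

The trade-off raised later in the thread is that this duplicates liveness tracking that ZooKeeper ephemeral nodes already provide.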





[jira] [Commented] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863580#comment-13863580
 ] 

Hudson commented on HBASE-10078:


FAILURE: Integrated in HBase-0.94 #1253 (See 
[https://builds.apache.org/job/HBase-0.94/1253/])
HBASE-10078 Dynamic Filter - Not using DynamicClassLoader when using FilterList 
(jxiang: rev 1556027)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
* /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/Classes.java
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestGet.java


> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar loading 
> (https://issues.apache.org/jira/browse/HBASE-1936), but it seems to have an 
> issue with FilterList. 
> Here is some log from my app where I send a Get with a FilterList containing 
> an AFilter and another with a BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found so it tries with DynamicClassLoader, but when it tries 
> to load AFilter, it uses URLClassLoader and fails withou

[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863573#comment-13863573
 ] 

Cody Marcel commented on HBASE-10285:
-

Including [~enis] [~jesse_yates]

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.





[jira] [Updated] (HBASE-10078) Dynamic Filter - Not using DynamicClassLoader when using FilterList

2014-01-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10078:
--

Fix Version/s: (was: 0.96.2)
   (was: 0.98.0)

NP :)

This change only went into 0.94, right? (removed the other fix tags)

> Dynamic Filter - Not using DynamicClassLoader when using FilterList
> ---
>
> Key: HBASE-10078
> URL: https://issues.apache.org/jira/browse/HBASE-10078
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.13
>Reporter: Federico Gaule
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: 0.94-10078.patch, 0.94-10078_v2.patch, hbase-10078.patch
>
>
> I've tried to use dynamic jar loading 
> (https://issues.apache.org/jira/browse/HBASE-1936), but it seems to have an 
> issue with FilterList. 
> Here is some log from my app where I send a Get with a FilterList containing 
> an AFilter and another with a BFilter.
> {noformat}
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Class d.p.AFilter not found 
> - using dynamical class loader
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class: d.p.AFilter
> 2013-12-02 13:55:42,564 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Loading new jar files, if any
> 2013-12-02 13:55:42,677 DEBUG 
> org.apache.hadoop.hbase.util.DynamicClassLoader: Finding class again: 
> d.p.AFilter
> 2013-12-02 13:55:43,004 ERROR org.apache.hadoop.hbase.io.HbaseObjectWritable: 
> Can't find class d.p.BFilter
> java.lang.ClassNotFoundException: d.p.BFilter
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:247)
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:820)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.getClassByName(HbaseObjectWritable.java:792)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:679)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.readFields(FilterList.java:324)
>   at org.apache.hadoop.hbase.client.Get.readFields(Get.java:405)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at org.apache.hadoop.hbase.client.Action.readFields(Action.java:101)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:594)
>   at 
> org.apache.hadoop.hbase.client.MultiAction.readFields(MultiAction.java:116)
>   at 
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:690)
>   at 
> org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:126)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1311)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1226)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:748)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:539)
>   at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:514)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
> AFilter is not found so it tries with DynamicClassLoader, but when it tries 
> to load BFilter, it uses URLClassLoader and fails without checking for 
> dynamic jars.
> I think the issue is related to FilterList#readFields
> {code:title=FilterList.java|borderStyle=solid} 
>  public void readFields(final DataInput in) throws IOException {
> byte opByte = in.readByte();
> operator = Operator.values()[opByte];
> int size = in.readInt();
> if (size > 0) {
>   filters = new ArrayList<Filter>(size);
>   for (int i = 0; i < size; i++) {
>
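The shape of the fix (resolving filter classes through a loader that falls back to the dynamic class loader instead of stopping at the default one) can be sketched generically. `FallbackClassResolver` and `loadWithFallback` are illustrative names, not the HBase `Classes` utility:

```java
// Illustrative sketch of the class-resolution fix: try the default class
// loader first, and only if that fails, retry with a secondary loader that
// knows about dynamically deployed jars, before giving up.
public class FallbackClassResolver {
    private final ClassLoader dynamicLoader;

    public FallbackClassResolver(ClassLoader dynamicLoader) {
        this.dynamicLoader = dynamicLoader;
    }

    public Class<?> loadWithFallback(String className)
            throws ClassNotFoundException {
        try {
            return Class.forName(className);
        } catch (ClassNotFoundException e) {
            // The default loader cannot see dynamically added jars;
            // retry with the dynamic loader.
            return Class.forName(className, true, dynamicLoader);
        }
    }
}
```

The bug above is precisely the absence of the catch-and-retry step: `FilterList#readFields` resolved nested filter classes with the plain `URLClassLoader` path only.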

[jira] [Commented] (HBASE-10285) All for configurable policies in ChaosMonkey

2014-01-06 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863572#comment-13863572
 ] 

Cody Marcel commented on HBASE-10285:
-

The syntax would be "-policy EVERY_MINUTE_RANDOM_ACTION_POLICY". 
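A minimal sketch of what that option handling might look like, with the existing hard-coded policy kept as the default when no `-policy` flag is supplied. This is an illustration of the described behavior, not the actual ChaosMonkey code; `PolicyOption` and `parsePolicy` are hypothetical names:

```java
// Illustrative option parsing: use the policy named on the command line if
// one is given, otherwise fall back to the previously hard-coded default.
public class PolicyOption {
    static final String DEFAULT_POLICY = "EVERY_MINUTE_RANDOM_ACTION_POLICY";

    public static String parsePolicy(String[] args) {
        // Scan for "-policy <name>"; the value is the following argument.
        for (int i = 0; i < args.length - 1; i++) {
            if ("-policy".equals(args[i])) {
                return args[i + 1];
            }
        }
        return DEFAULT_POLICY;
    }
}
```

With this shape, existing invocations without the flag keep today's behavior, which matches the backward-compatible intent described in the issue.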

> All for configurable policies in ChaosMonkey
> 
>
> Key: HBASE-10285
> URL: https://issues.apache.org/jira/browse/HBASE-10285
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.16
>Reporter: Cody Marcel
>Assignee: Cody Marcel
>Priority: Minor
> Fix For: 0.94.16
>
> Attachments: HBASE-10285
>
>
> For command line runs of ChaosMonkey, we should be able to pass policies. 
> They are currently hard coded to EVERY_MINUTE_RANDOM_ACTION_POLICY. I have 
> made this policy the default, but now if you supply a policy as an option on 
> the command line, it will work.




